Q: RuntimeError: Unsuccessful TensorSliceReader constructor Failed to find any matching files for Tensorflow/workspace/models/myssd_mobnet/./model.ckpt-5

I am trying to do automatic number plate recognition training using Google Colab, and during the process I run these lines:

# Load pipeline config and build a detection model
configs = config_util.get_configs_from_pipeline_file(files['PIPELINE_CONFIG'])
detection_model = model_builder.build(model_config=configs['model'], is_training=False)

# Restore checkpoint
ckpt = tf.compat.v2.train.Checkpoint(model=detection_model)
ckpt.restore(os.path.join(paths['CHECKPOINT_PATH'], 'ckpt-5')).expect_partial()

@tf.function
def detect_fn(image):
    image, shapes = detection_model.preprocess(image)
    prediction_dict = detection_model.predict(image, shapes)
    detections = detection_model.postprocess(prediction_dict, shapes)
    return detections

I got the following error:

NotFoundError: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for Tensorflow/workspace/models/my_ssd_mobnet/./model.ckpt-5
NotFoundError: Error when restoring from checkpoint or SavedModel at Tensorflow/workspace/models/my_ssd_mobnet/./model.ckpt-5: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for Tensorflow/workspace/models/my_ssd_mobnet/./model.ckpt-5
Please double-check that the path is correct. You may be missing the checkpoint suffix (e.g. the '-1' in 'path/to/ckpt-1').

I tried to change the path, but I got the same error. Any suggestions?

A: It looks like you're encountering an error while trying to load a checkpoint file in TensorFlow. The error message indicates that TensorFlow was unable to find any matching files for the checkpoint you're trying to load (Tensorflow/workspace/models/my_ssd_mobnet/./model.ckpt-5).

There are a few possible reasons for this error. One possibility is that the path you're using to load the checkpoint file is incorrect. Make sure you're using the correct path to the checkpoint file and that the path is specified correctly in your code.

Another possibility is that the checkpoint file you're trying to load does not exist. This could happen if the checkpoint was not saved correctly, or if it has been moved or deleted. In this case, you will need to either restore the checkpoint file or train a new model and save a new checkpoint.

Finally, it's possible that the checkpoint you're trying to load is not compatible with the model you're using. This could happen if you've changed the model's architecture or hyperparameters since the checkpoint was saved. In this case, you will need to either retrain the model using the same architecture and hyperparameters as the checkpoint, or load a different checkpoint that is compatible with the current model.
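Since this error almost always means the checkpoint files are not where the restore path points, a quick sanity check is to look for the .index file that every TF2 checkpoint prefix leaves on disk. A minimal sketch (the directory and prefix names below are placeholders, not the asker's actual layout): a checkpoint saved as 'ckpt-5' is stored as ckpt-5.index plus ckpt-5.data-* shards, and restore() expects the bare prefix, not a file name.

```python
import glob
import os

def checkpoint_exists(checkpoint_prefix):
    """Return True if TensorFlow-style checkpoint files exist for this prefix.

    A checkpoint saved as 'ckpt-5' lives on disk as 'ckpt-5.index' plus
    'ckpt-5.data-00000-of-00001' shards; restore() takes the bare prefix.
    """
    return bool(glob.glob(checkpoint_prefix + ".index"))

# Hypothetical usage -- substitute your own CHECKPOINT_PATH:
# prefix = os.path.join("Tensorflow/workspace/models/my_ssd_mobnet", "ckpt-5")
# if not checkpoint_exists(prefix):
#     print("No files match", prefix + ".*", "- check CHECKPOINT_PATH")
```

If this reports no match, list the directory contents to see which ckpt-N prefixes were actually written during training.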
[ "automatic_license_plate_recognition", "object_detection", "training_data" ]
stackoverflow_0074677721_automatic_license_plate_recognition_object_detection_training_data.txt
Q: query in java change date format

I have a query written in Java code that groups some rows by date. It looks something like this:

protected String groupBy() {
    return " GROUP BY \"calendar_cte\".\"date\" ";
}

The date format returned is "time_date": "2022-11-14", and I want the date to look like: Nov, 14 2022. I am using the Spring Boot framework in Java. How can I do this? Can I use a parser?
A: Here is an example:

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class DateConverter {
    public static String convert(String input) throws ParseException {
        // Parse the input date string
        SimpleDateFormat inputFormat = new SimpleDateFormat("yyyy-MM-dd");
        Date date = inputFormat.parse(input);

        // Use a SimpleDateFormat to convert the date to the desired output format
        SimpleDateFormat outputFormat = new SimpleDateFormat("MMM, dd yyyy");
        return outputFormat.format(date);
    }

    public static void main(String[] args) throws ParseException {
        String output = convert("2022-11-14");
        System.out.println(output); // prints "Nov, 14 2022"
    }
}

Note that this code uses the java.text.SimpleDateFormat class to parse and format the date, and the java.util.Date class to represent the date itself. SimpleDateFormat.parse throws the checked ParseException, so the method must declare or handle it.

This code is just one possible solution, and there may be other ways to accomplish the same task. You may want to modify this code to suit your specific needs.
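For comparison only, the same parse-then-reformat pattern can be sketched in Python (this is not part of the Java answer; the function name is illustrative): parse with one format string, re-emit with another.

```python
from datetime import datetime

def convert_date(s):
    # Parse the ISO-style input, then re-format it as "Mon, DD YYYY".
    d = datetime.strptime(s, "%Y-%m-%d")
    return d.strftime("%b, %d %Y")

# convert_date("2022-11-14")  -> "Nov, 14 2022"
```

The two-step structure is identical to the Java version: one formatter for input, one for output.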
[ "java", "spring", "spring_boot" ]
stackoverflow_0074677055_java_spring_spring_boot.txt
Q: How do I make it so that when I click an icon it opens another page file in flutter

How do I make it so that when I click an icon it opens another page file in Flutter? I have these icons which, when you click them, redirect you to a URL. I want to make it so that when you click one specific icon, instead of opening a URL it opens another page file, acting like a Navigator.push. But when I add an onTap to my TaskCard I get an error. I had set pageUrl = "", but it didn't return anything, so I removed the required from pageUrl and changed it to this.pageUrl, and now I have this error: The parameter 'pageUrl' can't have a value of 'null' because of its type, but the implicit default value is 'null'. My code is like this:

import 'dart:ui';
import 'package:url_launcher/url_launcher.dart';
import '';
import 'dart:async';
import 'package:flutter/material.dart';
import 'package:schoolmanagement/nav_bar.dart';

class DinningScreen extends StatefulWidget {
  const DinningScreen({super.key});

  @override
  State<DinningScreen> createState() => _DinningState();
}

class _DinningState extends State<DinningScreen> {
  final GlobalKey<ScaffoldState> scaffoldKey = GlobalKey();

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      drawer: NavBar(),
      key: scaffoldKey,
      appBar: AppBar(...),
      body: Container(
        decoration: BoxDecoration(
          gradient: LinearGradient(
              colors: [Color(0xffF6FECE), Color(0xffB6C0C8)],
              begin: Alignment.bottomCenter,
              end: Alignment.topCenter,
              tileMode: TileMode.clamp),
        ),
        // Here we set the "Manage your ..." box and its properties
        padding: const EdgeInsets.all(12.0),
        child: SingleChildScrollView(
          child: Column(
            crossAxisAlignment: CrossAxisAlignment.start,
            children: [
              Container(...),
              SizedBox(height: 20.0),
              Text(
                "Sections",
                style: TextStyle(
                    fontSize: 20.0,
                    fontWeight: FontWeight.bold,
                    fontFamily: "SpaceGrotesk",
                    color: Colors.black),
              ),
              // Here we set the "Shortcuts"
              // If you click Teachers it will take you to the page where you can see the teachers'
              // names and availability alongside the subject they teach
              // If you click EduPage it takes you to EduPage
              // If you click Timetable it takes you to the timetable generator
              // If you click Messages it asks you to join a Messenger group chat of students of your class
              Row(
                children: [
                  Expanded(
                      child: TaskCard(
                    label: "Teachers",
                    pageUrl: "",
                  )),
                  Expanded(
                      child: TaskCard(
                    imageUrl: "assets/school-bag.png",
                    label: "EduPage",
                    pageUrl: "https://willowcosta.edupage.org",
                  )),
                  // This is what I want to change from going to a URL to another page
                  Expanded(
                      child: InkWell(
                    onTap: () {
                      Navigator.push(
                        context,
                        MaterialPageRoute(builder: (context) => HomeScreen()),
                      );
                    },
                    child: TaskCard(
                      imageUrl: "assets/timetable.png",
                      pageUrl: "",
                      label: "Timetable",
                    ),
                  )),
                  Expanded(
                      child: TaskCard(
                    imageUrl: "assets/message.png",
                    pageUrl: "https://www.messenger.com",
                    label: "Messages",
                  )),
                ],
              ),
              // Here we set the tasks that we have
              const SizedBox(height: 20.0),
              const Text(
                "You have 6 tasks for this week",
                style: TextStyle(
                    fontSize: 20.0,
                    fontWeight: FontWeight.bold,
                    fontFamily: "SpaceGrotesk",
                    color: Colors.black),
              ),
              const TaskContainer(),
              const TaskContainer(),
              const TaskContainer(),
              const TaskContainer(),
              const TaskContainer(),
              const TaskContainer(),
              const SizedBox(height: 100.0),
            ],
          ),
        ),
      ),

The TaskCard definition is here:

class TaskCard extends StatelessWidget {
  final String? imageUrl;
  final String? label;
  final String pageUrl;

  const TaskCard(
      {Key? key, this.imageUrl, required this.label, required this.pageUrl})
      : super(key: key);

  // Function to launch the selected url
  Future<void> goToWebPage(String urlString) async {
    final Uri _url = Uri.parse(urlString);
    if (!await launchUrl(_url)) {
      throw 'Could not launch $_url';
    }
  }

  @override
  Widget build(BuildContext context) {
    return Padding(
      // Here we set the properties of our sections (Teachers etc.)
      padding: const EdgeInsets.all(8.0),
      child: Column(
        children: [
          Container(
            height: 80.0,
            width: 76.1,
            decoration: BoxDecoration(
                color: Colors.white,
                borderRadius: BorderRadius.circular(20.0),
                boxShadow: [
                  BoxShadow(
                      color: Colors.grey, blurRadius: 2.0, spreadRadius: 0.5),
                ]),
            child: IconButton(
              onPressed: () async {
                if (pageUrl != "") {
                  await goToWebPage(pageUrl);
                }
              },
              icon: Image.asset(
                imageUrl ?? "assets/teacher.png",
                height: 75.0,
                width: 70.0,
              ),
            ),
          ),
          SizedBox(height: 10.0),
          Text(
            label ?? "",
            style: TextStyle(fontSize: 16.0),
          )
        ],
      ),
    );
  }
}

A: The parameter 'pageUrl' can't have a value of 'null' because of its type, but the implicit default value is 'null'.

Check whether pageUrl is an empty String. If it is an empty String, don't call goToWebPage:

onPressed: () async {
  if (pageUrl != "") {
    await goToWebPage(pageUrl);
  }
},
[ "dart", "flutter", "navigator", "ontap" ]
stackoverflow_0074677635_dart_flutter_navigator_ontap.txt
Q: Pandas: Better Way to Group By and Find Mean

I have a spreadsheet of stock prices for all companies, and I'd like to calculate the moving average more efficiently. As it stands I have some code that works, but it takes a pretty long time to run. I'm wondering what alternative ways there are to do the same thing more efficiently, or in a way that utilizes Pandas' strengths.

Here is the workflow I am trying to accomplish in my code: I first want to take the 20-day rolling/moving average for each company and add it as a column to the dataframe (sma_20). From there I want to count the number of days a stock's price was over this 20-day average. Finally, I want to convert this count into a percentage. For reference, there are 252 days in a trading year; I'd like to see, out of these 252 days, how many of them the stock was trading above its moving average.

prices_df['sma_20'] = prices_df.groupby('ticker').rolling(20)['closeadj'].mean().reset_index(0,drop=True)
prices_df['above_sma_20'] = np.where(prices_df.closeadj > prices_df.sma_20, 1, 0)
prices_df['above_sma_20_count'] = prices_df.groupby('ticker').rolling(252)['above_sma_20'].sum().reset_index(0,drop=True)
prices_df['above_sma_20_pct'] = prices_df['above_sma_20_count'] / 252

A: I would rearrange the data into an n(date) by m(ticker) array, and use NumPy to deal with the rolling mean. Given a df with 100 companies and 253 days from Yahoo Finance:

import pandas as pd
import numpy as np

df_n = df.to_numpy()
sma_20 = np.cumsum(df_n, dtype=float, axis=0)
sma_20[20:] = sma_20[20:] - sma_20[:-20]
sma_20[19:] = sma_20[19:] / 20
sma_20[:19] = sma_20[:19] / np.arange(1, 20)[:, None]

print(sum(df_n > sma_20)/len(df_n))
>>>
[0.41897233 0.61660079 0.7312253  0.71936759 0.74703557 0.743083
 0.52964427 0.53359684 0.52964427 0.45849802 0.64031621 0.63241107
 0.59683794 0.66798419 0.77470356 0.56521739 0.64426877 0.60869565
 0.46640316 0.45059289 0.61660079 0.743083   0.69565217 0.56916996
 0.63241107 0.69565217 0.55731225 0.6284585  0.60869565 0.66798419
 0.59683794 0.56126482 0.62055336 0.65612648 0.54150198 0.46245059
 0.62055336 0.54545455 0.54545455 0.68379447 0.59683794 0.50988142
 0.81422925 0.65217391 0.60869565 0.66798419 0.56126482 0.57312253
 0.74703557 0.64822134 0.44664032 0.67588933 0.6284585  0.61264822
 0.60474308 0.50197628 0.58498024 0.54545455 0.65612648 0.61660079
 0.66007905 0.64822134 0.60869565 0.58893281 0.68774704 0.66403162
 0.50988142 0.62055336 0.4743083  0.53754941 0.60869565 0.62055336
 0.60869565 0.743083   0.43873518 0.6916996  0.71936759 0.61264822
 0.59288538 0.49011858 0.58102767 0.5256917  0.59288538 0.45454545
 0.49407115 0.55335968 0.49011858 0.64031621 0.6798419  0.54150198
 0.59683794 0.67588933 0.56126482 0.60474308 0.45454545 0.61264822
 0.56521739 0.48221344 0.40711462 0.68379447]

Assign the probability and corresponding company to a new dataframe:

df_result = pd.DataFrame(sum(df_n > sma_20)/len(df_n), columns=['probability'])
df_result['company'] = df.columns
df_result = df_result.sort_values(by='probability', ascending=False).reset_index(drop=True)
df_result
###
    probability company
0      0.814229    FTNT
1      0.774704    ASML
2      0.747036    INTU
3      0.747036   GOOGL
4      0.743083    AVGO
..          ...     ...
95     0.450593    BIIB
96     0.446640      JD
97     0.438735    PCAR
98     0.418972    ATVI
99     0.407115      ZM

[100 rows x 2 columns]
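The cumulative-sum trick in the answer above is what removes the per-window work: a window sum is just the difference of two cumulative sums, so each output element costs one subtraction and one division. A minimal pure-Python sketch of the same idea (function name is illustrative), including an expanding mean over the first window-1 entries, matching the answer's handling of the first 19 days:

```python
from itertools import accumulate

def rolling_mean(values, window):
    # Cumulative sums with a leading 0, so sum(values[lo:i]) == cums[i] - cums[lo].
    cums = [0.0] + list(accumulate(values))
    out = []
    for i in range(1, len(values) + 1):
        lo = max(0, i - window)          # expanding window until it is full
        out.append((cums[i] - cums[lo]) / (i - lo))
    return out

# rolling_mean([1, 2, 3, 4], 2)  -> [1.0, 1.5, 2.5, 3.5]
```

In NumPy the same subtraction is vectorized across all dates and tickers at once, which is where the speedup over groupby().rolling() comes from.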
[ "pandas", "python" ]
stackoverflow_0074668648_pandas_python.txt
Q: Optimizing gaussian heatmap generation

I have a set of 68 keypoints (size [68, 2]) that I am mapping to Gaussian heatmaps. To do this, I have the following function:

def generate_gaussian(t, x, y, sigma=10):
    """
    Generates a 2D Gaussian point at location x,y in tensor t.
    x should be in range (-1, 1).
    sigma is the standard deviation of the generated 2D Gaussian.
    """
    h, w = t.shape

    # Heatmap pixel per output pixel
    mu_x = int(0.5 * (x + 1.) * w)
    mu_y = int(0.5 * (y + 1.) * h)

    tmp_size = sigma * 3

    # Top-left
    x1, y1 = int(mu_x - tmp_size), int(mu_y - tmp_size)

    # Bottom right
    x2, y2 = int(mu_x + tmp_size + 1), int(mu_y + tmp_size + 1)
    if x1 >= w or y1 >= h or x2 < 0 or y2 < 0:
        return t

    size = 2 * tmp_size + 1
    tx = np.arange(0, size, 1, np.float32)
    ty = tx[:, np.newaxis]
    x0 = y0 = size // 2

    # The gaussian is not normalized, we want the center value to equal 1
    g = torch.tensor(np.exp(- ((tx - x0) ** 2 + (ty - y0) ** 2) / (2 * sigma ** 2)))

    # Determine the bounds of the source gaussian
    g_x_min, g_x_max = max(0, -x1), min(x2, w) - x1
    g_y_min, g_y_max = max(0, -y1), min(y2, h) - y1

    # Image range
    img_x_min, img_x_max = max(0, x1), min(x2, w)
    img_y_min, img_y_max = max(0, y1), min(y2, h)

    t[img_y_min:img_y_max, img_x_min:img_x_max] = \
        g[g_y_min:g_y_max, g_x_min:g_x_max]

    return t

def rescale(a, img_size):
    # scale tensor to [-1, 1]
    return 2 * a / img_size[0] - 1

My current code uses a for loop to compute the Gaussian heatmap for each of the 68 keypoint coordinates, then stacks the resulting tensors to create a [68, H, W] tensor:

x_k1 = [generate_gaussian(torch.zeros(H, W), x, y) for x, y in rescale(kp1.numpy(), frame.shape)]
x_k1 = torch.stack(x_k1, dim=0)

However, this method is super slow. Is there some way that I can do this without a for loop?

Edit: I tried @Cris Luengo's proposal to compute a 1D Gaussian:

def generate_gaussian1D(t, x, y, sigma=10):
    h, w = t.shape

    # Heatmap pixel per output pixel
    mu_x = int(0.5 * (x + 1.) * w)
    mu_y = int(0.5 * (y + 1.) * h)

    tmp_size = sigma * 3

    # Top-left
    x1, y1 = int(mu_x - tmp_size), int(mu_y - tmp_size)

    # Bottom right
    x2, y2 = int(mu_x + tmp_size + 1), int(mu_y + tmp_size + 1)
    if x1 >= w or y1 >= h or x2 < 0 or y2 < 0:
        return t

    size = 2 * tmp_size + 1
    tx = np.arange(0, size, 1, np.float32)
    ty = tx[:, np.newaxis]
    x0 = y0 = size // 2

    g = torch.tensor(np.exp(-np.power(tx - mu_x, 2.) / (2 * np.power(sigma, 2.))))
    g = g * g[:, None]

    g_x_min, g_x_max = max(0, -x1), min(x2, w) - x1
    g_y_min, g_y_max = max(0, -y1), min(y2, h) - y1

    img_x_min, img_x_max = max(0, x1), min(x2, w)
    img_y_min, img_y_max = max(0, y1), min(y2, h)

    t[img_y_min:img_y_max, img_x_min:img_x_max] = \
        g[g_y_min:g_y_max, g_x_min:g_x_max]

    return t

but my output ends up being an incomplete Gaussian. I'm not sure what I'm doing wrong. Any help would be appreciated.

A: You generate an NxN array g with a Gaussian centered on its center pixel. N is computed such that it extends by 3*sigma from that center pixel. This is the fastest way to build such an array:

tmp_size = sigma * 3
tx = np.arange(1, tmp_size + 1, 1, np.float32)
g = np.exp(-(tx**2) / (2 * sigma**2))
g = np.concatenate((np.flip(g), [1], g))
g = g * g[:, None]

What we're doing here is compute half a 1D Gaussian. We don't even bother computing the value of the Gaussian for the middle pixel, which we know will be 1. We then build the full 1D Gaussian by flipping our half-Gaussian and concatenating. Finally, the 2D Gaussian is built by the outer product of the 1D Gaussian with itself.

We could shave a bit of extra time by building a quarter of the 2D Gaussian, then concatenating four rotated copies of it. But the difference in computational cost is not very large, and this is much simpler. Note that np.exp is the most expensive operation here by far, so just by minimizing how often we call it we significantly reduce the computational cost.

However, the best way to speed up the complete code is to compute the array g only once, rather than anew for each key point. Note how your sigma doesn't change, so all the arrays g that are computed are identical. If you compute it only once, it no longer matters which method you use to compute it, since this will be a minimal portion of the total program anyway.

You could, for example, have a global variable _gaussian to hold your array, and have your function compute it only the first time it is called. Or you could separate your function into two functions, one that constructs this array and one that copies it into an image, and call them as follows:

g = create_gaussian(sigma=3)
x_k1 = [
    copy_gaussian(torch.zeros(H, W), x, y, g)
    for x, y in rescale(kp1.numpy(), frame.shape)
]

On the other hand, you're likely best off using existing functionality. For example, DIPlib has a function dip.DrawBandlimitedPoint() [disclosure: I'm an author] that adds a Gaussian blob to an image. Likely you'll find similar functions in other libraries.
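The flip-and-concatenate construction described in the answer can be sketched without NumPy as well; a minimal pure-Python version (function names are illustrative) that builds half the 1D profile, mirrors it around the center value 1, and takes the outer product:

```python
import math

def gaussian_1d(sigma):
    # Half of the 1D profile, excluding the center pixel (whose value is 1),
    # extending 3*sigma to each side -- the same flip-and-concatenate trick.
    half = [math.exp(-(t * t) / (2.0 * sigma * sigma))
            for t in range(1, 3 * sigma + 1)]
    return half[::-1] + [1.0] + half

def gaussian_2d(sigma):
    # Outer product of the 1D profile with itself gives the 2D Gaussian.
    g = gaussian_1d(sigma)
    return [[a * b for b in g] for a in g]
```

Computed once per sigma, the resulting (6*sigma + 1)-square table can then be copied into each heatmap at the keypoint location, which is the "compute g only once" advice above.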
[ "computer_vision", "python", "pytorch" ]
stackoverflow_0074666177_computer_vision_python_pytorch.txt
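The fast Gaussian construction described in the answer above (compute half a 1D Gaussian, mirror it around the known center value of 1, then take the outer product) can be written out as a runnable sketch. The function name and `sigma=3` are illustrative choices, not part of the original answer:

```python
import numpy as np

def make_gaussian(sigma):
    # Half of a 1D Gaussian, excluding the center pixel (whose value is 1).
    tmp_size = sigma * 3
    tx = np.arange(1, tmp_size + 1, 1, np.float32)
    g = np.exp(-(tx ** 2) / (2 * sigma ** 2))
    # Full 1D Gaussian: mirrored half + center + half.
    g = np.concatenate((np.flip(g), [1.0], g))
    # 2D Gaussian as the outer product of the 1D Gaussian with itself.
    return g * g[:, None]

g = make_gaussian(sigma=3)
```

With `sigma=3` this yields a 19x19 array whose center value is exactly 1, and `np.exp` is called only once on a 9-element vector instead of on a full 2D grid.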
Q: IntelliJ IDEA not recognizing classes specified in Maven dependencies I have a project in IntelliJ IDEA which was created with Maven. I then specified a set of dependencies and external repositories in the Pom.xml file. The project builds fine on command line if I do mvn install. When I open any of the code files in the IDE though it says all the classes handled by Maven dependencies aren't recognized - as it would for a normal project if I never added the required JARs to the build path. I know in my Eclipse Maven projects (rather than IntelliJ) it usually shows an extra directory on the left which says "Maven Dependencies" and lists the JARs pulled in via maven. I don't see that here. What am I doing wrong? Here's what my screen looks like: A: Right click on the pom.xml -> Add as Maven project -> Reimport A: For some reason re-import did not do the trick. After looking at this: http://www.jetbrains.com/idea/webhelp/maven-importing.html I set auto-import and then the problem went away though; hopefully it helps someone else. Thanks for the ideas everyone :). A: After installing IntelliJ IDEA on a new computer I found myself with the same issue. I had to update the remote maven repository. (Settings > Maven > Repositories) Both local and remote repos needed to be updated. The remote one wasn't updated ever before this. After a restart everything worked fine. You might have to reimport your project. A: You could go to: File > Settings > Build, Execution, Deployment > Build Tools > Maven and check if your pom.xml is not in the "Ignored Files" list. A: I was running into similar issues. For me it ended up being that I was importing the project incorrectly. 
I had been doing import project <navigate to existing project and click top level directory> follow the wizard What solved my maven problems was to do import project <navigate to existing project and click the pom.xml follow the wizard A: For me File>>Invalidate Cache/Restart>>Invalidate and Restart worked in IntelliJ A: Idea cannot download all dependent jar packages using maven,try the following operations: mvn -U idea:idea then all the dependent jar packages are download from the maven repository A: A simple reimport and/or update of the repositories via Intellij did not do the trick for me. Instead I had to delete the complete ~/.m2/repository directory and let maven sort everything out by itself. Afterwards Maven -> Reimport finished it off. A: I've encountered a similar issue after refactoring my maven project into different modules. Re-importing on it's own usually doesn't work, but I've found that deleting the .iml files and then re-importing usually does the trick. A: Ran into the "same" issue some days ago. It might not be related as my issue was more specific to Spring boot but as I was struggling with it and tried every solution on this post I'm going to share my experience. What I was trying to do is to add one of my spring boot app into another project as a maven dependency. The dependency was resolved but I couldn't access my classes. When no packaging is declared, Maven assumes the default packaging is JAR. The JAR generated by the Spring Boot Maven Plugin overrides the default one generated by Maven. The solution was: The solution that we found is to generate another JAR which will be used as a dependency to be imported from other projects. The full article which helped me solve my issue. Hope it helps someone. A: For reasons I don't understand, in my case, I needed turn on setting "Always update snapshots" in Build, Executions, Deployment > Build Tools > Maven. That made IntelliJ redownload dependencies and make it work. 
A: In my case the problem was that the project was in maven2 while intellj was configured for maven3. Switching to maven2 in settings solved the problem A: Might be useful to others that were still stuck like me. None of the suggested fix worked. Actually, not before I fixed my main problem which was the installation location of maven. In my case, I did not use the standard location. Changing that location in the maven settings (Settings/Maven/Maven home repository) did the trick. My 2 cents. A: Cache is causing problems! Make sure to do the following: In your terminal, go to project/module: mvn clean install In your IntelliJ: File > Invalidate Caches > Invalidate Right click on project/module > Maven > Reimport A: For my case I should have checked the work offline Go to File>Settings >Build, Execution, Deployment >Build tools>Maven Then check Work Offline A: Worked for me: mvn -U idea:idea Since mvn -U updates the dependencies, check what mvn -U does: https://stackoverflow.com/a/26224957/6150881 Before this I have tried following steps but these have not helped:- Deleted .idea and .iml file Invalidate cache and restart Maven -> Reimport . A: This happened to me when I had mistakenly set my IntelliJ to power saving mode. Power Saving mode is displayed by battery icon with half empty charge. Disabling that fixed the problem. A: This also happened to me after upgrading Intellij to 1.4.15. I tried to re-import the whole project with same result, but enabling Maven Auto Import did the tricks. A: Looks like there are several, valid reasons why intelliJ would ignore a pom file. None of the previous answers worked in my case, so here's what did work, in case someone else runs into this issue: In this example, module3 was being completely ignored by IntelliJ. The pom.xml in that directory wasn't even being treated as a maven pom. 
My project structure is like this: myProject module1 module2 module3 BUT, my (simplified) pom structure is like this: <project> <modelVersion>4.0.0</modelVersion> <groupId>devs</groupId> <artifactId>myProject</artifactId> <version>0.0-SNAPSHOT</version> <packaging>pom</packaging> <name>myProject</name> <modules> <module>module1</module> <module>module2</module> <modules> <profiles> <profile> <id>CompleteBuildProfile</id> <modules> <module>module3</module> </modules> </profile> </profiles> </project> To fix this, I modified the root <modules> element to add in module3 temporarily. <modules> <module>module1</module> <module>module2</module> <module>module3</module> <modules> Next re-import the root pom, and IntelliJ will add the module. When that's done, revert the pom. IntelliJ will ask if you also want to remove module3 from the project structure. Click No. Bam! Done. Module3 works and I can run my Integration tests from IntelliJ again :D A: The problem was caused for me by selecting the project directory to be Imported when first starting IntelliJ rather than the pom.xml file for the project. Closing the problem project and then following the Import process again but choosing the pom.xml resulted in a fully working project in the IDE. A: For me the problem seems to be a conflict with the maven helper plugin (https://plugins.jetbrains.com/plugin/7179?pr=idea). I disable it, and it works again :) A: Go to File > Settings > Build, Execution, Deployment > Build Tools > Maven and check the Maven home directory. This should be the same maven installation used for command line A: For me, what did the trick was to add the dependencies in File > Project Settings > Modules > Dependencies. A: Just delete your project's .idea folder and re-import your project in IntelliJ. 
A: If you have any dependencies in pom.xml specific to your organisation than you need to update path of setting.xml for your project which is by default set to your user directory in Ubuntu : /home/user/.m2/settings.xml -> (change it to your apache-maven conf path) Go to -> intellij settings -> build,Execution, Deployement -> Build Tools -> Maven -> User settings file A: Restart, Invalid caches, outside building, none worked for me. Only Reimport worked finally. For others sake, putting it as answer: Right click the project > Maven > Reimport A: While importing a New project : 1.To identify all the modules in a project as maven modules: File --->New Project Settings -->Build Execution deployment -->build tools --> maven ---> importing ---> enable "search for projects recursively" A: Option1: Right-click on the main project folder => Add Framework Support => Check Maven option Option2: right-click on the pom.xml file and click on "Add as a maven project" A: This happened when I was upgrading from Java from 8 to 11 and Spring version. All the dependencies in the maven section disappeared as if no pom file existed. Was able to find the issue by doing mvn clean It showed me that one of the dependencies was missing version tag and it needed one. <dependency> <groupId>com.googlecode.json-simple</groupId> <artifactId>json-simple</artifactId> </dependency> After adding version to the above dependency it started showing up all the dependencies under maven section. A: In my case the my maven home path was pointing to Bundled Maven 3 instead of where my .m2 folder was located, fixed it by going to File > Settings > Build, Execution and Deployment > Maven > Maven home path and adding C:/Program Files/apache-maven-3.5.4 A: I have tried a lot of things and ended up adding the dependencies through the project settings>libraries section. If nothing else works it does the trick.
IntelliJ IDEA not recognizing classes specified in Maven dependencies
I have a project in IntelliJ IDEA which was created with Maven. I then specified a set of dependencies and external repositories in the Pom.xml file. The project builds fine on command line if I do mvn install. When I open any of the code files in the IDE though it says all the classes handled by Maven dependencies aren't recognized - as it would for a normal project if I never added the required JARs to the build path. I know in my Eclipse Maven projects (rather than IntelliJ) it usually shows an extra directory on the left which says "Maven Dependencies" and lists the JARs pulled in via maven. I don't see that here. What am I doing wrong? Here's what my screen looks like:
[ "Right click on the pom.xml -> Add as Maven project -> Reimport\n\n", "For some reason re-import did not do the trick. After looking at this:\nhttp://www.jetbrains.com/idea/webhelp/maven-importing.html\nI set auto-import and then the problem went away though; hopefully it helps someone else. Thanks for the ideas everyone :).\n", "After installing IntelliJ IDEA on a new computer I found myself with the same issue.\nI had to update the remote maven repository. (Settings > Maven > Repositories)\n\nBoth local and remote repos needed to be updated. The remote one wasn't updated ever before this. After a restart everything worked fine. You might have to reimport your project.\n", "You could go to:\nFile > Settings > Build, Execution, Deployment > Build Tools > Maven\nand check if your pom.xml is not in the \"Ignored Files\" list.\n", "I was running into similar issues. For me it ended up being that I was importing the project incorrectly. I had been doing \nimport project\n <navigate to existing project and click top level directory>\n follow the wizard\n\nWhat solved my maven problems was to do\nimport project\n <navigate to existing project and click the pom.xml\n follow the wizard\n\n", "For me File>>Invalidate Cache/Restart>>Invalidate and Restart worked in IntelliJ\n", "Idea cannot download all dependent jar packages using maven,try the following operations:\nmvn -U idea:idea\nthen all the dependent jar packages are download from the maven repository\n", "A simple reimport and/or update of the repositories via Intellij did not do the trick for me.\nInstead I had to delete the complete ~/.m2/repository directory and let maven sort everything out by itself. Afterwards Maven -> Reimport finished it off.\n", "I've encountered a similar issue after refactoring my maven project into different modules. 
Re-importing on it's own usually doesn't work, but I've found that deleting the .iml files and then re-importing usually does the trick.\n", "Ran into the \"same\" issue some days ago. It might not be related as my issue was more specific to Spring boot but as I was struggling with it and tried every solution on this post I'm going to share my experience.\nWhat I was trying to do is to add one of my spring boot app into another project as a maven dependency. The dependency was resolved but I couldn't access my classes. \n\nWhen no packaging is declared, Maven assumes the default packaging is JAR.\n The JAR generated by the Spring Boot Maven Plugin overrides the default one generated by Maven.\n\nThe solution was:\n\nThe solution that we found is to generate another JAR which will be used as a dependency to be imported from other projects.\n\nThe full article which helped me solve my issue. \nHope it helps someone.\n", "For reasons I don't understand, in my case, I needed turn on setting \"Always update snapshots\" in Build, Executions, Deployment > Build Tools > Maven.\nThat made IntelliJ redownload dependencies and make it work.\n", "In my case the problem was that the project was in maven2 while intellj was configured for maven3. Switching to maven2 in settings solved the problem\n", "Might be useful to others that were still stuck like me. \nNone of the suggested fix worked. Actually, not before I fixed my main problem which was the installation location of maven.\nIn my case, I did not use the standard location. Changing that location in the maven settings (Settings/Maven/Maven home repository) did the trick.\nMy 2 cents.\n", "Cache is causing problems! 
Make sure to do the following:\nIn your terminal, go to project/module: \nmvn clean install\n\nIn your IntelliJ: \n\nFile > Invalidate Caches > Invalidate\nRight click on project/module > Maven > Reimport\n\n", "For my case I should have checked the work offline\nGo to File>Settings >Build, Execution, Deployment >Build tools>Maven \nThen check Work Offline\n", "Worked for me:\nmvn -U idea:idea\n\nSince mvn -U updates the dependencies, check what mvn -U does: https://stackoverflow.com/a/26224957/6150881\nBefore this I have tried following steps but these have not helped:-\n\nDeleted .idea and .iml file\nInvalidate cache and restart\nMaven -> Reimport .\n\n", "This happened to me when I had mistakenly set my IntelliJ to power saving mode. Power Saving mode is displayed by battery icon with half empty charge. Disabling that fixed the problem.\n", "This also happened to me after upgrading Intellij to 1.4.15. I tried to re-import the whole project with same result, but enabling Maven Auto Import did the tricks.\n", "Looks like there are several, valid reasons why intelliJ would ignore a pom file.\nNone of the previous answers worked in my case, so here's what did work, in case someone else runs into this issue:\nIn this example, module3 was being completely ignored by IntelliJ. 
The pom.xml in that directory wasn't even being treated as a maven pom.\nMy project structure is like this:\nmyProject\n module1\n module2\n module3\n\nBUT, my (simplified) pom structure is like this:\n<project>\n <modelVersion>4.0.0</modelVersion>\n <groupId>devs</groupId>\n <artifactId>myProject</artifactId>\n <version>0.0-SNAPSHOT</version>\n <packaging>pom</packaging>\n <name>myProject</name>\n\n <modules>\n <module>module1</module>\n <module>module2</module>\n <modules>\n\n <profiles>\n <profile>\n <id>CompleteBuildProfile</id>\n <modules>\n <module>module3</module>\n </modules>\n </profile>\n </profiles>\n</project>\n\nTo fix this, I modified the root <modules> element to add in module3 temporarily.\n <modules>\n <module>module1</module>\n <module>module2</module>\n <module>module3</module>\n <modules>\n\nNext re-import the root pom, and IntelliJ will add the module.\nWhen that's done, revert the pom. IntelliJ will ask if you also want to remove module3 from the project structure. Click No.\nBam! Done. Module3 works and I can run my Integration tests from IntelliJ again :D\n", "The problem was caused for me by selecting the project directory to be Imported when first starting IntelliJ rather than the pom.xml file for the project.\nClosing the problem project and then following the Import process again but choosing the pom.xml resulted in a fully working project in the IDE.\n", "For me the problem seems to be a conflict with the maven helper plugin (https://plugins.jetbrains.com/plugin/7179?pr=idea). \nI disable it, and it works again :)\n", "Go to \nFile > Settings > Build, Execution, Deployment > Build Tools > Maven\nand check the Maven home directory. 
This should be the same maven installation used for command line\n", "For me, what did the trick was to add the dependencies in File > Project Settings > Modules > Dependencies.\n", "Just delete your project's .idea folder and re-import your project in IntelliJ.\n", "If you have any dependencies in pom.xml specific to your organisation than you need to update path of setting.xml for your project which is by default set to your user directory in Ubuntu : /home/user/.m2/settings.xml -> (change it to your apache-maven conf path) \nGo to -> intellij settings -> build,Execution, Deployement -> Build Tools -> Maven -> User settings file\n\n", "Restart, Invalid caches, outside building, none worked for me.\nOnly Reimport worked finally. For others sake, putting it as answer:\nRight click the project > Maven > Reimport\n\n", "While importing a New project :\n1.To identify all the modules in a project as maven modules:\nFile --->New Project Settings -->Build Execution deployment -->build tools --> maven ---> importing ---> enable \"search for projects recursively\"\n", "Option1: Right-click on the main project folder => Add Framework Support => Check Maven option\nOption2: right-click on the pom.xml file and click on \"Add as a maven project\"\n", "This happened when I was upgrading from Java from 8 to 11 and Spring version. All the dependencies in the maven section disappeared as if no pom file existed. 
Was able to find the issue by doing\nmvn clean\n\nIt showed me that one of the dependencies was missing version tag and it needed one.\n<dependency>\n <groupId>com.googlecode.json-simple</groupId>\n <artifactId>json-simple</artifactId>\n</dependency>\n\nAfter adding version to the above dependency it started showing up all the dependencies under maven section.\n", "In my case the my maven home path was pointing to Bundled Maven 3 instead of where my .m2 folder was located, fixed it by going to File > Settings > Build, Execution and Deployment > Maven > Maven home path and adding C:/Program Files/apache-maven-3.5.4\n", "I have tried a lot of things and ended up adding the dependencies through the project settings>libraries section. If nothing else works it does the trick.\n" ]
[ 53, 33, 32, 25, 20, 15, 6, 4, 4, 3, 2, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "intellij_idea", "java", "maven" ]
stackoverflow_0015046764_intellij_idea_java_maven.txt
Q: React.cloneElement with deeply nested component it's very complicated to explain the whole use-case here because I have deeply nested component, but I will try to show the concept. How to display age from parent in OneMoreNestedChild without ContextApi, is it possible in the following example ? Codesandbox import React from "react"; import "./styles.css"; const OneMoreNestedChild = ({ age }) => { return ( <> <p>One more nested child</p> Age: {age} </> ); }; const NestedChild = ({ children }) => { return ( <> <p>Nested children</p> {children} </> ); }; const Child = ({ children }) => { return ( <> <p>Child</p> {children} </> ); }; const Parent = ({ children }) => { const newChildren = React.Children.map(children, (child) => React.cloneElement(child, { age: 1 }) ); return <div>{newChildren}</div>; }; export default function App() { return ( <div className="App"> <Parent> <Child> <NestedChild> <OneMoreNestedChild /> </NestedChild> </Child> </Parent> </div> ); } A: import React from "react"; const OneMoreNestedChild = React.forwardRef(({ age }, ref) => { return ( <> <p>One more nested child</p> Age: {age} </> ); }); const NestedChild = ({ children }) => { return ( <> <p>Nested children</p> {children} </> ); }; const Child = ({ children }) => { return ( <> <p>Child</p> {children} </> ); }; const Parent = ({ children }) => { return ( <NestedChild> {React.cloneElement(children, { age: 1 })} </NestedChild> ); }; In this example, the Parent component uses React.cloneElement() to update the age prop of the OneMoreNestedChild component and pass it down to the NestedChild component. However, since the OneMoreNestedChild component has been wrapped in a React.forwardRef() call, the Parent component can pass the age prop directly to the OneMoreNestedChild component, without creating a new component instance or causing intermediate components to re-render.
React.cloneElement with deeply nested component
it's very complicated to explain the whole use-case here because I have deeply nested component, but I will try to show the concept. How to display age from parent in OneMoreNestedChild without ContextApi, is it possible in the following example ? Codesandbox import React from "react"; import "./styles.css"; const OneMoreNestedChild = ({ age }) => { return ( <> <p>One more nested child</p> Age: {age} </> ); }; const NestedChild = ({ children }) => { return ( <> <p>Nested children</p> {children} </> ); }; const Child = ({ children }) => { return ( <> <p>Child</p> {children} </> ); }; const Parent = ({ children }) => { const newChildren = React.Children.map(children, (child) => React.cloneElement(child, { age: 1 }) ); return <div>{newChildren}</div>; }; export default function App() { return ( <div className="App"> <Parent> <Child> <NestedChild> <OneMoreNestedChild /> </NestedChild> </Child> </Parent> </div> ); }
[ "import React from \"react\";\n\nconst OneMoreNestedChild = React.forwardRef(({ age }, ref) => {\n return (\n <>\n <p>One more nested child</p>\n Age: {age}\n </>\n );\n});\n\nconst NestedChild = ({ children }) => {\n return (\n <>\n <p>Nested children</p>\n {children}\n </>\n );\n};\n\nconst Child = ({ children }) => {\n return (\n <>\n <p>Child</p>\n {children}\n </>\n );\n};\n\nconst Parent = ({ children }) => {\n return (\n <NestedChild>\n {React.cloneElement(children, { age: 1 })}\n </NestedChild>\n );\n};\n\nIn this example, the Parent component uses React.cloneElement() to update the age prop of the OneMoreNestedChild component and pass it down to the NestedChild component. However, since the OneMoreNestedChild component has been wrapped in a React.forwardRef() call, the Parent component can pass the age prop directly to the OneMoreNestedChild component, without creating a new component instance or causing intermediate components to re-render.\n" ]
[ 0 ]
[]
[]
[ "clone_element", "reactjs" ]
stackoverflow_0074677560_clone_element_reactjs.txt
Q: typescript typeguard make type of instance never (unexpected behavior) When type checking is performed through type guard, it become never type outside the conditional statement. You can test here class Fruite { num: number; constructor(num: number) { this.num = num; } } class Apple extends Fruite { constructor(num: number) { super(num); } } class Banana extends Fruite { constructor(num: number) { super(num); } } const items = [new Apple(1), new Banana(10)]; const clone = (items: Fruite[]) => { return items.map(item => { if (item instanceof Apple) { return new Apple(item.num); } else if (item instanceof Banana) { return new Banana(item.num); // works well } return new Banana(item.num); // works well }); } // ---- ERROR PART ---- const isApple = (item: Fruite): item is Apple => { return item instanceof Apple; } const isBanana = (item: Fruite): item is Banana => { return item instanceof Banana; } const clone2 = (items: Fruite[]) => { return items.map(item => { if (isApple(item)) { return new Apple(item.num); } else if (isBanana(item)) { return new Banana(item.num); // item is never ! } return new Banana(item.num); // item is never ! }); } Why does the typescript determine the item as never? A: The problem is that Apple and Banana are structurally identical, and since the TypeScript type system is largely structural and not nominal, it considers the two types equivalent. That is, the fact that Apple and Banana have two different declarations does not mean that TypeScript considers them to be different types. So if you write code where types are checked structurally, the compiler will decide that if something is an Apple then it must be a Banana and vice versa. In your clone function, you are directly performing an instanceof type guard check. And instanceof narrowing compares types nominally and not structurally, because that's usually what people want. 
On the other hand, when you refactored this check into user-defined type guard functions isApple() and isBanana(), such nominal narrowing only happens inside the bodies of these functions. The return type of the function, item is Apple or item is Banana doesn't convey any such special nominal flavor to the callers, so the type checker does the standard structural type check. And so, inside clone2, the compiler thinks it is impossible for item to be a Banana if it is not an Apple. And you get an error. (Yes, these two forms of type checking aren't fully consistent with each other. This is intentional; see microsoft/TypeScript#33481 for more information.) Anyway, generally speaking, you'll have better results if you make sure any two types that are meant to be different should differ structurally, by adding differing members to them. For example: class Apple extends Fruite { readonly type = "Apple"; constructor(num: number) { super(num); } } class Banana extends Fruite { readonly type = "Banana"; constructor(num: number) { super(num); } } Now both classes have a type property of differing string literal type. This clears up the problem with clone2(): const clone2 = (items: Fruite[]) => { return items.map(item => { if (isApple(item)) { return new Apple(item.num); } else if (isBanana(item)) { return new Banana(item.num); // okay } return new Banana(item.num); // okay }); } Again, just about any distinguishing property will work. For classes you can add a private or protected member and as long as they have separate declarations the classes will be considered distinct (that is, private/protected properties are compared nominally and not structurally): class Apple extends Fruite { private prop: undefined; constructor(num: number) { super(num); } } class Banana extends Fruite { private prop: undefined; constructor(num: number) { super(num); } } It's up to you how you want to distinguish your classes. 
Ideally they would actually have distinct structures naturally, because you're using them for different purposes. Maybe a Banana has a ripeness property and a removeFromBunch() method, while an Apple has a variety property and a keepTheDoctorAway() method. But that depends on your use cases. Playground link to code
typescript typeguard make type of instance never (unexpected behavior)
When type checking is performed through type guard, it become never type outside the conditional statement. You can test here class Fruite { num: number; constructor(num: number) { this.num = num; } } class Apple extends Fruite { constructor(num: number) { super(num); } } class Banana extends Fruite { constructor(num: number) { super(num); } } const items = [new Apple(1), new Banana(10)]; const clone = (items: Fruite[]) => { return items.map(item => { if (item instanceof Apple) { return new Apple(item.num); } else if (item instanceof Banana) { return new Banana(item.num); // works well } return new Banana(item.num); // works well }); } // ---- ERROR PART ---- const isApple = (item: Fruite): item is Apple => { return item instanceof Apple; } const isBanana = (item: Fruite): item is Banana => { return item instanceof Banana; } const clone2 = (items: Fruite[]) => { return items.map(item => { if (isApple(item)) { return new Apple(item.num); } else if (isBanana(item)) { return new Banana(item.num); // item is never ! } return new Banana(item.num); // item is never ! }); } Why does the typescript determine the item as never?
[ "The problem is that Apple and Banana are structurally identical, and since the TypeScript type system is largely structural and not nominal, it considers the two types equivalent. That is, the fact that Apple and Banana have two different declarations does not mean that TypeScript considers them to be different types. So if you write code where types are checked structurally, the compiler will decide that if something is an Apple then it must be a Banana and vice versa.\nIn your clone function, you are directly performing an instanceof type guard check. And instanceof narrowing compares types nominally and not structurally, because that's usually what people want.\nOn the other hand, when you refactored this check into user-defined type guard functions isApple() and isBanana(), such nominal narrowing only happens inside the bodies of these functions. The return type of the function, item is Apple or item is Banana doesn't convey any such special nominal flavor to the callers, so the type checker does the standard structural type check. And so, inside clone2, the compiler thinks it is impossible for item to be a Banana if it is not an Apple. And you get an error.\n(Yes, these two forms of type checking aren't fully consistent with each other. This is intentional; see microsoft/TypeScript#33481 for more information.)\n\nAnyway, generally speaking, you'll have better results if you make sure any two types that are meant to be different should differ structurally, by adding differing members to them. For example:\nclass Apple extends Fruite {\n readonly type = \"Apple\";\n constructor(num: number) {\n super(num);\n }\n}\nclass Banana extends Fruite {\n readonly type = \"Banana\";\n constructor(num: number) {\n super(num);\n }\n}\n\nNow both classes have a type property of differing string literal type. 
This clears up the problem with clone2():\nconst clone2 = (items: Fruite[]) => {\n return items.map(item => {\n if (isApple(item)) {\n return new Apple(item.num);\n } else if (isBanana(item)) {\n return new Banana(item.num); // okay\n }\n return new Banana(item.num); // okay\n });\n}\n\nAgain, just about any distinguishing property will work. For classes you can add a private or protected member and as long as they have separate declarations the classes will be considered distinct (that is, private/protected properties are compared nominally and not structurally):\nclass Apple extends Fruite {\n private prop: undefined;\n constructor(num: number) {\n super(num);\n }\n}\nclass Banana extends Fruite {\n private prop: undefined;\n constructor(num: number) {\n super(num);\n }\n}\n\nIt's up to you how you want to distinguish your classes. Ideally they would actually have distinct structures naturally, because you're using them for different purposes. Maybe a Banana has a ripeness property and a removeFromBunch() method, while an Apple has a variety property and a keepTheDoctorAway() method. But that depends on your use cases.\nPlayground link to code\n" ]
[ 1 ]
[]
[]
[ "typescript" ]
stackoverflow_0074672993_typescript.txt
Q: AngularJS nested objects in the same bootstrap row i have a JSON object { "products": [ { "devices": [ { "label": "P1D1" }, { "label": "P1D2" } ] }, { "devices": [ { "label": "P2D1" }, { "label": "P2D2" } ] } ] } and i want to have in HTML something like this <div class="row"> <div class="col-3"> P1D1 </div> <div class="col-3"> P1D2 </div> <div class="col-3"> P2D1 </div> <div class="col-3"> P2D2 </div> </div> AngularJS is the language i am using, but i don't seem to find the right syntax <div class="row"> <div ng-repeat="product in obj.products"> <div class="col-3" ng-repeat="device in product.devices"> {{device.label}} </div> </div> </div> How can i achieve all the subitems to be displayed in a bootstrap column next to each other? A: I think you can't do it directly using only HTML. Alternatively you can use some manipulation in Javascript to flatten your data. JAVASCRIPT $scope.getItems = function(){ return $scope.obj.products.flatMap(function(element){ return element.devices; }) } HTML <div class="col-3" ng-repeat="device in getItems()"> {{device.label}} </div> Check my example: https://codepen.io/avgustint/pen/XWYyoyd?editors=1111
AngularJS nested objects in the same bootstrap row
i have a JSON object { "products": [ { "devices": [ { "label": "P1D1" }, { "label": "P1D2" } ] }, { "devices": [ { "label": "P2D1" }, { "label": "P2D2" } ] } ] } and i want to have in HTML something like this <div class="row"> <div class="col-3"> P1D1 </div> <div class="col-3"> P1D2 </div> <div class="col-3"> P2D1 </div> <div class="col-3"> P2D2 </div> </div> AngularJS is the language i am using, but i don't seem the find the right syntax <div class="row"> <div ng-repeat="product in obj.products"> <div class="col-3" ng-repeat="device in product.devices"> {{device.label}} </div> </div> </div> How can i achieve all the subitems to be displayed in a bootstrap column next to eachother?
[ "I think you can't do it directly using only HTML. Alternatively you can use some manipulation in Javascript to flatten your data.\nJAVASCRIPT\n$scope.getItems = function(){\n return $scope.obj.products.flatMap(function(element){\n return element.devices;\n }) \n}\n\nHTML\n<div class=\"col-3\" ng-repeat=\"device in getItems()\">\n {{device.label}}\n</div>\n\nCheck my example:\nhttps://codepen.io/avgustint/pen/XWYyoyd?editors=1111\n" ]
[ 0 ]
[]
[]
[ "angularjs", "json" ]
stackoverflow_0074674602_angularjs_json.txt
Q: What is the time complexity of my algorithm using a for loop? I have trouble with that exercise, how can I write it in Python, and get time complexity? Should use a while loop? Write an algorithm that returns the smallest value in the array A[1 . . . n]. Use a while loop. What is the time complexity of your algorithm? list1 = [] num = int(input("Enter number of elements in list: ")) for i in range(1, num + 1): ele = int(input("Enter elements: ")) list1.append(ele) print("Smallest element is:", min(list1)) A: How to get the time complexity? Most of the time when someone is asking you for time complexity they aren't asking for exact time complexity, they are asking for an approximate estimate in terms of Big O Notation. I highly recommend you check out this wiki but in short, Big O Notation asks "For 'n' elements how many steps will your algorithm take in the worst case?" So if we look at your algorithm, we can see that there is a list of 'n' elements. Now in the worst case the smallest element is the last element of the list so in this case your algorithm would have to search through all 'n' elements to find the lowest number. In this case your Big O would be O(n) or linear time. As 'n' grows your algorithm time will take a linearly increasing amount of time to execute. Should I use while loop? Technically speaking it doesn't matter too much, but it sounds like they may be asking you to use a while loop so you can get more experience using while loops.
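As the answer notes, the exercise asks for a while loop specifically; a minimal version of the same O(n) scan written that way:

```python
def smallest(a):
    """Return the smallest value in a non-empty list using a while loop."""
    i = 1
    best = a[0]
    while i < len(a):      # one pass over the n elements -> O(n) worst case
        if a[i] < best:
            best = a[i]
        i += 1
    return best

print(smallest([7, 3, 9, 1, 4]))  # 1
```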
What is the time complexity of my algorithm using a for loop?
I have trouble with that exercise, how can I write it in Python, and get time complexity? Should use a while loop? Write an algorithm that returns the smallest value in the array A[1 . . . n]. Use a while loop. What is the time complexity of your algorithm? list1 = [] num = int(input("Enter number of elements in list: ")) for i in range(1, num + 1): ele = int(input("Enter elements: ")) list1.append(ele) print("Smallest element is:", min(list1))
[ "How to get the time complexity?\nMost of the time when someone is asking you for time complexity they aren't asking for exact time complexity, they are asking for an approximate estimate in terms of Big O Notation. I highly recommend you check out this wiki but in short, Big O Notation asks \"For 'n' elements how many steps will your algorithm take in the worst case?\"\nSo if we look at your algorithm, we can see that there is a list of 'n' elements. Now in the worst case the smallest element is the last element of the list so in this case your algorithm would have to search through all 'n' elements to find the lowest number. In this case your Big O would be O(n) or linear time. As 'n' grows your algorithm time will take a linearly increasing amount of time to execute.\nShould I use while loop?\nTechnically speaking it doesn't matter too much, but it sounds like they may be asking you to use a while loop so you can get more experience using while loops.\n" ]
[ 2 ]
[]
[]
[ "algorithm", "python" ]
stackoverflow_0074677687_algorithm_python.txt
Q: Can't retrieve JPA data from Postgres Database I have two models related by a OneToMany Relationship. @Entity @Getter @Setter @NoArgsConstructor @Table(name = "GroupChat") public class GroupChat extends RepresentationModel<GroupChat> { @Id @GeneratedValue(strategy = GenerationType.IDENTITY) @Column(name = "id", nullable = false) private Long id; @Column(name = "uniqueId", nullable = false, unique=true) private UUID uniqueId; @Column(name = "expirationDate") private Date expirationDate; @Column(name = "analysedDate") private Date analysedDate; @Column(name = "creationDate") private Date creationDate; @Column(name = "totalParticipants", nullable = false) private int totalParticipants; @Column(name = "totalCurrentParticipants", nullable = false) private int totalCurrentParticipants; @Column(name = "totalMessages", nullable = false) private int totalMessages; @Column(name = "totalSentVideos", nullable = false) private int totalSentVideos; @Column(name = "totalSentPhotos", nullable = false) private int totalSentPhotos; @Column(name = "totalSentGifs", nullable = false) private int totalSentGifs; @Column(name = "totalSentAudioFiles", nullable = false) private int totalSentAudioFiles; @Column(name = "totalSentReactions", nullable = false) private int totalSentReactions; @Column(name = "groupChatName") private String groupChatName; @Column(name = "participants") @OneToMany(mappedBy="groupChatId", fetch = FetchType.LAZY, cascade = CascadeType.ALL) private List<MessengerUser> participants; } @Entity @Getter @Setter @NoArgsConstructor @Table(name = "MessengerUser") public class MessengerUser extends RepresentationModel<MessengerUser> { @Id @GeneratedValue(strategy = GenerationType.IDENTITY) @Column(name = "id", nullable = false) private Long id; @Column(name = "name", nullable = false) private String name; @Column(name = "profilePic") private int profilePic; @Column(name = "numberOfMessagesSent") private int numberOfMessagesSent; @Column(name = "numberOfPhotosSent") private 
int numberOfPhotosSent; @Column(name = "numberOfVideosSent") private int numberOfVideosSent; @Column(name = "numberOfGifsSent") private int numberOfGifsSent; @Column(name = "numberOfAudioFilesSent") private int numberOfAudioFilesSent; @Column(name = "numberOfReceivedReactions") private int numberOfReceivedReactions; @Column(name = "numberOfSentReactions") private int numberOfSentReactions; @Column(name = "userReactionsReceived", columnDefinition="TEXT") @Lob private String userReactionsReceived; @Column(name = "userReactionsSent", columnDefinition="TEXT") @Lob private String userReactionsSent; @Column(name = "addedToChat", columnDefinition="TEXT") @Lob private String addedToChat; @Column(name = "removedFromChat", columnDefinition="TEXT") @Lob private String removedFromChat; @Column(name = "firstRecordOfActivity") private Date firstRecordOfActivity; @Column(name = "lastRecordOfActivity") private Date lastRecordOfActivity; @Column(name = "userActiveStatus") private Boolean userActiveStatus; @Column(name = "firstMessage") private String firstMessage; @Column(name = "firstMessageDate") private Date firstMessageDate; @Column(name = "message") private int message; @ManyToOne(fetch = FetchType.LAZY) @JoinColumn(name="group_chat_id", nullable=false) private GroupChat groupChatId; } When I call the controller method to retrieve the group chat data from the postgres DB I receive this error: Hibernate: select g1_0.id,g1_0.analysed_date,g1_0.creation_date,g1_0.expiration_date,g1_0.group_chat_name,g1_0.total_current_participants,g1_0.total_messages,g1_0.total_participants,g1_0.total_sent_audio_files,g1_0.total_sent_gifs,g1_0.total_sent_photos,g1_0.total_sent_reactions,g1_0.total_sent_videos,g1_0.unique_id from group_chat g1_0 where g1_0.unique_id=? 
2022-12-04T14:06:41.751Z ERROR 16704 --- [nio-8080-exec-1] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed: org.springframework.dao.InvalidDataAccessApiUsageException: Argument [3] of type [java.lang.Long] did not match parameter type [com.example.demo.models.GroupChat (n/a)]] with root cause java.lang.IllegalArgumentException: Argument [3] of type [java.lang.Long] did not match parameter type [com.example.demo.models.GroupChat (n/a)] The data is successfully added to the DB as I can see it in the database. I notice that it retrieves the group chat data from the DB successfully but I'm not sure why it doesn't return the object? Controller: @GetMapping("/{uniqueId}/chat") public GroupChat getGroupChatByUniqueId(@PathVariable(value = "uniqueId") UUID uniqueId) { GroupChat g = groupChatRepository.findByUniqueId(uniqueId); Link selfLink = linkTo(methodOn(GroupChatController.class) .getGroupChatByUniqueId(uniqueId)).withSelfRel(); g.add(selfLink); if (messengerUserRepository.findByGroupChatId(g.getId()).size() > 0) { Link ordersLink = linkTo(methodOn(MessengerUserController.class) .getMessengerUsersByGroupChatId(g.getId(),uniqueId)) .withRel("allMessengerUsers"); g.add(ordersLink); } return g; } GroupChat repo: public interface GroupChatRepository extends JpaRepository<GroupChat, Long> { GroupChat findByUniqueId(UUID uniqueId); } A: What type are you using for the column uniqueId in the db ? Provided you are using the UUID type -> https://www.postgresql.org/docs/current/datatype-uuid.html you should take a look at the hibernate-types-52 library at https://mvnrepository.com/artifact/com.vladmihalcea/hibernate-types-52 , include it to your project (if it's not included already) and try to add the @Type(type="pg-uuid") to your column attribute in the entity class. 
There is also a good article at https://vladmihalcea.com/uuid-identifier-jpa-hibernate/ and also a similar post Postgresql UUID supported by Hibernate? describing the solution to this kind of problem
Can't retrieve JPA data from Postgres Database
I have two models related by a OneToMany Relationship. @Entity @Getter @Setter @NoArgsConstructor @Table(name = "GroupChat") public class GroupChat extends RepresentationModel<GroupChat> { @Id @GeneratedValue(strategy = GenerationType.IDENTITY) @Column(name = "id", nullable = false) private Long id; @Column(name = "uniqueId", nullable = false, unique=true) private UUID uniqueId; @Column(name = "expirationDate") private Date expirationDate; @Column(name = "analysedDate") private Date analysedDate; @Column(name = "creationDate") private Date creationDate; @Column(name = "totalParticipants", nullable = false) private int totalParticipants; @Column(name = "totalCurrentParticipants", nullable = false) private int totalCurrentParticipants; @Column(name = "totalMessages", nullable = false) private int totalMessages; @Column(name = "totalSentVideos", nullable = false) private int totalSentVideos; @Column(name = "totalSentPhotos", nullable = false) private int totalSentPhotos; @Column(name = "totalSentGifs", nullable = false) private int totalSentGifs; @Column(name = "totalSentAudioFiles", nullable = false) private int totalSentAudioFiles; @Column(name = "totalSentReactions", nullable = false) private int totalSentReactions; @Column(name = "groupChatName") private String groupChatName; @Column(name = "participants") @OneToMany(mappedBy="groupChatId", fetch = FetchType.LAZY, cascade = CascadeType.ALL) private List<MessengerUser> participants; } @Entity @Getter @Setter @NoArgsConstructor @Table(name = "MessengerUser") public class MessengerUser extends RepresentationModel<MessengerUser> { @Id @GeneratedValue(strategy = GenerationType.IDENTITY) @Column(name = "id", nullable = false) private Long id; @Column(name = "name", nullable = false) private String name; @Column(name = "profilePic") private int profilePic; @Column(name = "numberOfMessagesSent") private int numberOfMessagesSent; @Column(name = "numberOfPhotosSent") private int numberOfPhotosSent; @Column(name = 
"numberOfVideosSent") private int numberOfVideosSent; @Column(name = "numberOfGifsSent") private int numberOfGifsSent; @Column(name = "numberOfAudioFilesSent") private int numberOfAudioFilesSent; @Column(name = "numberOfReceivedReactions") private int numberOfReceivedReactions; @Column(name = "numberOfSentReactions") private int numberOfSentReactions; @Column(name = "userReactionsReceived", columnDefinition="TEXT") @Lob private String userReactionsReceived; @Column(name = "userReactionsSent", columnDefinition="TEXT") @Lob private String userReactionsSent; @Column(name = "addedToChat", columnDefinition="TEXT") @Lob private String addedToChat; @Column(name = "removedFromChat", columnDefinition="TEXT") @Lob private String removedFromChat; @Column(name = "firstRecordOfActivity") private Date firstRecordOfActivity; @Column(name = "lastRecordOfActivity") private Date lastRecordOfActivity; @Column(name = "userActiveStatus") private Boolean userActiveStatus; @Column(name = "firstMessage") private String firstMessage; @Column(name = "firstMessageDate") private Date firstMessageDate; @Column(name = "message") private int message; @ManyToOne(fetch = FetchType.LAZY) @JoinColumn(name="group_chat_id", nullable=false) private GroupChat groupChatId; } When I call the controller method to retrieve the group chat data from the postgres DB I receive this error: Hibernate: select g1_0.id,g1_0.analysed_date,g1_0.creation_date,g1_0.expiration_date,g1_0.group_chat_name,g1_0.total_current_participants,g1_0.total_messages,g1_0.total_participants,g1_0.total_sent_audio_files,g1_0.total_sent_gifs,g1_0.total_sent_photos,g1_0.total_sent_reactions,g1_0.total_sent_videos,g1_0.unique_id from group_chat g1_0 where g1_0.unique_id=? 
2022-12-04T14:06:41.751Z ERROR 16704 --- [nio-8080-exec-1] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed: org.springframework.dao.InvalidDataAccessApiUsageException: Argument [3] of type [java.lang.Long] did not match parameter type [com.example.demo.models.GroupChat (n/a)]] with root cause java.lang.IllegalArgumentException: Argument [3] of type [java.lang.Long] did not match parameter type [com.example.demo.models.GroupChat (n/a)] The data is successfully added to the DB as I can see it in the database. I notice that it retrieves the group chat data from the DB successfully but I'm not sure why it doesn't return the object? Controller: @GetMapping("/{uniqueId}/chat") public GroupChat getGroupChatByUniqueId(@PathVariable(value = "uniqueId") UUID uniqueId) { GroupChat g = groupChatRepository.findByUniqueId(uniqueId); Link selfLink = linkTo(methodOn(GroupChatController.class) .getGroupChatByUniqueId(uniqueId)).withSelfRel(); g.add(selfLink); if (messengerUserRepository.findByGroupChatId(g.getId()).size() > 0) { Link ordersLink = linkTo(methodOn(MessengerUserController.class) .getMessengerUsersByGroupChatId(g.getId(),uniqueId)) .withRel("allMessengerUsers"); g.add(ordersLink); } return g; } GroupChat repo: public interface GroupChatRepository extends JpaRepository<GroupChat, Long> { GroupChat findByUniqueId(UUID uniqueId); }
[ "What type are you using for the column uniqueId in the db ?\nProvided you are using the UUID type -> https://www.postgresql.org/docs/current/datatype-uuid.html\nyou should take a look at the hibernate-types-52 library at https://mvnrepository.com/artifact/com.vladmihalcea/hibernate-types-52 , include it to your project (if it's not included already) and try to add the @Type(type=\"pg-uuid\") to your column attribute in the entity class.\nThere is also a good article at https://vladmihalcea.com/uuid-identifier-jpa-hibernate/ and also a similar post Postgresql UUID supported by Hibernate? describing the solution to this kind of a problem\n" ]
[ 0 ]
[]
[]
[ "postgresql", "spring", "spring_boot", "spring_data_jpa", "spring_hateoas" ]
stackoverflow_0074677444_postgresql_spring_spring_boot_spring_data_jpa_spring_hateoas.txt
Q: setting field value with a setter with field Name & value as parameter I faced a problem while try building an application. the problem is that trying to set the field in SomeClass with a general setField function. my implementation was like this but faced an issue withthis[fieldName]; EDITED class TestClass { String name; // <- to set this the memberName = 'name'; int age; // <- to set this the memberName = 'age'; // and both will use the same setField as setter. TestClass({required name, required age}); // the prev code is correct and no problem with it. /** the use will be like this to set the value of name **/ /** test.setField(memberName : 'name', valueOfThatMemberName: 'test name'); // notice here **/ /** the use will be like this to set the value of age **/ /** test.setField(memberName : 'age', valueOfThatMemberName: 15); // notice here **/ void setField({required String memberName, required var valueOfThatMemberName}) { // some extra validation and logic,.. this[memberName] = valueOfThatMemberName; // this gives this error: /** Error: The operator '[]=' isn't defined for the class 'TestClass'. **/ } // this will return the valueOfThePassedMemberName; getField({required String memberName}) { return this[memberName]; // <= this gives this error /** Error: The getter 'memberName' isn't defined for the class 'TestClass'. **/ } } void main() { TestClass test = TestClass(name: 'alaa', age: 14); /** here is the way to use it. **/ test.setField(memberName: 'name', valueOfThePassedMemberName: 'test name'); // notice here test.setField(memberName: 'age', valueOfThePassedMemberName: 16); // notice here print(test.getField(memberName: 'name')); // <- this should print the name of test object. } setting the values just through the setField method. ADDING RUNABLE JS CODE // i need to do the exact same thing here with the dart. 
export class Entity { constructor(data: {}) { Object.keys(data).forEach(key => { this.set(key, data[key], true); }); } get(field: string) { return this["_" + field]; } set(field: string, value: any, initial = false) { this["_" + field] = value; } } A: class TestClass { late String fieldName; late dynamic value; TestClass({required fieldName, required value}); void setField({required String fieldName, required var value}) { // some extra validation and logic,.. this.fieldName = fieldName; this.value = value; } getField() { return fieldName; } getValue() { return value; } } void main() { TestClass test = TestClass(fieldName: 'name', value: 'Alaa'); test.setField(fieldName: 'name', value: 'Alaa'); print('${test.getField()}: ${test.getValue()} '); test.setField(fieldName: 'age', value: 14); print('${test.getField()}: ${test.getValue()} '); }
setting field value with a setter with field Name & value as parameter
I faced a problem while try building an application. the problem is that trying to set the field in SomeClass with a general setField function. my implementation was like this but faced an issue withthis[fieldName]; EDITED class TestClass { String name; // <- to set this the memberName = 'name'; int age; // <- to set this the memberName = 'age'; // and both will use the same setField as setter. TestClass({required name, required age}); // the prev code is correct and no problem with it. /** the use will be like this to set the value of name **/ /** test.setField(memberName : 'name', valueOfThatMemberName: 'test name'); // notice here **/ /** the use will be like this to set the value of age **/ /** test.setField(memberName : 'age', valueOfThatMemberName: 15); // notice here **/ void setField({required String memberName, required var valueOfThatMemberName}) { // some extra validation and logic,.. this[memberName] = valueOfThatMemberName; // this gives this error: /** Error: The operator '[]=' isn't defined for the class 'TestClass'. **/ } // this will return the valueOfThePassedMemberName; getField({required String memberName}) { return this[memberName]; // <= this gives this error /** Error: The getter 'memberName' isn't defined for the class 'TestClass'. **/ } } void main() { TestClass test = TestClass(name: 'alaa', age: 14); /** here is the way to use it. **/ test.setField(memberName: 'name', valueOfThePassedMemberName: 'test name'); // notice here test.setField(memberName: 'age', valueOfThePassedMemberName: 16); // notice here print(test.getField(memberName: 'name')); // <- this should print the name of test object. } setting the values just through the setField method. ADDING RUNABLE JS CODE // i need to do the exact same thing here with the dart. 
export class Entity { constructor(data: {}) { Object.keys(data).forEach(key => { this.set(key, data[key], true); }); } get(field: string) { return this["_" + field]; } set(field: string, value: any, initial = false) { this["_" + field] = value; } }
[ "class TestClass {\n late String fieldName;\n late dynamic value;\n\n TestClass({required fieldName, required value});\n\n void setField({required String fieldName, required var value}) {\n // some extra validation and logic,..\n this.fieldName = fieldName;\n this.value = value;\n }\n\n getField() {\n return fieldName;\n }\n \n getValue() {\n return value;\n }\n}\n\n\nvoid main() {\n TestClass test = TestClass(fieldName: 'name', value: 'Alaa');\n \n \n test.setField(fieldName: 'name', value: 'Alaa');\n print('${test.getField()}: ${test.getValue()} ');\n \n test.setField(fieldName: 'age', value: 14);\n print('${test.getField()}: ${test.getValue()} ');\n \n}\n\n" ]
[ 0 ]
[]
[]
[ "dart", "flutter" ]
stackoverflow_0074675982_dart_flutter.txt
Q: Is there any way to create and stream a big file in browser with JavaScript When I create a big file in browser with JavaScript, it makes the browser run out of memory and crash. Is there any way to create and stream a big file in browser (such as Chrome) with JavaScript? A: Yes, it is possible to create and stream a big file in the browser using JavaScript. One way to do this is to use a streaming library such as "StreamSaver.js" or "FileSaver.js" to handle the creation and streaming of the file in a way that does not overload the browser's memory. This allows the file to be written to the user's local file system in small chunks, preventing the browser from crashing due to excessive memory usage.
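The chunk-at-a-time pattern those libraries rely on can be sketched with the standard WHATWG streams API (used here with a counting sink instead of the browser file sink, so it runs anywhere the API exists; in the browser, StreamSaver.js's `createWriteStream()` would supply the `WritableStream`):

```javascript
// Sketch of chunked writing: data is produced and handed to the sink piece by
// piece, so the full file never has to sit in memory at once.
let bytesWritten = 0;
const sink = new WritableStream({
  write(chunk) { bytesWritten += chunk.length; }, // e.g. append to a file on disk
});

async function generateBigFile(totalChunks, chunkSize) {
  const writer = sink.getWriter();
  const chunk = new Uint8Array(chunkSize).fill(65); // placeholder payload ("A")
  for (let i = 0; i < totalChunks; i++) {
    await writer.write(chunk); // back-pressure: waits if the sink is busy
  }
  await writer.close();
  return bytesWritten;
}
```

For example, `generateBigFile(1024, 65536)` streams 64 MiB through the sink while only a single 64 KiB chunk buffer is ever alive at once.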
Is there any way to create and stream a big file in browser with JavaScript
When I create a big file in browser with JavaScript, it makes the browser run out of memory and crash. Is there any way to create and stream a big file in browser (such as Chrome) with JavaScript?
[ "Yes, it is possible to create and stream a big file in the browser using JavaScript. One way to do this is to use a streaming library such as \"StreamSaver.js\" or \"FileSaver.js\" to handle the creation and streaming of the file in a way that does not overload the browser's memory. This allows the file to be written to the user's local file system in small chunks, preventing the browser from crashing due to excessive memory usage.\n" ]
[ 2 ]
[]
[]
[ "javascript" ]
stackoverflow_0074677732_javascript.txt
Q: I try to add a city with a country name, database add new country with a new country id I got some error but I couldnt fix it. I want to add a new city with a country name but country id (if exists) will not change in database(it wont add with a new id). Here is my City class; [Table("Cities")] public class City { [Key] public int Id { get; set; } [Required] [StringLength(30)] public string Name { get; set; } [ForeignKey("Countries")] public int CountryId { get; set; } public Country Country { get; set; } } And here is my Country class; [Table("Countries")] public class Country { [Key] public int Id { get; set; } [Required] [StringLength(30)] public string Name { get; set; } public ICollection<City> Cities { get; set; } } I used CQRS and Mediator design patterns. This is my Create Command; public class CreateCityCommand : IRequest<City> { public City City { get; set; } } This is my Create Command Handler; public class CreateCityCommandHandler : IRequestHandler<CreateCityCommand, City> { private readonly EssoContext _context; public CreateCityCommandHandler(EssoContext context) { _context = context; } public async Task<City> Handle(CreateCityCommand request, CancellationToken cancellationToken) { var check = await _context.City.FirstOrDefaultAsync(c => c.Name == request.City.Name); if (check != null) { throw new InvalidOperationException("You cannot add the same value!!!"); } _context.City.Add(request.City); await _context.SaveChangesAsync(); return request.City; } } And here is my controller; [HttpPost("Create")] public async Task<ActionResult<City>> PostCity([FromBody] City city) { try { var command = new CreateCityCommand() { City = city }; var result = await _mediator.Send(command); return Ok(result); } catch(Exception ex) { return StatusCode((int)HttpStatusCode.InternalServerError, ex.Message); } } This is swagger output (seeing countries) Every city has a country, and each country may have many cities. 
That's why i used CountryId and property (as FK) in City class. In country class, i defined collection(list) of cities. But I dont know what is the problem. Thanks for help... A: As per my understanding you want to get related city data with country public class GetCountriesCommand: IRequest<IEnumerable<Country>>{ } public class GetCountriesCommandHandler : IRequestHandler<GetCountriesCommand, IEnumerable<Country>> { private readonly EssoContext _context; public GetCountriesCommandHandler(EssoContext context) { _context = context; } public async Task<IEnumerable<Country>> Handle(GetCountriesCommand command, CancellationToken cancellationToken) { // .Includes helps to return the related cities for all countries return await _context.Country.Include(x=>x.City).ToListAsync(); } } In country controller [HttpGet("Get-Countries")] public async Task<IActionResult<IEnumerable<Country>>> Get() { try { var command = new GetCountriesCommand(); var result = await _mediator.Send(command); return Ok(result); } catch(Exception ex) { return StatusCode((int)HttpStatusCode.InternalServerError, ex.Message); } } A: I guess I figured it out. I just declared CountryId as a foreign key. That's why each CountryId has many cities. I did city class like that; [Table("Cities")] public class City { [Key] public int Id { get; set; } [Required] [StringLength(30)] public string Name { get; set; } public int CountryId { get; set; } } Did I correct?
I try to add a city with a country name, database add new country with a new country id
I got some error but I couldnt fix it. I want to add a new city with a country name but country id (if exists) will not change in database(it wont add with a new id). Here is my City class; [Table("Cities")] public class City { [Key] public int Id { get; set; } [Required] [StringLength(30)] public string Name { get; set; } [ForeignKey("Countries")] public int CountryId { get; set; } public Country Country { get; set; } } And here is my Country class; [Table("Countries")] public class Country { [Key] public int Id { get; set; } [Required] [StringLength(30)] public string Name { get; set; } public ICollection<City> Cities { get; set; } } I used CQRS and Mediator design patterns. This is my Create Command; public class CreateCityCommand : IRequest<City> { public City City { get; set; } } This is my Create Command Handler; public class CreateCityCommandHandler : IRequestHandler<CreateCityCommand, City> { private readonly EssoContext _context; public CreateCityCommandHandler(EssoContext context) { _context = context; } public async Task<City> Handle(CreateCityCommand request, CancellationToken cancellationToken) { var check = await _context.City.FirstOrDefaultAsync(c => c.Name == request.City.Name); if (check != null) { throw new InvalidOperationException("You cannot add the same value!!!"); } _context.City.Add(request.City); await _context.SaveChangesAsync(); return request.City; } } And here is my controller; [HttpPost("Create")] public async Task<ActionResult<City>> PostCity([FromBody] City city) { try { var command = new CreateCityCommand() { City = city }; var result = await _mediator.Send(command); return Ok(result); } catch(Exception ex) { return StatusCode((int)HttpStatusCode.InternalServerError, ex.Message); } } This is swagger output (seeing countries) Every city has a country, and each country may have many cities. That's why i used CountryId and property (as FK) in City class. In country class, i defined collection(list) of cities. 
But I don't know what the problem is. Thanks for the help...
[ "As per my understanding you want to get related city data with country\npublic class GetCountriesCommand: IRequest<IEnumerable<Country>>{\n\n}\npublic class GetCountriesCommandHandler : IRequestHandler<GetCountriesCommand, IEnumerable<Country>>\n {\n private readonly EssoContext _context;\n\n public GetCountriesCommandHandler(EssoContext context)\n {\n _context = context;\n }\n\n public async Task<IEnumerable<Country>> Handle(GetCountriesCommand command, CancellationToken cancellationToken)\n {\n // .Includes helps to return the related cities for all countries\n return await _context.Country.Include(x=>x.City).ToListAsync();\n }\n }\n\nIn country controller\n[HttpGet(\"Get-Countries\")]\n public async Task<IActionResult<IEnumerable<Country>>> Get()\n {\n try\n {\n var command = new GetCountriesCommand();\n var result = await _mediator.Send(command);\n return Ok(result);\n }\n catch(Exception ex)\n {\n return StatusCode((int)HttpStatusCode.InternalServerError, ex.Message);\n }\n \n }\n\n", "I guess I figured it out. I just declared CountryId as a foreign key. That's why each CountryId has many cities.\nI did city class like that;\n [Table(\"Cities\")]\n public class City\n {\n [Key]\n public int Id { get; set; }\n\n [Required]\n [StringLength(30)]\n public string Name { get; set; }\n\n public int CountryId { get; set; }\n \n }\n\nDid I correct?\n" ]
[ 0, 0 ]
[]
[]
[ ".net_6.0", "c#", "database_migration", "ef_code_first", "entity_framework_core" ]
stackoverflow_0074675349_.net_6.0_c#_database_migration_ef_code_first_entity_framework_core.txt
Q: Further optimizing the ISING model I've implemented the 2D ISING model in Python, using NumPy and Numba's JIT: from timeit import default_timer as timer import matplotlib.pyplot as plt import numba as nb import numpy as np # TODO for Dict optimization. # from numba import types # from numba.typed import Dict @nb.njit(nogil=True) def initialstate(N): ''' Generates a random spin configuration for initial condition ''' state = np.empty((N,N),dtype=np.int8) for i in range(N): for j in range(N): state[i,j] = 2*np.random.randint(2)-1 return state @nb.njit(nogil=True) def mcmove(lattice, beta, N): ''' Monte Carlo move using Metropolis algorithm ''' # # TODO* Dict optimization # dict_param = Dict.empty( # key_type=types.int64, # value_type=types.float64, # ) # dict_param = {cost : np.exp(-cost*beta) for cost in [-8, -4, 0, 4, 8] } for _ in range(N): for __ in range(N): a = np.random.randint(0, N) b = np.random.randint(0, N) s = lattice[a, b] dE = lattice[(a+1)%N,b] + lattice[a,(b+1)%N] + lattice[(a-1)%N,b] + lattice[a,(b-1)%N] cost = 2*s*dE if cost < 0: s *= -1 #TODO* elif np.random.rand() < dict_param[cost]: elif np.random.rand() < np.exp(-cost*beta): s *= -1 lattice[a, b] = s return lattice @nb.njit(nogil=True) def calcEnergy(lattice, N): ''' Energy of a given configuration ''' energy = 0 for i in range(len(lattice)): for j in range(len(lattice)): S = lattice[i,j] nb = lattice[(i+1)%N, j] + lattice[i,(j+1)%N] + lattice[(i-1)%N, j] + lattice[i,(j-1)%N] energy += -nb*S return energy/2 @nb.njit(nogil=True) def calcMag(lattice): ''' Magnetization of a given configuration ''' mag = np.sum(lattice, dtype=np.int32) return mag @nb.njit(nogil=True) def ISING_model(nT, N, burnin, mcSteps): """ nT : Number of temperature points. N : Size of the lattice, N x N. burnin : Number of MC sweeps for equilibration (Burn-in). mcSteps : Number of MC sweeps for calculation. 
""" T = np.linspace(1.2, 3.8, nT); E,M,C,X = np.zeros(nT), np.zeros(nT), np.zeros(nT), np.zeros(nT) n1, n2 = 1.0/(mcSteps*N*N), 1.0/(mcSteps*mcSteps*N*N) for temperature in range(nT): lattice = initialstate(N) # initialise E1 = M1 = E2 = M2 = 0 iT = 1/T[temperature] iT2= iT*iT for _ in range(burnin): # equilibrate mcmove(lattice, iT, N) # Monte Carlo moves for _ in range(mcSteps): mcmove(lattice, iT, N) Ene = calcEnergy(lattice, N) # calculate the Energy Mag = calcMag(lattice,) # calculate the Magnetisation E1 += Ene M1 += Mag M2 += Mag*Mag E2 += Ene*Ene E[temperature] = n1*E1 M[temperature] = n1*M1 C[temperature] = (n1*E2 - n2*E1*E1)*iT2 X[temperature] = (n1*M2 - n2*M1*M1)*iT return T,E,M,C,X def main(): N = 32 start_time = timer() T,E,M,C,X = ISING_model(nT = 64, N = N, burnin = 8 * 10**4, mcSteps = 8 * 10**4) end_time = timer() print("Elapsed time: %g seconds" % (end_time - start_time)) f = plt.figure(figsize=(18, 10)); # # figure title f.suptitle(f"Ising Model: 2D Lattice\nSize: {N}x{N}", fontsize=20) _ = f.add_subplot(2, 2, 1 ) plt.plot(T, E, '-o', color='Blue') plt.xlabel("Temperature (T)", fontsize=20) plt.ylabel("Energy ", fontsize=20) plt.axis('tight') _ = f.add_subplot(2, 2, 2 ) plt.plot(T, abs(M), '-o', color='Red') plt.xlabel("Temperature (T)", fontsize=20) plt.ylabel("Magnetization ", fontsize=20) plt.axis('tight') _ = f.add_subplot(2, 2, 3 ) plt.plot(T, C, '-o', color='Green') plt.xlabel("Temperature (T)", fontsize=20) plt.ylabel("Specific Heat ", fontsize=20) plt.axis('tight') _ = f.add_subplot(2, 2, 4 ) plt.plot(T, X, '-o', color='Black') plt.xlabel("Temperature (T)", fontsize=20) plt.ylabel("Susceptibility", fontsize=20) plt.axis('tight') plt.show() if __name__ == '__main__': main() Which of course, works: I have two main questions: Is there anything left to optimize? I knew ISING model is hard to simulate, but looking at the following table, it seems like I'm missing something... 
lattice size : 32x32 burnin = 8 * 10**4 mcSteps = 8 * 10**4 Simulation time = 365.98 seconds lattice size : 64x64 burnin = 10**5 mcSteps = 10**5 Simulation time = 1869.58 seconds I tried implementing another optimization based on not calculating the exponential over and over again using a dictionary, yet in my tests, it seems like it's slower. What am I doing wrong? A: The computation of the exponential is not really an issue. The main issue is that generating random numbers is expensive and a huge number of random values are generated. Another issue is that the current computation is intrinsically sequential. Indeed, for N=32, mcmove tends to generate about 3000 random values, and this function is called 2 * 80_000 times per iteration. This means 2 * 80_000 * 3000 = 480_000_000 random numbers generated per iteration. Assuming generating a random number takes about 5 nanoseconds (i.e. only 20 cycles on a 4 GHz CPU), then each iteration will take about 2.5 seconds just to generate all the random numbers. On my 4.5 GHz i5-9600KF CPU, each iteration takes about 2.5-3.0 seconds. The first thing to do is to try to generate random numbers using a faster method. The bad news is that this is hard to do in Numba, and more generally in any Python-based code. Micro-optimizations using a lower-level language like C or C++ can significantly help to speed up this computation. Such low-level micro-optimizations are not possible in high-level languages/tools like Python, including Numba. Still, one can implement a random-number generator (RNG) specifically designed to produce the random values you need. xoshiro256** can be used to generate random numbers quickly, though it may not be as random as what NumPy/Numba can produce (there is no free lunch). The idea is to generate 64-bit integers and extract ranges of bits to produce two 16-bit integers and a 32-bit floating-point value. This RNG should be able to generate 3 values in only about 10 cycles on a modern CPU!
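The random-number budget in that estimate can be checked with quick arithmetic. This is only a sketch: the ~3 values per site update (row, column, acceptance draw) and the 5 ns/value cost are the assumptions quoted above, not measured figures.

```python
# Back-of-the-envelope check of the RNG cost estimated above.
# Assumptions (from the answer): each lattice-site update draws about
# 3 random values (row, column, acceptance draw) at ~5 ns per value.
N = 32
values_per_sweep = N * N * 3              # ~3000 values per mcmove call
sweeps_per_temperature = 2 * 80_000       # burn-in + measurement sweeps
total_values = sweeps_per_temperature * values_per_sweep
rng_seconds = total_values * 5e-9

print(total_values)           # 491520000, i.e. the ~480_000_000 quoted
print(round(rng_seconds, 2))  # ~2.46 s spent only on random numbers
```

This matches the answer's claim that 2.5 of the observed 2.5-3.0 seconds per iteration go into random-number generation alone.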
Once this optimization has been applied, the computation of the exponential becomes the new bottleneck. It can be improved using a lookup table (LUT) like you did. However, using a dictionary is slow. You can use a basic array for that. This is much faster. Note that the index needs to be positive and small; thus, the minimum cost needs to be added. Once the previous optimization has been implemented, the new bottleneck is the conditionals if cost < 0 and elif c < .... The conditionals are slow because they are unpredictable (due to the result being random). Indeed, modern CPUs try to predict the outcomes of conditionals so as to avoid expensive stalls in the CPU pipeline. This is a complex topic. If you want to know more about this, then please read this great post. In practice, such a problem can be avoided using a branchless computation. This means you need to use binary operators and integer tricks so that the sign of s changes according to the value of the condition. For example: s *= 1 - ((cost < 0) | (c < lut[cost])) * 2. Note that a modulus is generally expensive unless the compiler knows the value at compile time. It is even faster when the value is a power of two, because the compiler can use bit tricks to compute the modulus (more specifically, a logical AND with a pre-compiled constant). For calcEnergy, a solution is to compute the border separately so as to completely avoid the modulus. Furthermore, loops can be faster when the compiler knows the number of iterations at compile time (it can unroll the loops and better vectorize them). Moreover, when N is not a power of two, the RNG can be significantly slower and more complex to implement without any bias, so I assume N is a power of two. Here is the final code: # [...]
Same as in the initial code @nb.njit(inline="always") def rol64(x, k): return (x << k) | (x >> (64 - k)) @nb.njit(inline="always") def xoshiro256ss_init(): state = np.empty(4, dtype=np.uint64) maxi = (np.uint64(1) << np.uint64(63)) - np.uint64(1) for i in range(4): state[i] = np.random.randint(0, maxi) return state @nb.njit(inline="always") def xoshiro256ss(state): result = rol64(state[1] * np.uint64(5), np.uint64(7)) * np.uint64(9) t = state[1] << np.uint64(17) state[2] ^= state[0] state[3] ^= state[1] state[1] ^= state[2] state[0] ^= state[3] state[2] ^= t state[3] = rol64(state[3], np.uint64(45)) return result @nb.njit(inline="always") def xoshiro_gen_values(N, state): ''' Produce 2 integers between 0 and N and a simple-precision floating-point number. N must be a power of two less than 65536. Otherwise results will be biased (ie. not random). N should be known at compile time so for this to be fast ''' rand_bits = xoshiro256ss(state) a = (rand_bits >> np.uint64(32)) % N b = (rand_bits >> np.uint64(48)) % N c = np.uint32(rand_bits) * np.float32(2.3283064370807974e-10) return (a, b, c) @nb.njit(nogil=True) def mcmove_generic(lattice, beta, N): ''' Monte Carlo move using Metropolis algorithm. 
N must be a small power of two and known at compile time ''' state = xoshiro256ss_init() lut = np.full(16, np.nan) for cost in (0, 4, 8, 12, 16): lut[cost] = np.exp(-cost*beta) for _ in range(N): for __ in range(N): a, b, c = xoshiro_gen_values(N, state) s = lattice[a, b] dE = lattice[(a+1)%N,b] + lattice[a,(b+1)%N] + lattice[(a-1)%N,b] + lattice[a,(b-1)%N] cost = 2*s*dE # Branchless computation of s tmp = (cost < 0) | (c < lut[cost]) s *= 1 - tmp * 2 lattice[a, b] = s return lattice @nb.njit(nogil=True) def mcmove(lattice, beta, N): assert N in [16, 32, 64, 128] if N == 16: return mcmove_generic(lattice, beta, 16) elif N == 32: return mcmove_generic(lattice, beta, 32) elif N == 64: return mcmove_generic(lattice, beta, 64) elif N == 128: return mcmove_generic(lattice, beta, 128) else: raise Exception('Not implemented') @nb.njit(nogil=True) def calcEnergy(lattice, N): ''' Energy of a given configuration ''' energy = 0 # Center for i in range(1, len(lattice)-1): for j in range(1, len(lattice)-1): S = lattice[i,j] nb = lattice[i+1, j] + lattice[i,j+1] + lattice[i-1, j] + lattice[i,j-1] energy -= nb*S # Border for i in (0, len(lattice)-1): for j in range(1, len(lattice)-1): S = lattice[i,j] nb = lattice[(i+1)%N, j] + lattice[i,(j+1)%N] + lattice[(i-1)%N, j] + lattice[i,(j-1)%N] energy -= nb*S for i in range(1, len(lattice)-1): for j in (0, len(lattice)-1): S = lattice[i,j] nb = lattice[(i+1)%N, j] + lattice[i,(j+1)%N] + lattice[(i-1)%N, j] + lattice[i,(j-1)%N] energy -= nb*S return energy/2 @nb.njit(nogil=True) def calcMag(lattice): ''' Magnetization of a given configuration ''' mag = np.sum(lattice, dtype=np.int32) return mag # [...] Same as in the initial code I hope there is no error in the code. It is hard to check results with a different RNG. The resulting code is significantly faster on my machine: it compute 4 iterations in 5.3 seconds with N=32 as opposed to 24.1 seconds. The computation is thus 4.5 times faster! 
It is very hard to optimize the code further using Numba in Python. The computation cannot be efficiently parallelized due to the long dependency chain in mcmove. A: Based on Mr. Richard's excellent answer, I found another optimization. In the ISING_model function, the code can be parallelized because we are doing the same operations independently for every temperature. To achieve this, I simply used parallel = True in the ISING_model nb.njit decorator, and used nb.prange for the temperature loop inside the function, i.e., for temperature in nb.prange(nT). The resulting code is even faster... On my machine, with the setting of ISING_model(nT = 64, N = N, burnin = 8 * 10**4, mcSteps = 8 * 10**4) with N=32, without parallelization, it computes in 93.1621 seconds and with parallelization, it computes in 29.9872 seconds. Another 3 times faster! Which is really cool. I put the final code here for everyone to use. from timeit import default_timer as timer import matplotlib.pyplot as plt import numba as nb import numpy as np @nb.njit(nogil=True) def initialstate(N): ''' Generates a random spin configuration for initial condition in compliance with the Numba JIT compiler. 
''' state = np.empty((N,N),dtype=np.int8) for i in range(N): for j in range(N): state[i,j] = 2*np.random.randint(2)-1 return state @nb.njit(inline="always") def rol64(x, k): return (x << k) | (x >> (64 - k)) @nb.njit(inline="always") def xoshiro256ss_init(): state = np.empty(4, dtype=np.uint64) maxi = (np.uint64(1) << np.uint64(63)) - np.uint64(1) for i in range(4): state[i] = np.random.randint(0, maxi) return state @nb.njit(inline="always") def xoshiro256ss(state): result = rol64(state[1] * np.uint64(5), np.uint64(7)) * np.uint64(9) t = state[1] << np.uint64(17) state[2] ^= state[0] state[3] ^= state[1] state[1] ^= state[2] state[0] ^= state[3] state[2] ^= t state[3] = rol64(state[3], np.uint64(45)) return result @nb.njit(inline="always") def xoshiro_gen_values(N, state): ''' Produce 2 integers between 0 and N and a simple-precision floating-point number. N must be a power of two less than 65536. Otherwise results will be biased (ie. not random). N should be known at compile time so for this to be fast ''' rand_bits = xoshiro256ss(state) a = (rand_bits >> np.uint64(32)) % N b = (rand_bits >> np.uint64(48)) % N c = np.uint32(rand_bits) * np.float32(2.3283064370807974e-10) return (a, b, c) @nb.njit(nogil=True) def mcmove_generic(lattice, beta, N): ''' Monte Carlo move using Metropolis algorithm. 
N must be a small power of two and known at compile time ''' state = xoshiro256ss_init() lut = np.full(16, np.nan) for cost in (0, 4, 8, 12, 16): lut[cost] = np.exp(-cost*beta) for _ in range(N): for __ in range(N): a, b, c = xoshiro_gen_values(N, state) s = lattice[a, b] dE = lattice[(a+1)%N,b] + lattice[a,(b+1)%N] + lattice[(a-1)%N,b] + lattice[a,(b-1)%N] cost = 2*s*dE # Branchless computation of s tmp = (cost < 0) | (c < lut[cost]) s *= 1 - tmp * 2 lattice[a, b] = s return lattice @nb.njit(nogil=True) def mcmove(lattice, beta, N): assert N in [16, 32, 64, 128] if N == 16: return mcmove_generic(lattice, beta, 16) elif N == 32: return mcmove_generic(lattice, beta, 32) elif N == 64: return mcmove_generic(lattice, beta, 64) elif N == 128: return mcmove_generic(lattice, beta, 128) else: raise Exception('Not implemented') @nb.njit(nogil=True) def calcEnergy(lattice, N): ''' Energy of a given configuration ''' energy = 0 # Center for i in range(1, len(lattice)-1): for j in range(1, len(lattice)-1): S = lattice[i,j] nb = lattice[i+1, j] + lattice[i,j+1] + lattice[i-1, j] + lattice[i,j-1] energy -= nb*S # Border for i in (0, len(lattice)-1): for j in range(1, len(lattice)-1): S = lattice[i,j] nb = lattice[(i+1)%N, j] + lattice[i,(j+1)%N] + lattice[(i-1)%N, j] + lattice[i,(j-1)%N] energy -= nb*S for i in range(1, len(lattice)-1): for j in (0, len(lattice)-1): S = lattice[i,j] nb = lattice[(i+1)%N, j] + lattice[i,(j+1)%N] + lattice[(i-1)%N, j] + lattice[i,(j-1)%N] energy -= nb*S return energy/2 @nb.njit(nogil=True) def calcMag(lattice): ''' Magnetization of a given configuration ''' mag = np.sum(lattice, dtype=np.int32) return mag @nb.njit(nogil=True, parallel=True) def ISING_model(nT, N, burnin, mcSteps): """ nT : Number of temperature points. N : Size of the lattice, N x N. burnin : Number of MC sweeps for equilibration (Burn-in). mcSteps : Number of MC sweeps for calculation. 
""" T = np.linspace(1.2, 3.8, nT) E,M,C,X = np.empty(nT, dtype= np.float32), np.empty(nT, dtype= np.float32), np.empty(nT, dtype= np.float32), np.empty(nT, dtype= np.float32) n1, n2 = 1/(mcSteps*N*N), 1/(mcSteps*mcSteps*N*N) for temperature in nb.prange(nT): lattice = initialstate(N) # initialise E1 = M1 = E2 = M2 = 0 iT = 1/T[temperature] iT2= iT*iT for _ in range(burnin): # equilibrate mcmove(lattice, iT, N) # Monte Carlo moves for _ in range(mcSteps): mcmove(lattice, iT, N) Ene = calcEnergy(lattice, N) # calculate the Energy Mag = calcMag(lattice) # calculate the Magnetisation E1 += Ene M1 += Mag M2 += Mag*Mag E2 += Ene*Ene E[temperature] = n1*E1 M[temperature] = n1*M1 C[temperature] = (n1*E2 - n2*E1*E1)*iT2 X[temperature] = (n1*M2 - n2*M1*M1)*iT return T,E,M,C,X def main(): N = 32 start_time = timer() T,E,M,C,X = ISING_model(nT = 64, N = N, burnin = 8 * 10**4, mcSteps = 8 * 10**4) end_time = timer() print("Elapsed time: %g seconds" % (end_time - start_time)) f = plt.figure(figsize=(18, 10)); # # figure title f.suptitle(f"Ising Model: 2D Lattice\nSize: {N}x{N}", fontsize=20) _ = f.add_subplot(2, 2, 1 ) plt.plot(T, E, '-o', color='Blue') plt.xlabel("Temperature (T)", fontsize=20) plt.ylabel("Energy ", fontsize=20) plt.axis('tight') _ = f.add_subplot(2, 2, 2 ) plt.plot(T, abs(M), '-o', color='Red') plt.xlabel("Temperature (T)", fontsize=20) plt.ylabel("Magnetization ", fontsize=20) plt.axis('tight') _ = f.add_subplot(2, 2, 3 ) plt.plot(T, C, '-o', color='Green') plt.xlabel("Temperature (T)", fontsize=20) plt.ylabel("Specific Heat ", fontsize=20) plt.axis('tight') _ = f.add_subplot(2, 2, 4 ) plt.plot(T, X, '-o', color='Black') plt.xlabel("Temperature (T)", fontsize=20) plt.ylabel("Susceptibility", fontsize=20) plt.axis('tight') plt.show() if __name__ == '__main__': main()
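As a standalone sanity check of the branchless acceptance trick used in both answers above, the expression s *= 1 - ((cost < 0) | (c < lut[cost])) * 2 can be compared against the original if/elif. The LUT size and BETA below are arbitrary choices for this sketch; the NaN padding relies on the fact that any comparison against NaN is False, so negative costs that wrap around under NumPy's negative indexing can only flip the spin via the (cost < 0) term.

```python
import numpy as np

BETA = 0.5  # arbitrary inverse temperature, just for this check

# LUT over the non-negative costs (0, 4, 8 for a 4-neighbour update).
# Unfilled slots stay NaN; c < NaN is always False, so a negative cost
# wrapping to one of those slots can only flip via the (cost < 0) term.
lut = np.full(9, np.nan)
for cost in (0, 4, 8):
    lut[cost] = np.exp(-cost * BETA)

def branched(s, cost, c):
    # The original if/elif from the question's mcmove
    if cost < 0:
        return -s
    if c < np.exp(-cost * BETA):
        return -s
    return s

def branchless(s, cost, c):
    # The answers' trick: build a 0/1 flip flag, map it to +1/-1
    flip = (cost < 0) | (c < lut[cost])
    return s * (1 - flip * 2)

for cost in (-8, -4, 0, 4, 8):
    for c in (0.0, 0.01, 0.3, 0.7, 0.999):
        for s in (-1, 1):
            assert branchless(s, cost, c) == branched(s, cost, c)
print("branchless update matches the branched one")
```

The two versions agree for every candidate cost of the 4-neighbour update, which is what lets the answers replace the unpredictable branch with straight-line arithmetic.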
Further optimizing the ISING model
I've implemented the 2D ISING model in Python, using NumPy and Numba's JIT: from timeit import default_timer as timer import matplotlib.pyplot as plt import numba as nb import numpy as np # TODO for Dict optimization. # from numba import types # from numba.typed import Dict @nb.njit(nogil=True) def initialstate(N): ''' Generates a random spin configuration for initial condition ''' state = np.empty((N,N),dtype=np.int8) for i in range(N): for j in range(N): state[i,j] = 2*np.random.randint(2)-1 return state @nb.njit(nogil=True) def mcmove(lattice, beta, N): ''' Monte Carlo move using Metropolis algorithm ''' # # TODO* Dict optimization # dict_param = Dict.empty( # key_type=types.int64, # value_type=types.float64, # ) # dict_param = {cost : np.exp(-cost*beta) for cost in [-8, -4, 0, 4, 8] } for _ in range(N): for __ in range(N): a = np.random.randint(0, N) b = np.random.randint(0, N) s = lattice[a, b] dE = lattice[(a+1)%N,b] + lattice[a,(b+1)%N] + lattice[(a-1)%N,b] + lattice[a,(b-1)%N] cost = 2*s*dE if cost < 0: s *= -1 #TODO* elif np.random.rand() < dict_param[cost]: elif np.random.rand() < np.exp(-cost*beta): s *= -1 lattice[a, b] = s return lattice @nb.njit(nogil=True) def calcEnergy(lattice, N): ''' Energy of a given configuration ''' energy = 0 for i in range(len(lattice)): for j in range(len(lattice)): S = lattice[i,j] nb = lattice[(i+1)%N, j] + lattice[i,(j+1)%N] + lattice[(i-1)%N, j] + lattice[i,(j-1)%N] energy += -nb*S return energy/2 @nb.njit(nogil=True) def calcMag(lattice): ''' Magnetization of a given configuration ''' mag = np.sum(lattice, dtype=np.int32) return mag @nb.njit(nogil=True) def ISING_model(nT, N, burnin, mcSteps): """ nT : Number of temperature points. N : Size of the lattice, N x N. burnin : Number of MC sweeps for equilibration (Burn-in). mcSteps : Number of MC sweeps for calculation. 
""" T = np.linspace(1.2, 3.8, nT); E,M,C,X = np.zeros(nT), np.zeros(nT), np.zeros(nT), np.zeros(nT) n1, n2 = 1.0/(mcSteps*N*N), 1.0/(mcSteps*mcSteps*N*N) for temperature in range(nT): lattice = initialstate(N) # initialise E1 = M1 = E2 = M2 = 0 iT = 1/T[temperature] iT2= iT*iT for _ in range(burnin): # equilibrate mcmove(lattice, iT, N) # Monte Carlo moves for _ in range(mcSteps): mcmove(lattice, iT, N) Ene = calcEnergy(lattice, N) # calculate the Energy Mag = calcMag(lattice,) # calculate the Magnetisation E1 += Ene M1 += Mag M2 += Mag*Mag E2 += Ene*Ene E[temperature] = n1*E1 M[temperature] = n1*M1 C[temperature] = (n1*E2 - n2*E1*E1)*iT2 X[temperature] = (n1*M2 - n2*M1*M1)*iT return T,E,M,C,X def main(): N = 32 start_time = timer() T,E,M,C,X = ISING_model(nT = 64, N = N, burnin = 8 * 10**4, mcSteps = 8 * 10**4) end_time = timer() print("Elapsed time: %g seconds" % (end_time - start_time)) f = plt.figure(figsize=(18, 10)); # # figure title f.suptitle(f"Ising Model: 2D Lattice\nSize: {N}x{N}", fontsize=20) _ = f.add_subplot(2, 2, 1 ) plt.plot(T, E, '-o', color='Blue') plt.xlabel("Temperature (T)", fontsize=20) plt.ylabel("Energy ", fontsize=20) plt.axis('tight') _ = f.add_subplot(2, 2, 2 ) plt.plot(T, abs(M), '-o', color='Red') plt.xlabel("Temperature (T)", fontsize=20) plt.ylabel("Magnetization ", fontsize=20) plt.axis('tight') _ = f.add_subplot(2, 2, 3 ) plt.plot(T, C, '-o', color='Green') plt.xlabel("Temperature (T)", fontsize=20) plt.ylabel("Specific Heat ", fontsize=20) plt.axis('tight') _ = f.add_subplot(2, 2, 4 ) plt.plot(T, X, '-o', color='Black') plt.xlabel("Temperature (T)", fontsize=20) plt.ylabel("Susceptibility", fontsize=20) plt.axis('tight') plt.show() if __name__ == '__main__': main() Which of course, works: I have two main questions: Is there anything left to optimize? I knew ISING model is hard to simulate, but looking at the following table, it seems like I'm missing something... 
lattice size : 32x32 burnin = 8 * 10**4 mcSteps = 8 * 10**4 Simulation time = 365.98 seconds lattice size : 64x64 burnin = 10**5 mcSteps = 10**5 Simulation time = 1869.58 seconds I tried implementing another optimization based on not calculating the exponential over and over again using a dictionary, yet in my tests, it seems like it's slower. What am I doing wrong?
[ "The computation of the exponential is not really an issue. The main issue is that generating random numbers is expensive and a huge number of random values are generated. Another issue is that the current computation is intrinsically sequential.\nIndeed, for N=32, mcmove tends to generate about 3000 random values, and this function is called 2 * 80_000 times per iteration. This means 2 * 80_000 * 3000 = 480_000_000 random numbers generated per iteration. Assuming generating a random number takes about 5 nanoseconds (i.e. only 20 cycles on a 4 GHz CPU), then each iteration will take about 2.5 seconds just to generate all the random numbers. On my 4.5 GHz i5-9600KF CPU, each iteration takes about 2.5-3.0 seconds.\nThe first thing to do is to try to generate random numbers using a faster method. The bad news is that this is hard to do in Numba, and more generally in any Python-based code. Micro-optimizations using a lower-level language like C or C++ can significantly help to speed up this computation. Such low-level micro-optimizations are not possible in high-level languages/tools like Python, including Numba. Still, one can implement a random-number generator (RNG) specifically designed to produce the random values you need. xoshiro256** can be used to generate random numbers quickly, though it may not be as random as what NumPy/Numba can produce (there is no free lunch). The idea is to generate 64-bit integers and extract ranges of bits to produce two 16-bit integers and a 32-bit floating-point value. This RNG should be able to generate 3 values in only about 10 cycles on a modern CPU!\nOnce this optimization has been applied, the computation of the exponential becomes the new bottleneck. It can be improved using a lookup table (LUT) like you did. However, using a dictionary is slow. You can use a basic array for that. This is much faster. Note that the index needs to be positive and small. 
Thus, the minimum cost needs to be added.\nOnce the previous optimization has been implemented, the new bottleneck is the conditionals if cost < 0 and elif c < .... The conditionals are slow because they are unpredictable (due to the result being random). Indeed, modern CPUs try to predict the outcomes of conditionals so as to avoid expensive stalls in the CPU pipeline. This is a complex topic. If you want to know more about this, then please read this great post. In practice, such a problem can be avoided using a branchless computation. This means you need to use binary operators and integer tricks so that the sign of s changes according to the value of the condition. For example: s *= 1 - ((cost < 0) | (c < lut[cost])) * 2.\nNote that a modulus is generally expensive unless the compiler knows the value at compile time. It is even faster when the value is a power of two, because the compiler can use bit tricks to compute the modulus (more specifically, a logical AND with a pre-compiled constant). For calcEnergy, a solution is to compute the border separately so as to completely avoid the modulus. Furthermore, loops can be faster when the compiler knows the number of iterations at compile time (it can unroll the loops and better vectorize them). Moreover, when N is not a power of two, the RNG can be significantly slower and more complex to implement without any bias, so I assume N is a power of two.\nHere is the final code:\n# [...] 
Same as in the initial code\n\n@nb.njit(inline=\"always\")\ndef rol64(x, k):\n return (x << k) | (x >> (64 - k))\n\n@nb.njit(inline=\"always\")\ndef xoshiro256ss_init():\n state = np.empty(4, dtype=np.uint64)\n maxi = (np.uint64(1) << np.uint64(63)) - np.uint64(1)\n for i in range(4):\n state[i] = np.random.randint(0, maxi)\n return state\n\n@nb.njit(inline=\"always\")\ndef xoshiro256ss(state):\n result = rol64(state[1] * np.uint64(5), np.uint64(7)) * np.uint64(9)\n t = state[1] << np.uint64(17)\n state[2] ^= state[0]\n state[3] ^= state[1]\n state[1] ^= state[2]\n state[0] ^= state[3]\n state[2] ^= t\n state[3] = rol64(state[3], np.uint64(45))\n return result\n\n@nb.njit(inline=\"always\")\ndef xoshiro_gen_values(N, state):\n '''\n Produce 2 integers between 0 and N and a simple-precision floating-point number.\n N must be a power of two less than 65536. Otherwise results will be biased (ie. not random).\n N should be known at compile time so for this to be fast\n '''\n rand_bits = xoshiro256ss(state)\n a = (rand_bits >> np.uint64(32)) % N\n b = (rand_bits >> np.uint64(48)) % N\n c = np.uint32(rand_bits) * np.float32(2.3283064370807974e-10)\n return (a, b, c)\n\n@nb.njit(nogil=True)\ndef mcmove_generic(lattice, beta, N):\n '''\n Monte Carlo move using Metropolis algorithm.\n N must be a small power of two and known at compile time\n '''\n\n state = xoshiro256ss_init()\n\n lut = np.full(16, np.nan)\n for cost in (0, 4, 8, 12, 16):\n lut[cost] = np.exp(-cost*beta)\n\n for _ in range(N):\n for __ in range(N):\n a, b, c = xoshiro_gen_values(N, state)\n s = lattice[a, b]\n dE = lattice[(a+1)%N,b] + lattice[a,(b+1)%N] + lattice[(a-1)%N,b] + lattice[a,(b-1)%N]\n cost = 2*s*dE\n\n # Branchless computation of s\n tmp = (cost < 0) | (c < lut[cost])\n s *= 1 - tmp * 2\n\n lattice[a, b] = s\n\n return lattice\n\n@nb.njit(nogil=True)\ndef mcmove(lattice, beta, N):\n assert N in [16, 32, 64, 128]\n if N == 16: return mcmove_generic(lattice, beta, 16)\n elif N == 32: return 
mcmove_generic(lattice, beta, 32)\n elif N == 64: return mcmove_generic(lattice, beta, 64)\n elif N == 128: return mcmove_generic(lattice, beta, 128)\n else: raise Exception('Not implemented')\n\n@nb.njit(nogil=True)\ndef calcEnergy(lattice, N):\n '''\n Energy of a given configuration\n '''\n energy = 0 \n # Center\n for i in range(1, len(lattice)-1):\n for j in range(1, len(lattice)-1):\n S = lattice[i,j]\n nb = lattice[i+1, j] + lattice[i,j+1] + lattice[i-1, j] + lattice[i,j-1]\n energy -= nb*S\n # Border\n for i in (0, len(lattice)-1):\n for j in range(1, len(lattice)-1):\n S = lattice[i,j]\n nb = lattice[(i+1)%N, j] + lattice[i,(j+1)%N] + lattice[(i-1)%N, j] + lattice[i,(j-1)%N]\n energy -= nb*S\n for i in range(1, len(lattice)-1):\n for j in (0, len(lattice)-1):\n S = lattice[i,j]\n nb = lattice[(i+1)%N, j] + lattice[i,(j+1)%N] + lattice[(i-1)%N, j] + lattice[i,(j-1)%N]\n energy -= nb*S\n return energy/2\n\n@nb.njit(nogil=True)\ndef calcMag(lattice):\n '''\n Magnetization of a given configuration\n '''\n mag = np.sum(lattice, dtype=np.int32)\n return mag\n\n# [...] Same as in the initial code\n\nI hope there is no error in the code. It is hard to check results with a different RNG.\nThe resulting code is significantly faster on my machine: it compute 4 iterations in 5.3 seconds with N=32 as opposed to 24.1 seconds. The computation is thus 4.5 times faster!\nIt is very hard to optimize the code further using Numba in Python. The computation cannot be efficiently parallelized due to the long dependency chain in mcmove.\n", "Based on the Mr. Richard's excellent answer, I found another optimization. In the ISING_model function, the code can be parallelized because we are doing the same operations independently for every temperature. To achieve this, I simply used parallel = True in the ISING_model nb.jit decorator, and used nb.prange for the temperature loop inside the function, i.e, for temperature in nb.prange(nT).\nThe resulting code is even faster... 
On my machine, with the setting of ISING_model(nT = 64, N = N, burnin = 8 * 10**4, mcSteps = 8 * 10**4) with N=32, without parallelization, it computes in 93.1621 seconds and with parallelization, it computes in 29.9872 seconds. Another 3 times faster optimization! Which is really cool.\nI put the final code here for everyone to use.\n\nfrom timeit import default_timer as timer\nimport matplotlib.pyplot as plt\nimport numba as nb\nimport numpy as np\n\n@nb.njit(nogil=True)\ndef initialstate(N): \n ''' \n Generates a random spin configuration for initial condition in compliance with the Numba JIT compiler.\n '''\n state = np.empty((N,N),dtype=np.int8)\n for i in range(N):\n for j in range(N):\n state[i,j] = 2*np.random.randint(2)-1\n return state\n\n@nb.njit(inline=\"always\")\ndef rol64(x, k):\n return (x << k) | (x >> (64 - k))\n\n@nb.njit(inline=\"always\")\ndef xoshiro256ss_init():\n state = np.empty(4, dtype=np.uint64)\n maxi = (np.uint64(1) << np.uint64(63)) - np.uint64(1)\n for i in range(4):\n state[i] = np.random.randint(0, maxi)\n return state\n\n@nb.njit(inline=\"always\")\ndef xoshiro256ss(state):\n result = rol64(state[1] * np.uint64(5), np.uint64(7)) * np.uint64(9)\n t = state[1] << np.uint64(17)\n state[2] ^= state[0]\n state[3] ^= state[1]\n state[1] ^= state[2]\n state[0] ^= state[3]\n state[2] ^= t\n state[3] = rol64(state[3], np.uint64(45))\n return result\n\n@nb.njit(inline=\"always\")\ndef xoshiro_gen_values(N, state):\n '''\n Produce 2 integers between 0 and N and a simple-precision floating-point number.\n N must be a power of two less than 65536. Otherwise results will be biased (ie. 
not random).\n N should be known at compile time so for this to be fast\n '''\n rand_bits = xoshiro256ss(state)\n a = (rand_bits >> np.uint64(32)) % N\n b = (rand_bits >> np.uint64(48)) % N\n c = np.uint32(rand_bits) * np.float32(2.3283064370807974e-10)\n return (a, b, c)\n\n@nb.njit(nogil=True)\ndef mcmove_generic(lattice, beta, N):\n '''\n Monte Carlo move using Metropolis algorithm.\n N must be a small power of two and known at compile time\n '''\n\n state = xoshiro256ss_init()\n\n lut = np.full(16, np.nan)\n for cost in (0, 4, 8, 12, 16):\n lut[cost] = np.exp(-cost*beta)\n\n for _ in range(N):\n for __ in range(N):\n a, b, c = xoshiro_gen_values(N, state)\n s = lattice[a, b]\n dE = lattice[(a+1)%N,b] + lattice[a,(b+1)%N] + lattice[(a-1)%N,b] + lattice[a,(b-1)%N]\n cost = 2*s*dE\n\n # Branchless computation of s\n tmp = (cost < 0) | (c < lut[cost])\n s *= 1 - tmp * 2\n\n lattice[a, b] = s\n\n return lattice\n\n@nb.njit(nogil=True)\ndef mcmove(lattice, beta, N):\n assert N in [16, 32, 64, 128]\n if N == 16: return mcmove_generic(lattice, beta, 16)\n elif N == 32: return mcmove_generic(lattice, beta, 32)\n elif N == 64: return mcmove_generic(lattice, beta, 64)\n elif N == 128: return mcmove_generic(lattice, beta, 128)\n else: raise Exception('Not implemented')\n\n@nb.njit(nogil=True)\ndef calcEnergy(lattice, N):\n '''\n Energy of a given configuration\n '''\n energy = 0 \n # Center\n for i in range(1, len(lattice)-1):\n for j in range(1, len(lattice)-1):\n S = lattice[i,j]\n nb = lattice[i+1, j] + lattice[i,j+1] + lattice[i-1, j] + lattice[i,j-1]\n energy -= nb*S\n # Border\n for i in (0, len(lattice)-1):\n for j in range(1, len(lattice)-1):\n S = lattice[i,j]\n nb = lattice[(i+1)%N, j] + lattice[i,(j+1)%N] + lattice[(i-1)%N, j] + lattice[i,(j-1)%N]\n energy -= nb*S\n for i in range(1, len(lattice)-1):\n for j in (0, len(lattice)-1):\n S = lattice[i,j]\n nb = lattice[(i+1)%N, j] + lattice[i,(j+1)%N] + lattice[(i-1)%N, j] + lattice[i,(j-1)%N]\n energy -= nb*S\n 
return energy/2\n\n@nb.njit(nogil=True)\ndef calcMag(lattice):\n '''\n Magnetization of a given configuration\n '''\n mag = np.sum(lattice, dtype=np.int32)\n return mag\n\n@nb.njit(nogil=True, parallel=True)\ndef ISING_model(nT, N, burnin, mcSteps):\n\n \"\"\" \n nT : Number of temperature points.\n N : Size of the lattice, N x N.\n burnin : Number of MC sweeps for equilibration (Burn-in).\n mcSteps : Number of MC sweeps for calculation.\n\n \"\"\"\n\n\n T = np.linspace(1.2, 3.8, nT)\n E,M,C,X = np.empty(nT, dtype= np.float32), np.empty(nT, dtype= np.float32), np.empty(nT, dtype= np.float32), np.empty(nT, dtype= np.float32)\n n1, n2 = 1/(mcSteps*N*N), 1/(mcSteps*mcSteps*N*N) \n\n\n for temperature in nb.prange(nT):\n lattice = initialstate(N) # initialise\n\n E1 = M1 = E2 = M2 = 0\n iT = 1/T[temperature]\n iT2= iT*iT\n \n for _ in range(burnin): # equilibrate\n mcmove(lattice, iT, N) # Monte Carlo moves\n\n for _ in range(mcSteps):\n mcmove(lattice, iT, N) \n Ene = calcEnergy(lattice, N) # calculate the Energy\n Mag = calcMag(lattice) # calculate the Magnetisation\n E1 += Ene\n M1 += Mag\n M2 += Mag*Mag \n E2 += Ene*Ene\n\n E[temperature] = n1*E1\n M[temperature] = n1*M1\n C[temperature] = (n1*E2 - n2*E1*E1)*iT2\n X[temperature] = (n1*M2 - n2*M1*M1)*iT\n\n return T,E,M,C,X\n\n\ndef main():\n \n N = 32\n start_time = timer()\n T,E,M,C,X = ISING_model(nT = 64, N = N, burnin = 8 * 10**4, mcSteps = 8 * 10**4)\n end_time = timer()\n\n print(\"Elapsed time: %g seconds\" % (end_time - start_time))\n\n f = plt.figure(figsize=(18, 10)); # \n\n # figure title\n f.suptitle(f\"Ising Model: 2D Lattice\\nSize: {N}x{N}\", fontsize=20)\n\n _ = f.add_subplot(2, 2, 1 )\n plt.plot(T, E, '-o', color='Blue') \n plt.xlabel(\"Temperature (T)\", fontsize=20)\n plt.ylabel(\"Energy \", fontsize=20)\n plt.axis('tight')\n\n\n _ = f.add_subplot(2, 2, 2 )\n plt.plot(T, abs(M), '-o', color='Red')\n plt.xlabel(\"Temperature (T)\", fontsize=20)\n plt.ylabel(\"Magnetization \", fontsize=20)\n 
plt.axis('tight')\n\n\n _ = f.add_subplot(2, 2, 3 )\n plt.plot(T, C, '-o', color='Green')\n plt.xlabel(\"Temperature (T)\", fontsize=20)\n plt.ylabel(\"Specific Heat \", fontsize=20)\n plt.axis('tight')\n\n\n _ = f.add_subplot(2, 2, 4 )\n plt.plot(T, X, '-o', color='Black')\n plt.xlabel(\"Temperature (T)\", fontsize=20)\n plt.ylabel(\"Susceptibility\", fontsize=20)\n plt.axis('tight')\n\n\n plt.show()\n\nif __name__ == '__main__':\n main()\n\n\n" ]
[ 1, 1 ]
[]
[]
[ "montecarlo", "numba", "numpy", "performance", "python" ]
stackoverflow_0074660595_montecarlo_numba_numpy_performance_python.txt
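The record above draws two lattice indices and one float from a single 64-bit RNG draw in `xoshiro_gen_values`. As a side note, here is a minimal pure-Python sketch of that bit-slicing idea, independent of numba and numpy (the constant 2.3283064370807974e-10 in the original is approximately 2**-32; N is assumed to be a power of two, so `% n` acts as a mask):

```python
def split_rand_bits(rand_bits, n):
    """Split one 64-bit random draw into two indices in [0, n) and a float in [0, 1)."""
    a = (rand_bits >> 32) % n                 # bits 32..63 feed the first index
    b = (rand_bits >> 48) % n                 # bits 48..63 feed the second index
    c = (rand_bits & 0xFFFFFFFF) * 2.0**-32   # low 32 bits become a uniform float
    return a, b, c

a, b, c = split_rand_bits(0x123456789ABCDEF0, 16)
print(a, b, c)  # a=8, b=4, c is a float in [0, 1)
```

This is only an illustration of the slicing; the original answer keeps everything in numpy integer types so numba can compile it.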
Q: Php variable from .txt I want to create a .txt file with the following: $country = "china"; $valuta = "dollar"; And then I want to create a .php file with some text like: This is $country where they use $valuta How do I achieve this in 2 different files Maybe include once or require once
Php variable from .txt
I want to create a .txt file with the following: $country = "china"; $valuta = "dollar"; And then I want to create a .php file with some text like: This is $country where they use $valuta How do I achieve this in 2 different files Maybe include once or require once
[]
[]
[ "why not just store in .env file?\ntechnically you could create a .inc file with opening <?php and declare your variable inside. Then include this file in your script.\nif you insist on using .txt file, serialize your value inside the txt file, then inside your php script use file_get_contents() php function and unserialize the result.\n" ]
[ -1 ]
[ "php", "txt" ]
stackoverflow_0074675806_php_txt.txt
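The .env suggestion above boils down to: keep name/value pairs in a plain text file and parse them at startup (in PHP that would be parse_ini_file, or file_get_contents plus unserialize as the non-answer says). Purely as an illustration of that parsing step — written in Python rather than PHP, so the helper name and the exact line format are this sketch's own assumptions:

```python
import io

def load_vars(stream):
    """Parse lines shaped like: $country = "china"; into a dict."""
    result = {}
    for line in stream:
        line = line.strip().rstrip(";")
        if "=" not in line:
            continue  # skip blanks and malformed lines
        name, _, value = line.partition("=")
        result[name.strip().lstrip("$")] = value.strip().strip('"')
    return result

config = load_vars(io.StringIO('$country = "china";\n$valuta = "dollar";\n'))
print("This is %(country)s where they use %(valuta)s" % config)
# This is china where they use dollar
```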
Q: JQ usage to filter content I want to filter and get the results of each index that its name starts with "abc" and the state is "DISABLED". My file looks like this: ➜ ~ cat test.json | head { "Rules": [ { "Name": "abcd", "Arn": "arn:aws:events:eu-west-2:XXXXXX:rule/abcd", "State": "ENABLED", "ScheduleExpression": "rate(6 hours)", "EventBusName": "default" }, { "Name": "abcxxx", "Arn": "arn:aws:events:eu-west-2:XXXXXX:rule/abcxxx", "State": "DISABLED", "ScheduleExpression": "rate(12 hours)", "EventBusName": "default" } ] } I tried to use this command: cat test.json | jq -r '.[] | .[] | select(.Name | startswith("abc"))' And it's giving me whatever starts with "abc" which is good but I want it to be also .State == "DISABLED" and I want the output to be regular and not JSON. (I need to get the names of whatever starts with abc and its state is DISABLED into a file) A: Adding a second condition with the and operator will allow you to filter with your 2 conditions: cat test.json | jq -r '.[] | .[] | select((.Name | startswith("abc")) and .State == "DISABLED") Then to extract Name field cat test.json | jq -r '.[] | .[] | select((.Name | startswith("abc")) and .State == "DISABLED") | .Name' A: You can either combine two selects or combine your conditions with and: .Rules[] | select(.State == "DISABLED" and (.Name | startswith("abc"))) | .Name .Rules[] | select(.State == "DISABLED") | .Name | select(startswith("abc"))
JQ usage to filter content
I want to filter and get the results of each index that its name starts with "abc" and the state is "DISABLED". My file looks like this: ➜ ~ cat test.json | head { "Rules": [ { "Name": "abcd", "Arn": "arn:aws:events:eu-west-2:XXXXXX:rule/abcd", "State": "ENABLED", "ScheduleExpression": "rate(6 hours)", "EventBusName": "default" }, { "Name": "abcxxx", "Arn": "arn:aws:events:eu-west-2:XXXXXX:rule/abcxxx", "State": "DISABLED", "ScheduleExpression": "rate(12 hours)", "EventBusName": "default" } ] } I tried to use this command: cat test.json | jq -r '.[] | .[] | select(.Name | startswith("abc"))' And it's giving me whatever starts with "abc" which is good but I want it to be also .State == "DISABLED" and I want the output to be regular and not JSON. (I need to get the names of whatever starts with abc and its state is DISABLED into a file)
[ "Adding a second condition with the and operator will allow you to filter with your 2 conditions:\ncat test.json | jq -r '.[] | .[] | select((.Name | startswith(\"abc\")) and .State == \"DISABLED\") \n\nThen to extract Name field\ncat test.json | jq -r '.[] | .[] | select((.Name | startswith(\"abc\")) and .State == \"DISABLED\") | .Name'\n\n", "You can either combine two selects or combine your conditions with and:\n.Rules[] | select(.State == \"DISABLED\" and (.Name | startswith(\"abc\"))) | .Name\n\n.Rules[] | select(.State == \"DISABLED\") | .Name | select(startswith(\"abc\"))\n\n" ]
[ 2, 1 ]
[]
[]
[ "jq", "linux" ]
stackoverflow_0074674700_jq_linux.txt
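For reference, the combined filter from the answers above — Name starts with "abc" and State equals "DISABLED", then emit Name — can be mirrored without jq using Python's json module (an equivalent sketch, not one of the original answers):

```python
import json

doc = json.loads('''{
  "Rules": [
    {"Name": "abcd",   "State": "ENABLED"},
    {"Name": "abcxxx", "State": "DISABLED"}
  ]
}''')

# same logic as: .Rules[] | select((.Name | startswith("abc")) and .State == "DISABLED") | .Name
names = [rule["Name"] for rule in doc["Rules"]
         if rule["Name"].startswith("abc") and rule["State"] == "DISABLED"]
print("\n".join(names))  # abcxxx
```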
Q: Django Social Auth (Google OAuth 2) - AuthCancelled even when "Allow access" is clicked I am successfully using Django Social Auth to login users with Facebook. I'm trying to implement Google OAuth2. I have taken all the steps I know of to integrate it, but I'm getting an AuthCanceled at /complete/google-oauth2/ exception even when I click Allow access. Here's what I did to integrate it: Went to the Google API console, registered my application and got my Client ID and Client Secret. I added the correct return url http://mysite/complete/google-oauth2, before I did that I got "unauthorized redirect url" from Google. In my settings.py I have GOOGLE_OAUTH2_CLIENT_ID = '*****' GOOGLE_OAUTH2_CLIENT_SECRET = '*****' AUTHENTICATION_BACKENDS = ( 'social_auth.backends.google.GoogleOAuth2Backend', 'social_auth.backends.facebook.FacebookBackend', 'django.contrib.auth.backends.ModelBackend', ) I guess I have the other relevant settings right, as Facebook login works, and I'm able to show the window, view my app name in the Google authorization window, etc. What am I doing wrong? How can I fix/debug that AuthCancelled exception? A: The problem appears to be caused by a strange Google behavior: when I initially created my client ID, google gave me this client ID: 193271111225.apps.googleusercontent.com. After a conversation with the library author, he told me that his ids were much longer, so I created a new client ID with the exact same settings. The new ID generated was 193271111225-cvltnldi4hh5lmo784v2ir451b3rij7e.apps.googleusercontent.com and with it, it worked. Both IDs look the same in the console, but only the latter works. A: Use SocialAuthExceptionMiddleware middleware for proper handling social_auth exception MIDDLEWARE_CLASSES = ( ..., 'social_auth.middleware.SocialAuthExceptionMiddleware' ) A: I also faced the same error when I tried the social-auth-app-django library for social authentication. And I was receiving the following error. 
AuthCanceled at /social-auth/complete/google-oauth2/ Two silly mistakes lead to this error. Not defining the redirect Authorized redirect URIs in the Client ID for the Web application.(set it to http://localhost:8000/social-auth/complete/google-oauth2/) The spelling mistake for SOCIAL_AUTH_GOOGLE_OAUTH2_SECRET So if you are receiving the same error, make sure you define the redirect authorize URIs and check your spelling.
Django Social Auth (Google OAuth 2) - AuthCancelled even when "Allow access" is clicked
I am successfully using Django Social Auth to login users with Facebook. I'm trying to implement Google OAuth2. I have taken all the steps I know of to integrate it, but I'm getting an AuthCanceled at /complete/google-oauth2/ exception even when I click Allow access. Here's what I did to integrate it: Went to the Google API console, registered my application and got my Client ID and Client Secret. I added the correct return url http://mysite/complete/google-oauth2, before I did that I got "unauthorized redirect url" from Google. In my settings.py I have GOOGLE_OAUTH2_CLIENT_ID = '*****' GOOGLE_OAUTH2_CLIENT_SECRET = '*****' AUTHENTICATION_BACKENDS = ( 'social_auth.backends.google.GoogleOAuth2Backend', 'social_auth.backends.facebook.FacebookBackend', 'django.contrib.auth.backends.ModelBackend', ) I guess I have the other relevant settings right, as Facebook login works, and I'm able to show the window, view my app name in the Google authorization window, etc. What am I doing wrong? How can I fix/debug that AuthCancelled exception?
[ "The problem appears to be caused by a strange Google behavior: when I initially created my client ID, google gave me this client ID: 193271111225.apps.googleusercontent.com. After a conversation with the library author, he told me that his ids were much longer, so I created a new client ID with the exact same settings. The new ID generated was 193271111225-cvltnldi4hh5lmo784v2ir451b3rij7e.apps.googleusercontent.com and with it, it worked. Both IDs look the same in the console, but only the latter works.\n", "Use SocialAuthExceptionMiddleware middleware for proper handling social_auth exception\nMIDDLEWARE_CLASSES = (\n ...,\n 'social_auth.middleware.SocialAuthExceptionMiddleware'\n)\n\n", "I also faced the same error when I tried the social-auth-app-django library for social authentication.\nAnd I was receiving the following error.\nAuthCanceled at /social-auth/complete/google-oauth2/\n\nTwo silly mistakes lead to this error.\n\nNot defining the redirect Authorized redirect URIs in the Client ID for the Web application.(set it to http://localhost:8000/social-auth/complete/google-oauth2/)\nThe spelling mistake for SOCIAL_AUTH_GOOGLE_OAUTH2_SECRET\n\nSo if you are receiving the same error, make sure you define the redirect authorize URIs and check your spelling.\n" ]
[ 3, 1, 0 ]
[]
[]
[ "authentication", "django", "django_socialauth" ]
stackoverflow_0016093305_authentication_django_django_socialauth.txt
Q: How to add a listener to know when the fragment changes in the FragmentContainerView? Inside the activity_main file, there is a FragmentContainerView. How to add a listener to know when the fragment changes in the FragmentContainerView? <androidx.fragment.app.FragmentContainerView android:id="@+id/container" android:layout_width="match_parent" android:layout_height="match_parent" android:layout_weight="1" /> A: Listen for the fragment attach event like this: navHostFragment.childFragmentManager.addFragmentOnAttachListener { _, fragment -> Log.i(TAG, "onCreate: ${fragment.javaClass.canonicalName}") } BUT, navigationUp does not invoke the attach listener.
How to add a listener to know when the fragment changes in the FragmentContainerView?
Inside the activity_main file, there is a FragmentContainerView. How to add a listener to know when the fragment changes in the FragmentContainerView? <androidx.fragment.app.FragmentContainerView android:id="@+id/container" android:layout_width="match_parent" android:layout_height="match_parent" android:layout_weight="1" />
[ "Listen for the fragment attach event like this:\nnavHostFragment.childFragmentManager.addFragmentOnAttachListener { _, fragment ->\n Log.i(TAG, \"onCreate: ${fragment.javaClass.canonicalName}\")\n}\n\nBUT, navigationUp does not invoke the attach listener.\n" ]
[ 0 ]
[]
[]
[ "android", "android_fragments" ]
stackoverflow_0070652166_android_android_fragments.txt
Q: How to add WPF UserControl onto a Form in a designer? Exception: Microsoft.DotNet.DesignTools.Client.DesignToolsServerException: Component of type UserControl1 could not be created. Make sure the type implements IComponent and provides an appropriate public constructor. Appropriate constructors either take no parameters or take a single IContainer parameter. To reproduce the problem: https://github.com/hovek/WpfApp1 and try adding UserControl1 to Form1, from the Toolbox. VS Version 17.4.2 A: I guess it's not possible to design on a Form: https://learn.microsoft.com/en-us/dotnet/api/system.windows.forms.integration.elementhost?view=windowsdesktop-7.0
How to add WPF UserControl onto a Form in a designer?
Exception: Microsoft.DotNet.DesignTools.Client.DesignToolsServerException: Component of type UserControl1 could not be created. Make sure the type implements IComponent and provides an appropriate public constructor. Appropriate constructors either take no parameters or take a single IContainer parameter. To reproduce the problem: https://github.com/hovek/WpfApp1 and try adding UserControl1 to Form1, from the Toolbox. VS Version 17.4.2
[ "I guess it's not possible to design on a Form: https://learn.microsoft.com/en-us/dotnet/api/system.windows.forms.integration.elementhost?view=windowsdesktop-7.0\n" ]
[ 0 ]
[]
[]
[ ".net_7.0", "visual_studio_2022", "winforms", "wpf" ]
stackoverflow_0074677262_.net_7.0_visual_studio_2022_winforms_wpf.txt
Q: nginx: Index file outside of the document root I'm trying to serve a directory of user-provided files with nginx, but with an index file that is outside of the document root. The reason I don't want to locate the index file inside the directory is because the directory is meant for users to drop their stuff in there. Nginx's index directive seems to only work for files inside the document root; the documentation says that the index can be an "absolute path", but my experiments tell that this is only relative to the document root. So, I tried to serve another index location with location =/index.html { alias /path/to/index.html }, and this manages to work when /index.html is directly requested, but it doesn't work if / is requested (403 with log "directory index of "/srv/docroot/" is forbidden"). It starts to work, though, if I create an empty file to /srv/docroot/index.html; then /path/to/index.html is served at /. So it seems that Nginx Checks if /srv/docroot/index.html file exists in the filesystem at the /'s location. If it exists, it does an internal redirect, and serves the /path/to/index.html at /index.html location. What is the correct way to serve an index file outside of the document root? A: index and try_files directives, provided with absolute paths, seem only to be able to point to the files inside the current document root. (With relative ../ paths you can point outside of the document root, but that's not ideal if you want to point to an absolute path in the filesystem.) It seems that only the alias directive can point outside the document root. I was able to get my setup to work with: ... location / { try_files $uri /index.html; } location =/index.html { alias /path/to/index.html; } ... This doesn't strictly answer to the question in the sense, that in this case, index.html isn't shown only when / is requested, but always when a matching file isn't found. 
I'm happy with this solution, but it might make sense in some cases to separate the 404 error.
nginx: Index file outside of the document root
I'm trying to serve a directory of user-provided files with nginx, but with an index file that is outside of the document root. The reason I don't want to locate the index file inside the directory is because the directory is meant for users to drop their stuff in there. Nginx's index directive seems to only work for files inside the document root; the documentation says that the index can be an "absolute path", but my experiments tell that this is only relative to the document root. So, I tried to serve another index location with location =/index.html { alias /path/to/index.html }, and this manages to work when /index.html is directly requested, but it doesn't work if / is requested (403 with log "directory index of "/srv/docroot/" is forbidden"). It starts to work, though, if I create an empty file to /srv/docroot/index.html; then /path/to/index.html is served at /. So it seems that Nginx Checks if /srv/docroot/index.html file exists in the filesystem at the /'s location. If it exists, it does an internal redirect, and serves the /path/to/index.html at /index.html location. What is the correct way to serve an index file outside of the document root?
[ "index and try_files directives, provided with absolute paths, seem only to be able to point to the files inside the current document root. (With relative ../ paths you can point outside of the document root, but that's not ideal if you want to point to an absolute path in the filesystem.)\nIt seems that only the alias directive can point outside the document root. I was able to get my setup to work with:\n...\n location / {\n try_files $uri /index.html;\n }\n location =/index.html {\n alias /path/to/index.html;\n }\n...\n\nThis doesn't strictly answer to the question in the sense, that in this case, index.html isn't shown only when / is requested, but always when a matching file isn't found. I'm happy with this solution, but it might make sense in some cases to separate the 404 error.\n" ]
[ 0 ]
[]
[]
[ "nginx" ]
stackoverflow_0074677084_nginx.txt
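Following up on the answer's closing note about separating the 404 case: one untested sketch (paths are placeholders) rewrites only the bare root URL to the aliased index, so other missing files still return a plain 404:

```nginx
location = / {
    # only the bare root is sent to the out-of-root index
    rewrite ^ /index.html last;
}
location = /index.html {
    alias /path/to/index.html;
}
location / {
    # every other URI must exist under the document root
    try_files $uri =404;
}
```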
Q: How can I pass a parameter to my validator without breaking DI registration? (FluentValidation) I need to pass a parameter from a parent validator to a child validator, by using .SetValidator() but having trouble with registering a validator to the DI container using FluentValidation's automatic registration, when the validator is parameterized. Parent Validator: public class FooValidator: AbstractValidator<Foo> { public FooValidator() { RuleFor(foo => foo.Bar) .SetValidator(foo => new BarValidator(foo.SomeStringValue)) .When(foo => foo.Bar != null); } } Child Validator: public class BarValidator: AbstractValidator<Bar> { public BarValidator(string someStringValue) { RuleFor(bar => bar.Baz) .Must(BeValid(bar.Baz, someStringValue) .When(bar => bar.Baz != null); } private static bool BeValid(Baz baz, string someStringValue) { return baz == someStringValue; } } DI registration services.AddValidatorsFromAssembly(Assembly.GetExecutingAssembly(), ServiceLifetime.Transient); Error message System.AggregateException: Some services are not able to be constructed (Error while validating the service descriptor 'ServiceType: FluentValidation.IValidator`1[Domain.ValueObjects.Bar] Lifetime: Transient ImplementationType: Application.Common.Validators.BarValidator': Unable to resolve service for type 'System.String' while attempting to activate 'Application.Common.Validators.BarValidator'.) ---> System.InvalidOperationException: Unable to resolve service for type 'System.String' while attempting to activate 'Application.Common.Validators.BarValidator'. 
at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteFactory.CreateArgumentCallSites(Type implementationType, CallSiteChain callSiteChain, ParameterInfo[] parameters, Boolean throwIfCallSiteNotFound) at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteFactory.CreateConstructorCallSite(ResultCache lifetime, Type serviceType, Type implementationType, CallSiteChain callSiteChain) at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteFactory.TryCreateExact(ServiceDescriptor descriptor, Type serviceType, CallSiteChain callSiteChain, Int32 slot) at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteFactory.GetCallSite(ServiceDescriptor serviceDescriptor, CallSiteChain callSiteChain) at Microsoft.Extensions.DependencyInjection.ServiceProvider.ValidateService(ServiceDescriptor descriptor) --- End of inner exception stack trace --- at Microsoft.Extensions.DependencyInjection.ServiceProvider.ValidateService(ServiceDescriptor descriptor) at Microsoft.Extensions.DependencyInjection.ServiceProvider..ctor(ICollection`1 serviceDescriptors, ServiceProviderOptions options) --- End of inner exception stack trace --- at Microsoft.Extensions.DependencyInjection.ServiceProvider..ctor(ICollection`1 serviceDescriptors, ServiceProviderOptions options) at Microsoft.Extensions.DependencyInjection.ServiceCollectionContainerBuilderExtensions.BuildServiceProvider(IServiceCollection services, ServiceProviderOptions options) at Microsoft.Extensions.Hosting.HostApplicationBuilder.Build() at Microsoft.AspNetCore.Builder.WebApplicationBuilder.Build() at Program.<Main>$(String[] args) in C:\Program.cs:line 9 .NET 7 FluentValidation v11.2.2 Microsoft.Extensions.DependencyInjection Any ideas? Have tried circumventing the use of automatic registration by filtering it out and registering it manually, but this changes nothing. 
_ = services.AddValidatorsFromAssembly(Assembly.GetExecutingAssembly(), ServiceLifetime.Transient, filter => filter.ValidatorType != typeof(BarValidator)); _ = services.AddTransient<IValidator<Bar>>(_ => new BarValidator("")); A: You've got yourself a chicken and egg problem. The AbstractValidator<T> can't have parameters in its ctor, without them also being added to the DI container. At the time the container validates itself, it doesn't know how to instantiate an instance of AbstractValidator<T> because it can't resolve its string dependency. AbstractValidator<T> isn't the right tool for the job. Have a look at Reusable Property Validators. Using PropertyValidator<Foo, Bar> you can access SomeStringValue from ValidationContext<Foo>. Child Validator: public class BarValidator : PropertyValidator<Foo, Bar> { public override bool IsValid(ValidationContext<Foo> context, Bar bar) { if (bar != null && bar.Baz != context.InstanceToValidate.SomeStringValue) { context.MessageFormatter.AppendArgument(nameof(Foo.SomeStringValue), context.InstanceToValidate.SomeStringValue); return false; } return true; } public override string Name => "BazValidator"; protected override string GetDefaultMessageTemplate(string errorCode) => "Baz must be equal to {SomeStringValue}."; } Parent Validator: public class FooValidator : AbstractValidator<Foo> { public FooValidator() { RuleFor(foo => foo.Bar).SetValidator(new BarValidator()); } } A: You can try using Fluent Validators .WithState method and it will pass an additional object: RuleFor(foo => foo.Bar) .SetValidator(foo => new BarValidator()) .When(foo => foo.Bar != null) .WithState(foo => foo.SomeStringValue); And then the child validator will look something like this where you won't be passing a parameter to BarValidator and we will use GetState method on the Validation Context object: public class BarValidator: AbstractValidator<Bar> { public BarValidator() { RuleFor(bar => bar.Baz) .Must(BeValid) .When(bar => bar.Baz != null); } 
private bool BeValid(Baz baz, ValidationContext<Bar> context) { string someStringValue = context.GetState<string>(); return baz == someStringValue; } } A: The easiest way to achieve this is to create a factory that will create the validator with the required parameters. public class FooValidator: AbstractValidator<Foo> { public FooValidator() { RuleFor(foo => foo.Bar) .SetValidator(foo => MyValidatorFactory.CreateBarValidator(foo.SomeStringValue)) .When(foo => foo.Bar != null); } } public static class MyValidatorFactory { public static IValidator<Bar> CreateBarValidator(string someStringValue) { return new BarValidator(someStringValue); } } public class BarValidator: AbstractValidator<Bar> { public BarValidator(string someStringValue) { RuleFor(bar => bar.Baz) .Must(BeValid(bar.Baz, someStringValue) .When(bar => bar.Baz != null); } private static bool BeValid(Baz baz, string someStringValue) { return baz == someStringValue; } } Finally, make sure you add the factory to the DI container. services.AddTransient<Func<string, IValidator<Bar>>>(MyValidatorFactory.CreateBarValidator);
How can I pass a parameter to my validator without breaking DI registration? (FluentValidation)
I need to pass a parameter from a parent validator to a child validator, by using .SetValidator() but having trouble with registering a validator to the DI container using FluentValidation's automatic registration, when the validator is parameterized. Parent Validator: public class FooValidator: AbstractValidator<Foo> { public FooValidator() { RuleFor(foo => foo.Bar) .SetValidator(foo => new BarValidator(foo.SomeStringValue)) .When(foo => foo.Bar != null); } } Child Validator: public class BarValidator: AbstractValidator<Bar> { public BarValidator(string someStringValue) { RuleFor(bar => bar.Baz) .Must(BeValid(bar.Baz, someStringValue) .When(bar => bar.Baz != null); } private static bool BeValid(Baz baz, string someStringValue) { return baz == someStringValue; } } DI registration services.AddValidatorsFromAssembly(Assembly.GetExecutingAssembly(), ServiceLifetime.Transient); Error message System.AggregateException: Some services are not able to be constructed (Error while validating the service descriptor 'ServiceType: FluentValidation.IValidator`1[Domain.ValueObjects.Bar] Lifetime: Transient ImplementationType: Application.Common.Validators.BarValidator': Unable to resolve service for type 'System.String' while attempting to activate 'Application.Common.Validators.BarValidator'.) ---> System.InvalidOperationException: Unable to resolve service for type 'System.String' while attempting to activate 'Application.Common.Validators.BarValidator'. 
at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteFactory.CreateArgumentCallSites(Type implementationType, CallSiteChain callSiteChain, ParameterInfo[] parameters, Boolean throwIfCallSiteNotFound) at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteFactory.CreateConstructorCallSite(ResultCache lifetime, Type serviceType, Type implementationType, CallSiteChain callSiteChain) at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteFactory.TryCreateExact(ServiceDescriptor descriptor, Type serviceType, CallSiteChain callSiteChain, Int32 slot) at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteFactory.GetCallSite(ServiceDescriptor serviceDescriptor, CallSiteChain callSiteChain) at Microsoft.Extensions.DependencyInjection.ServiceProvider.ValidateService(ServiceDescriptor descriptor) --- End of inner exception stack trace --- at Microsoft.Extensions.DependencyInjection.ServiceProvider.ValidateService(ServiceDescriptor descriptor) at Microsoft.Extensions.DependencyInjection.ServiceProvider..ctor(ICollection`1 serviceDescriptors, ServiceProviderOptions options) --- End of inner exception stack trace --- at Microsoft.Extensions.DependencyInjection.ServiceProvider..ctor(ICollection`1 serviceDescriptors, ServiceProviderOptions options) at Microsoft.Extensions.DependencyInjection.ServiceCollectionContainerBuilderExtensions.BuildServiceProvider(IServiceCollection services, ServiceProviderOptions options) at Microsoft.Extensions.Hosting.HostApplicationBuilder.Build() at Microsoft.AspNetCore.Builder.WebApplicationBuilder.Build() at Program.<Main>$(String[] args) in C:\Program.cs:line 9 .NET 7 FluentValidation v11.2.2 Microsoft.Extensions.DependencyInjection Any ideas? Have tried circumventing the use of automatic registration by filtering it out and registering it manually, but this changes nothing. 
_ = services.AddValidatorsFromAssembly(Assembly.GetExecutingAssembly(), ServiceLifetime.Transient, filter => filter.ValidatorType != typeof(BarValidator)); _ = services.AddTransient<IValidator<Bar>>(_ => new BarValidator(""));
[ "You've got yourself a chicken and egg problem. The AbstractValidator<T> can't have parameters in its ctor, without them also being added to the DI container. At the time the container validates itself, it doesn't know how to instantiate an instance of AbstractValidator<T> because it can't resolve its string dependency.\nAbstractValidator<T> isn't the right tool for the job. Have a look at Reusable Property Validators. Using PropertyValidator<Foo, Bar> you can access SomeStringValue from ValidationContext<Foo>.\nChild Validator:\npublic class BarValidator : PropertyValidator<Foo, Bar>\n{\n public override bool IsValid(ValidationContext<Foo> context, Bar bar)\n {\n if (bar != null && bar.Baz != context.InstanceToValidate.SomeStringValue)\n {\n context.MessageFormatter.AppendArgument(nameof(Foo.SomeStringValue), context.InstanceToValidate.SomeStringValue);\n return false;\n }\n\n return true;\n }\n\n public override string Name => \"BazValidator\";\n\n protected override string GetDefaultMessageTemplate(string errorCode)\n => \"Baz must be equal to {SomeStringValue}.\";\n}\n\nParent Validator:\npublic class FooValidator : AbstractValidator<Foo>\n{\n public FooValidator()\n {\n RuleFor(foo => foo.Bar).SetValidator(new BarValidator());\n }\n}\n\n", "You can try using Fluent Validators .WithState method and it will pass an additional object:\n RuleFor(foo => foo.Bar)\n .SetValidator(foo => new BarValidator())\n .When(foo => foo.Bar != null)\n .WithState(foo => foo.SomeStringValue);\n\nAnd then the child validator will look something like this where you won't be passing a parameter to BarValidator and we will use GetState method on the Validation Context object:\n public class BarValidator: AbstractValidator<Bar>\n {\n public BarValidator()\n{\n RuleFor(bar => bar.Baz)\n .Must(BeValid)\n .When(bar => bar.Baz != null);\n}\n\nprivate bool BeValid(Baz baz, ValidationContext<Bar> context)\n{\nstring someStringValue = context.GetState<string>();\nreturn baz == 
someStringValue; \n }\n\n}\n", "The easiest way to achieve this is to create a factory that will create the validator with the required parameters.\npublic class FooValidator: AbstractValidator<Foo>\n{\n public FooValidator()\n {\n RuleFor(foo => foo.Bar)\n .SetValidator(foo => MyValidatorFactory.CreateBarValidator(foo.SomeStringValue))\n .When(foo => foo.Bar != null);\n }\n}\n\npublic static class MyValidatorFactory\n{\n public static IValidator<Bar> CreateBarValidator(string someStringValue)\n {\n return new BarValidator(someStringValue);\n }\n}\n\npublic class BarValidator: AbstractValidator<Bar>\n{\n public BarValidator(string someStringValue)\n {\n RuleFor(bar => bar.Baz)\n .Must(BeValid(bar.Baz, someStringValue)\n .When(bar => bar.Baz != null);\n }\n \n private static bool BeValid(Baz baz, string someStringValue)\n {\n return baz == someStringValue; \n }\n\n}\n\nFinally, make sure you add the factory to the DI container.\n services.AddTransient<Func<string, IValidator<Bar>>>(MyValidatorFactory.CreateBarValidator);\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ ".net_7.0", "c#", "dependency_injection", "fluentvalidation" ]
stackoverflow_0074575086_.net_7.0_c#_dependency_injection_fluentvalidation.txt
Q: Detected Vetur enabled. Consider disabling Vetur and use @volar-plugins/vetur instead VSCode is giving the following warning Detected Vetur enabled. Consider disabling Vetur and use @volar-plugins/vetur instead. What is this for? What do I need to do to turn it off A: TLDR: uninstall Vetur and keep Volar (official recommended extension). Volar is the official VScode plugin to use for a few months now, hence why the core team is pushing the officially supported one. It was initially meant for Vue3 but it also works with Vue2. As explained here: https://vuejs.org/guide/typescript/overview.html#ide-support Also this github issue: https://github.com/vuejs/vetur/issues/3476#issue-1300202219 Official page of the project btw A: In your VScode extensions, find 'Vetur' and Disable. Reload VScode or you can use 'Vetur' in another way @volar-plugins/vetur. A: disable the plugin vetur by Pine wu, or uninstall it. just keep volar
Detected Vetur enabled. Consider disabling Vetur and use @volar-plugins/vetur instead
VSCode is giving the following warning Detected Vetur enabled. Consider disabling Vetur and use @volar-plugins/vetur instead. What is this for? What do I need to do to turn it off
[ "TLDR: uninstall Vetur and keep Volar (official recommended extension).\n\nVolar is the official VScode plugin to use for a few months now, hence why the core team is pushing the officially supported one.\nIt was initially meant for Vue3 but it also works with Vue2.\nAs explained here: https://vuejs.org/guide/typescript/overview.html#ide-support\nAlso this github issue: https://github.com/vuejs/vetur/issues/3476#issue-1300202219\n\nOfficial page of the project btw\n\n", "In your VScode extensions, find 'Vetur' and Disable. Reload VScode\nor you can use 'Vetur' in another way @volar-plugins/vetur.\n", "disable the plugin vetur by Pine wu, or uninstall it.\njust keep volar\n" ]
[ 2, 0, 0 ]
[]
[]
[ "vetur", "visual_studio_code", "volar", "vue.js" ]
stackoverflow_0074405512_vetur_visual_studio_code_volar_vue.js.txt
Q: Get the hostname of the server in get-wmiobject -class "Win32_LogicalDisk" I am using the following to get if the drive has less than specified space: $pc = Get-Content "C:\Users\user\Desktop\computers.txt" $disks = get-wmiobject -class "Win32_LogicalDisk" -namespace "root\CIMV2" -computername $pc $results = foreach ($disk in $disks) { if ($disk.Size -gt 0) { $size = [math]::round($disk.Size/1GB, 0) $free = [math]::round($disk.FreeSpace/1GB, 0) [PSCustomObject]@{ Drive = $disk.Name "Free" = "{1:P0}" -f $free, ($free/$size) } } } $results | Where-Object{( $_."Drive" -ne "Z:") -and ($_."Drive" -ne "Y:") } | Where-Object{ $_."Free" -le 80 } | Out-GridView How to make this print the server name next to the disks found? NOTE- I cannot use any other method except the above. A: You would just need to include the PSComputerName property to the output objects: $results = foreach ($disk in $disks) { if ($disk.Size -gt 0) { $size = [math]::round($disk.Size / 1GB, 0) $free = [math]::round($disk.FreeSpace / 1GB, 0) [PSCustomObject]@{ ComputerName = $disk.PSComputerName Drive = $disk.Name Free = "{0:P0}" -f ($free / $size) } } }
Get the hostname of the server in get-wmiobject -class "Win32_LogicalDisk"
I am using the following to get if the drive has less than specified space: $pc = Get-Content "C:\Users\user\Desktop\computers.txt" $disks = get-wmiobject -class "Win32_LogicalDisk" -namespace "root\CIMV2" -computername $pc $results = foreach ($disk in $disks) { if ($disk.Size -gt 0) { $size = [math]::round($disk.Size/1GB, 0) $free = [math]::round($disk.FreeSpace/1GB, 0) [PSCustomObject]@{ Drive = $disk.Name "Free" = "{1:P0}" -f $free, ($free/$size) } } } $results | Where-Object{( $_."Drive" -ne "Z:") -and ($_."Drive" -ne "Y:") } | Where-Object{ $_."Free" -le 80 } | Out-GridView How to make this print the server name next to the disks found? NOTE- I cannot use any other method except the above.
[ "You would just need to include the PSComputerName property to the output objects:\n$results = foreach ($disk in $disks)\n{ \n if ($disk.Size -gt 0)\n {\n $size = [math]::round($disk.Size / 1GB, 0)\n $free = [math]::round($disk.FreeSpace / 1GB, 0)\n\n [PSCustomObject]@{\n ComputerName = $disk.PSComputerName\n Drive = $disk.Name\n Free = \"{0:P0}\" -f ($free / $size)\n }\n }\n}\n\n" ]
[ 1 ]
[]
[]
[ "powershell" ]
stackoverflow_0074677750_powershell.txt
Q: How to get data from django orm inside an asynchronous function? I need to retrieve data from the database inside an asynchronous function. If I retrieve only one object by executing e.g: users = await sync_to_async(Creators.objects.first)() everything works as it should. But if the response contains multiple objects, I get an error. @sync_to_async def get_creators(): return Creators.objects.all() async def CreateBotAll(): users = await get_creators() for user in users: print(user) Tracing: Traceback (most recent call last): File "/home/django/django_venv/lib/python3.8/site- packages/django/core/handlers/exception.py", line 47, in inner response = get_response(request) File "/home/django/django_venv/lib/python3.8/site-packages/django/core/handlers/base.py", line 181, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/home/django/django_venv/src/reseller/views.py", line 29, in test asyncio.run(TgAssistant.CreateBotAll()) File "/usr/lib/python3.8/asyncio/runners.py", line 44, in run return loop.run_until_complete(main) File "/usr/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete return future.result() File "/home/django/django_venv/src/reseller/TgAssistant.py", line 84, in CreateBotAll for user in users: File "/home/django/django_venv/lib/python3.8/site-packages/django/db/models/query.py", line 280, in __iter__ self._fetch_all() File "/home/django/django_venv/lib/python3.8/site-packages/django/db/models/query.py", line 1324, in _fetch_all self._result_cache = list(self._iterable_class(self)) File "/home/django/django_venv/lib/python3.8/site-packages/django/db/models/query.py", line 51, in __iter__ results = compiler.execute_sql(chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size) File "/home/django/django_venv/lib/python3.8/site-packages/django/db/models/sql/compiler.py", line 1173, in execute_sql cursor = self.connection.cursor() File 
"/home/django/django_venv/lib/python3.8/site-packages/django/utils/asyncio.py", line 31, in inner raise SynchronousOnlyOperation(message) django.core.exceptions.SynchronousOnlyOperation: You cannot call this from an async context - use a thread or sync_to_async. I made it work that way: @sync_to_async def get_creators(): sql = Creators.objects.all() x = [creator for creator in sql] return x Isn't there a more elegant solution? A: You may try wrap get_creators response into list: @sync_to_async def get_creators(): return list(Creators.objects.all()) A: Since Django 4.1 you can do the following: async for creator in Creators.objects.all(): print(creator) And you can replace this with filter and the like as long as the expression doesn't cause the query to be evaluated. There are also async versions of get, delete etc prefixed with an 'a' so your users = await sync_to_async(Creators.objects.first)() can be replaced with: user = await Creators.object.afirst()
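The sync_to_async fix above boils down to one idea: force the lazy queryset to evaluate inside a worker thread, then await the result. A framework-free sketch of that pattern using only the standard library (`asyncio.to_thread`, Python 3.9+); `blocking_query` is a stand-in for a real blocking ORM call such as `list(Creators.objects.all())`:

```python
import asyncio

def blocking_query():
    # Stand-in for a blocking ORM call, e.g. list(Creators.objects.all()).
    # Materializing the full list here is what avoids the lazy-evaluation
    # trap that raised SynchronousOnlyOperation in the async context.
    return ["alice", "bob"]

async def create_bot_all():
    # asyncio.to_thread runs the blocking function in a worker thread and
    # awaits its result - essentially what asgiref's sync_to_async does.
    users = await asyncio.to_thread(blocking_query)
    return users

result = asyncio.run(create_bot_all())
print(result)
```

In a real Django 4.1+ project the `async for` / `afirst()` forms from the answer are the more direct route; the thread-offload sketch is only to show why wrapping the call (rather than the queryset) works.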
How to get data from django orm inside an asynchronous function?
I need to retrieve data from the database inside an asynchronous function. If I retrieve only one object by executing e.g: users = await sync_to_async(Creators.objects.first)() everything works as it should. But if the response contains multiple objects, I get an error. @sync_to_async def get_creators(): return Creators.objects.all() async def CreateBotAll(): users = await get_creators() for user in users: print(user) Tracing: Traceback (most recent call last): File "/home/django/django_venv/lib/python3.8/site- packages/django/core/handlers/exception.py", line 47, in inner response = get_response(request) File "/home/django/django_venv/lib/python3.8/site-packages/django/core/handlers/base.py", line 181, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/home/django/django_venv/src/reseller/views.py", line 29, in test asyncio.run(TgAssistant.CreateBotAll()) File "/usr/lib/python3.8/asyncio/runners.py", line 44, in run return loop.run_until_complete(main) File "/usr/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete return future.result() File "/home/django/django_venv/src/reseller/TgAssistant.py", line 84, in CreateBotAll for user in users: File "/home/django/django_venv/lib/python3.8/site-packages/django/db/models/query.py", line 280, in __iter__ self._fetch_all() File "/home/django/django_venv/lib/python3.8/site-packages/django/db/models/query.py", line 1324, in _fetch_all self._result_cache = list(self._iterable_class(self)) File "/home/django/django_venv/lib/python3.8/site-packages/django/db/models/query.py", line 51, in __iter__ results = compiler.execute_sql(chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size) File "/home/django/django_venv/lib/python3.8/site-packages/django/db/models/sql/compiler.py", line 1173, in execute_sql cursor = self.connection.cursor() File "/home/django/django_venv/lib/python3.8/site-packages/django/utils/asyncio.py", line 31, in inner raise 
SynchronousOnlyOperation(message) django.core.exceptions.SynchronousOnlyOperation: You cannot call this from an async context - use a thread or sync_to_async. I made it work that way: @sync_to_async def get_creators(): sql = Creators.objects.all() x = [creator for creator in sql] return x Isn't there a more elegant solution?
[ "You may try wrap get_creators response into list:\n@sync_to_async\ndef get_creators():\n return list(Creators.objects.all())\n\n", "Since Django 4.1 you can do the following:\nasync for creator in Creators.objects.all():\n print(creator)\n\nAnd you can replace this with filter and the like as long as the expression doesn't cause the query to be evaluated.\nThere are also async versions of get, delete etc prefixed with an 'a' so your\nusers = await sync_to_async(Creators.objects.first)()\n\ncan be replaced with:\nuser = await Creators.object.afirst()\n\n" ]
[ 0, 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0071489479_django_python.txt
Q: map json schema to c# model with NJsonSchema throws System.InvalidCastException I'm trying to convert a really simple JSON schema to my C# model using the NuGet package from this link. I'm using example code from the GitHub repo, such as: var schema = """{"$schema":"http://json-schema.org/draft-04/schema#","title":"Person","type":"object","additionalProperties":false,"required":["FirstName","LastName"],"properties":{"FirstName":{"type":"string"},"LastName":{"type":"string"}},"definitions":{}}"""; var generator = new CSharpGenerator(schema); var file = generator.GenerateFile("mytest"); // on this line I got an exception Console.WriteLine(file); Console.ReadLine(); This is my JSON schema. Note: I'm using .NET 7; you can see that I'm using a triple-quoted raw string literal in order to accept JSON with double quotes. The exception that I get: System.InvalidCastException: 'Unable to cast object of type 'System.String' to type 'NJsonSchema.JsonSchema'.' What did I do wrong here? A: Tested First Install NJsonSchema.CodeGeneration.CSharp with Version="10.8.0" in your project use below code var schemaData = @"{ ""type"": ""object"", ""properties"": { ""Bar"": { ""oneOf"": [ { ""$ref"": ""#/definitions/StringEnum"" } ] } }, ""definitions"": { ""StringEnum"": { ""type"": ""string"", ""enum"": [ ""0562"", ""0532"" ], ""description"": """" } } }"; var schema = await NJsonSchema.JsonSchema.FromJsonAsync(schemaData); var generator = new CSharpGenerator(schema); var file = generator.GenerateFile("mytest"); Console.WriteLine(file); Console.ReadLine();
map json schema to c# model with NJsonSchema throws System.InvalidCastException
Im trying to convert really simple json schema to my c# model using nuget from this link Im using example code from the github repo such as: var schema = """{"$schema":"http://json-schema.org/draft-04/schema#","title":"Person","type":"object","additionalProperties":false,"required":["FirstName","LastName"],"properties":{"FirstName":{"type":"string"},"LastName":{"type":"string"}},"definitions":{}}"""; var generator = new CSharpGenerator(schema); var file = generator.GenerateFile("mytest"); // on this line I got an exception Console.WriteLine(file); Console.ReadLine(); this is my json schema: Notice: Im using .net 7 , you can notice that Im using tripple string in order to accept json with double quotes. Exception that I get: System.InvalidCastException: 'Unable to cast object of type 'System.String' to type 'NJsonSchema.JsonSchema'.' What I did wrong here?
[ "Tested\nFirst Install NJsonSchema.CodeGeneration.CSharp with Version=\"10.8.0\" in your project\nuse below code\nvar schemaData = @\"{\n \"\"type\"\": \"\"object\"\",\n \"\"properties\"\": {\n \"\"Bar\"\": {\n \"\"oneOf\"\": [\n {\n \"\"$ref\"\": \"\"#/definitions/StringEnum\"\"\n }\n ]\n }\n },\n \"\"definitions\"\": {\n \"\"StringEnum\"\": {\n \"\"type\"\": \"\"string\"\",\n \"\"enum\"\": [\n \"\"0562\"\",\n \"\"0532\"\"\n ],\n \"\"description\"\": \"\"\"\"\n }\n }\n }\";\n var schema = await \n NJsonSchema.JsonSchema.FromJsonAsync(schemaData);\n var generator = new CSharpGenerator(schema);\n var file = generator.GenerateFile(\"mytest\"); \n Console.WriteLine(file);\n Console.ReadLine();\n\n\n" ]
[ 0 ]
[]
[]
[ "c#", "json", "jsonschema", "njsonschema" ]
stackoverflow_0074666745_c#_json_jsonschema_njsonschema.txt
Q: Kernel died with rasterio.open().read() on geo tiff images When I tried to open and read a certain geo tiff image with gdal and rasterio, I can read and do things like img.meta and img.descriptions. But when I tried to do img.read() with rasterio or img.GetRasterBand(1).ReadAsArray() with gdal the kernel always died after a certain runtime. It's not happening to all geo tiff images but some. Could anyone help me? Thanks! Python version: 3.9 System: Mac Big Sur Version 11.3.1 Raster information: File size: 400 MB Band number: 3 Coordinate reference system: EPSG:26917 Metadata: {'driver': 'GTiff', 'dtype': 'uint8', 'nodata': 255.0, 'width': 580655, 'height': 444631, 'count': 3, 'crs': CRS.from_epsg(26917), 'transform': Affine(0.08000000000000004, 0.0, 607455.9245999996, 0.0, -0.080000000000001, 4859850.802499999)} Raster description: (None, None, None) Geotransform : | 0.08, 0.00, 607455.92| | 0.00,-0.08, 4859850.80| | 0.00, 0.00, 1.00| # with rasterio img = rasterio.open('certain_tiff_file.tif') metadata = img.meta print('Metadata: {metadata}\n'.format(metadata=metadata)) # kernel died if run the line below full_img = img.read() # with gdal img = gdal.Open('certain_tiff_file.tif') img_band1 = img.GetRasterBand(1) img_band2 = img.GetRasterBand(2) img_band3 = img.GetRasterBand(3) # kernel died if run the line below array = img_band1.ReadAsArray() A: I had the same problem when reading large .tiff files. Following what @Val said in the comments I checked for how much free RAM memory I had as described here: import psutil psutil.virtual_memory() And indeed my issue was that I was running out of RAM. You may try to use del arr once you're done with some array and you don't need to use that anymore to clean a bit of memory. Might be worth looking into gc.collect as well.
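The `del`/`gc.collect` advice from the answer can be sketched without rasterio: drop the reference to a large array once you are done with it and let the collector reclaim the memory. The list below is just a stand-in for a raster band array:

```python
import gc
import sys

band = list(range(1_000_000))      # stand-in for a large raster array
size_bytes = sys.getsizeof(band)   # the list object alone is several MB
del band                           # drop the only reference once done
unreachable = gc.collect()         # force a collection pass immediately
```

For rasters that genuinely don't fit in RAM, rasterio also supports windowed reading (`dataset.read(1, window=...)`), so the full 580655 x 444631 x 3 array never has to be materialized at once.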
Kernel died with rasterio.open().read() on geo tiff images
When I tried to open and read a certain geo tiff image with gdal and rasterio, I can read and do things like img.meta and img.descriptions. But when I tried to do img.read() with rasterio or img.GetRasterBand(1).ReadAsArray() with gdal the kernel always died after a certain runtime. It's not happening to all geo tiff images but some. Could anyone help me? Thanks! Python version: 3.9 System: Mac Big Sur Version 11.3.1 Raster information: File size: 400 MB Band number: 3 Coordinate reference system: EPSG:26917 Metadata: {'driver': 'GTiff', 'dtype': 'uint8', 'nodata': 255.0, 'width': 580655, 'height': 444631, 'count': 3, 'crs': CRS.from_epsg(26917), 'transform': Affine(0.08000000000000004, 0.0, 607455.9245999996, 0.0, -0.080000000000001, 4859850.802499999)} Raster description: (None, None, None) Geotransform : | 0.08, 0.00, 607455.92| | 0.00,-0.08, 4859850.80| | 0.00, 0.00, 1.00| # with rasterio img = rasterio.open('certain_tiff_file.tif') metadata = img.meta print('Metadata: {metadata}\n'.format(metadata=metadata)) # kernel died if run the line below full_img = img.read() # with gdal img = gdal.Open('certain_tiff_file.tif') img_band1 = img.GetRasterBand(1) img_band2 = img.GetRasterBand(2) img_band3 = img.GetRasterBand(3) # kernel died if run the line below array = img_band1.ReadAsArray()
[ "I had the same problem when reading large .tiff files.\nFollowing what @Val said in the comments I checked for how much free RAM memory I had as described here:\nimport psutil\npsutil.virtual_memory()\n\nAnd indeed my issue was that I was running out of RAM. You may try to use del arr once you're done with some array and you don't need to use that anymore to clean a bit of memory. Might be worth looking into gc.collect as well.\n" ]
[ 0 ]
[]
[]
[ "gdal", "gis", "kernel", "python", "rasterio" ]
stackoverflow_0068667679_gdal_gis_kernel_python_rasterio.txt
Q: How do you setup AWS Cloudfront to provide custom access to S3 bucket with signed cookies using wildcards AWS CloudFront with custom cookies using wildcards in a Lambda function. The problem: on AWS S3 storage, the preferred method for providing granular access control is to use AWS CloudFront with signed URLs. Here is a good example of how to set up CloudFront (a bit old, though, so you need to use the recommended settings, not the legacy ones, and copy the generated policy down to S3): https://medium.com/@himanshuarora/protect-private-content-using-cloudfront-signed-cookies-fd9674faec3 I have provided an example below of how to create one of these signed URLs using Python and the newest libraries: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-creating-signed-url-canned-policy.html However, this requires the creation of a signed URL for each item in the S3 bucket. To give wildcard access to a directory of items in the S3 bucket you need to use what is called a custom policy. I could not find any working examples of this code using Python; many of the online examples use libraries that are deprecated. But attached is a working example: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-creating-signed-url-custom-policy.html I had trouble getting the Python cryptography package to work when building the Lambda function on an Amazon Linux 2 instance on AWS EC2; it always came up with a missing-library error, so I used Klayers for AWS and it worked: https://github.com/keithrozario/Klayers/tree/master/deployments. A working example of cookies for a canned policy (meaning a signed URL specific to each S3 file): https://www.velotio.com/engineering-blog/s3-cloudfront-to-deliver-static-asset A: My code for cookies for a custom policy (meaning a single policy statement with URL wildcards etc.).
You must use the Cryptology package type examples but the private_key.signer function was depreciated for a new private_key.sign function with an extra argument. https://cryptography.io/en/latest/hazmat/primitives/asymmetric/rsa/#signing from cryptography.hazmat.primitives import serialization from cryptography.hazmat.backends import default_backend from cryptography.hazmat.primitives import hashes import base64 import datetime class CFSigner: def sign_rsa(self, message): private_key = serialization.load_pem_private_key( self.keyfile, password=None, backend=default_backend() ) signature = private_key.sign(message.encode( "utf-8"), padding.PKCS1v15(), hashes.SHA1()) return signature def _sign_string(self, message, private_key_file=None, private_key_string=None): if private_key_file: self.keyfile = open(private_key_file, "rb").read() elif private_key_string: self.keyfile = private_key_string.encode("utf-8") return self.sign_rsa(message) def _url_base64_encode(self, msg): msg_base64 = base64.b64encode(msg).decode("utf-8") msg_base64 = msg_base64.replace("+", "-") msg_base64 = msg_base64.replace("=", "_") msg_base64 = msg_base64.replace("/", "~") return msg_base64 def generate_signature(self, policy, private_key_file=None): signature = self._sign_string(policy, private_key_file) encoded_signature = self._url_base64_encode(signature) return encoded_signature def create_signed_cookies2(self, url, private_key_file, keypair_id, expires_at): policy = self.create_custom_policy(url, expires_at) encoded_policy = self._url_base64_encode( policy.encode("utf-8")) signature = self.generate_signature( policy, private_key_file=private_key_file) cookies = { "CloudFront-Policy": encoded_policy, "CloudFront-Signature": signature, "CloudFront-Key-Pair-Id": keypair_id, } return cookies def sign_to_cloudfront(object_url, expires_at): cf = CFSigner() url = cf.create_signed_url( url=object_url, keypair_id="xxxxxxxxxx", expire_time=expires_at, private_key_file="xxx.pem", ) return url def 
create_signed_cookies(self, object_url, expires_at): cookies = self.create_signed_cookies2( url=object_url, private_key_file="xxx.pem", keypair_id="xxxxxxxxxx", expires_at=expires_at, ) return cookies def create_custom_policy(self, url, expires_at): return ( '{"Statement":[{"Resource":"' + url + '","Condition":{"DateLessThan":{"AWS:EpochTime":' + str(round(expires_at.timestamp())) + "}}}]}" ) def lambda_handler(event, context): response = event["Records"][0]["cf"]["response"] headers = response.get("headers", None) cf = CFSigner() path = "https://www.example.com/*" expire = datetime.datetime.now() + datetime.timedelta(days=3) signed_cookies = cf.create_signed_cookies(path, expire) headers["set-cookie"] = [{ "key": "set-cookie", "value": "CloudFront-Policy={signed_cookies.get('CloudFront-Policy')}" }] headers["Set-cookie"] = [{ "key": "Set-cookie", "value": "CloudFront-Signature={signed_cookies.get('CloudFront-Signature')}", }] headers["Set-Cookie"] = [{ "key": "Set-Cookie", "value": "CloudFront-Key-Pair-Id={signed_cookies.get('CloudFront-Key-Pair-Id')}", }] print(response) return response ```
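The `_url_base64_encode` helper in the answer implements CloudFront's URL-safe base64 variant, which swaps the characters `+`, `=` and `/` for `-`, `_` and `~`. A standalone version of just that piece, using only the standard library:

```python
import base64

def cloudfront_b64(data: bytes) -> str:
    # CloudFront-safe base64: '+' -> '-', '=' -> '_', '/' -> '~'
    # so the policy and signature survive inside URLs and cookie values.
    s = base64.b64encode(data).decode("ascii")
    return s.replace("+", "-").replace("=", "_").replace("/", "~")

encoded = cloudfront_b64(b"\xfb\xff")
print(encoded)  # -~8_
```

The input bytes are arbitrary here; in the answer's flow the function is applied to the JSON policy string and to the RSA-SHA1 signature bytes before they are placed in the `CloudFront-Policy` and `CloudFront-Signature` cookies.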
How do you setup AWS Cloudfront to provide custom access to S3 bucket with signed cookies using wildcards
AWS Cloudfront with Custom Cookies using Wildcards in Lambda Function: The problem: On AWS s3 Storage to provide granular access control the preferred method is to use AWS Cloudfront with signed URL's. Here is a good example how to setup cloudfront a bit old though, so you need to use the recommended settings not the legacy and copy the generated policy down to S3. https://medium.com/@himanshuarora/protect-private-content-using-cloudfront-signed-cookies-fd9674faec3 I have provided an example below on how to create one of these signed URL's using Python and the newest libraries. https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-creating-signed-url-canned-policy.html However this requires the creation of a signed URL for each item in the S3 bucket. To give wildcard access to a directory of items in the S3 bucket you need use what is called a custom Policy. I could not find any working examples of this code using Python, many of the online expamples have librarys that are depreciated. But attached is a working example. https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-creating-signed-url-custom-policy.html I had trouble getting the python cryptography package to work by building the lambda function on an Amazon Linux 2 instance on AWS EC2. Always came up with an error of a missing library. So I use Klayers for AWS and worked https://github.com/keithrozario/Klayers/tree/master/deployments. A working example for cookies for a canned policy (Means only a signed URL specific for each S3 file) https://www.velotio.com/engineering-blog/s3-cloudfront-to-deliver-static-asset
[ "My code for cookies for a custom policy (Means a single policy statement with URL wildcards etc). You must use the Cryptology\npackage type examples but the private_key.signer function was depreciated for a new private_key.sign function with an extra\nargument. https://cryptography.io/en/latest/hazmat/primitives/asymmetric/rsa/#signing\n from cryptography.hazmat.primitives import serialization\n from cryptography.hazmat.backends import default_backend\n from cryptography.hazmat.primitives import hashes\n import base64\n import datetime\n\n\n class CFSigner:\n def sign_rsa(self, message):\n private_key = serialization.load_pem_private_key(\n self.keyfile, password=None, backend=default_backend()\n )\n\n signature = private_key.sign(message.encode(\n \"utf-8\"), padding.PKCS1v15(), hashes.SHA1())\n return signature\n\n def _sign_string(self, message, private_key_file=None, private_key_string=None):\n if private_key_file:\n self.keyfile = open(private_key_file, \"rb\").read()\n elif private_key_string:\n self.keyfile = private_key_string.encode(\"utf-8\")\n return self.sign_rsa(message)\n\n def _url_base64_encode(self, msg):\n msg_base64 = base64.b64encode(msg).decode(\"utf-8\")\n msg_base64 = msg_base64.replace(\"+\", \"-\")\n msg_base64 = msg_base64.replace(\"=\", \"_\")\n msg_base64 = msg_base64.replace(\"/\", \"~\")\n return msg_base64\n\n def generate_signature(self, policy, private_key_file=None):\n signature = self._sign_string(policy, private_key_file)\n encoded_signature = self._url_base64_encode(signature)\n return encoded_signature\n\n def create_signed_cookies2(self, url, private_key_file, keypair_id, expires_at):\n policy = self.create_custom_policy(url, expires_at)\n encoded_policy = self._url_base64_encode(\n policy.encode(\"utf-8\"))\n\n signature = self.generate_signature(\n policy, private_key_file=private_key_file)\n\n cookies = {\n \"CloudFront-Policy\": encoded_policy,\n \"CloudFront-Signature\": signature,\n \"CloudFront-Key-Pair-Id\": 
keypair_id,\n }\n return cookies\n\n def sign_to_cloudfront(object_url, expires_at):\n cf = CFSigner()\n url = cf.create_signed_url(\n url=object_url,\n keypair_id=\"xxxxxxxxxx\",\n expire_time=expires_at,\n private_key_file=\"xxx.pem\",\n )\n return url\n\n def create_signed_cookies(self, object_url, expires_at):\n cookies = self.create_signed_cookies2(\n url=object_url,\n private_key_file=\"xxx.pem\",\n keypair_id=\"xxxxxxxxxx\",\n expires_at=expires_at,\n )\n return cookies\n\n def create_custom_policy(self, url, expires_at):\n return (\n '{\"Statement\":[{\"Resource\":\"'\n + url\n + '\",\"Condition\":{\"DateLessThan\":{\"AWS:EpochTime\":'\n + str(round(expires_at.timestamp()))\n + \"}}}]}\"\n )\n\n def lambda_handler(event, context):\n response = event[\"Records\"][0][\"cf\"][\"response\"]\n headers = response.get(\"headers\", None)\n cf = CFSigner()\n path = \"https://www.example.com/*\"\n expire = datetime.datetime.now() + datetime.timedelta(days=3)\n signed_cookies = cf.create_signed_cookies(path, expire)\n headers[\"set-cookie\"] = [{\n \"key\": \"set-cookie\",\n \"value\": \"CloudFront-Policy={signed_cookies.get('CloudFront-Policy')}\"\n }]\n headers[\"Set-cookie\"] = [{\n \"key\": \"Set-cookie\",\n \"value\": \"CloudFront-Signature={signed_cookies.get('CloudFront-Signature')}\",\n }]\n headers[\"Set-Cookie\"] = [{\n \"key\": \"Set-Cookie\",\n \"value\": \"CloudFront-Key-Pair-Id={signed_cookies.get('CloudFront-Key-Pair-Id')}\",\n }]\n print(response)\n return response ```\n\n" ]
[ 0 ]
[]
[]
[ "amazon_cloudfront", "amazon_s3", "amazon_web_services", "cookies", "signed" ]
stackoverflow_0074677815_amazon_cloudfront_amazon_s3_amazon_web_services_cookies_signed.txt
Q: How to get name of product from many to many even if it is not in the related table SQL or EF linq query I want to get all product names with their category, even if a product doesn't have a category (information on the model creation is from here): public class Product { public int ProductId { get; set; } public string Name { get; set; } public virtual ICollection<CategoryProduct> CategoryProducts { get; set; } } public class CategoryProduct { public int CategoryProductId { get; set; } public string Name { get; set; } public virtual ICollection<Product> Products { get; set; } } internal class EFDbContext : DbContext, IDBProductContext { public DbSet<Product> Products { get; set; } public DbSet<CategoryProduct> CategoryProducts { get; set ; } public EFDbContext() { Database.SetInitializer<EFDbContext>(new DropCreateDatabaseIfModelChanges<EFDbContext>()); } protected override void OnModelCreating(DbModelBuilder modelBuilder) { modelBuilder.Entity<Product>().HasMany(p => p.CategoryProducts) .WithMany(c => c.Products) .Map(pc => { pc.MapLeftKey("ProductRefId"); pc.MapRightKey("CategoryProductRefId"); pc.ToTable("CategoryProductTable"); }); base.OnModelCreating(modelBuilder); } } If I write a SQL query like this, I get all of them from the joined EF table: SELECT p.Name, cp.Name FROM CategoryProductTable AS cpt, CategoryProducts AS cp, Products as p WHERE p.ProductId = cpt.ProductRefId AND cp.CategoryProductId = cpt.CategoryProductRefId but I want to get all product names with their category even if a product doesn't have a category. UPDATED: thanks for the SQL solution @Nick Scotney, but now I would like to know how to do it in LINQ
When there isn't a category, cp.Name will simply return NULL. A: You would want to remove p.ProductId = cpt.ProductRefId predicate if you want to show all products. SELECT p.Name, cp.Name FROM CategoryProductTable as cpt, CategoryProducts as cp, Products as p WHERE cp.CategoryProductId = cpt.CategoryProductRefId This will assign each category to every product.
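The LEFT OUTER JOIN behaviour the accepted answer describes — every product listed, with NULL where no category exists — can be checked against an in-memory SQLite database. Table and column names follow the question's EF mapping; the sample rows are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Products (ProductId INTEGER PRIMARY KEY, Name TEXT);
    CREATE TABLE CategoryProducts (CategoryProductId INTEGER PRIMARY KEY, Name TEXT);
    CREATE TABLE CategoryProductTable (ProductRefId INTEGER, CategoryProductRefId INTEGER);
    INSERT INTO Products VALUES (1, 'Chair'), (2, 'Lamp');
    INSERT INTO CategoryProducts VALUES (10, 'Furniture');
    INSERT INTO CategoryProductTable VALUES (1, 10);  -- 'Lamp' has no category
""")
rows = conn.execute("""
    SELECT p.Name, cp.Name
    FROM Products p
    LEFT OUTER JOIN CategoryProductTable cpt ON p.ProductId = cpt.ProductRefId
    LEFT OUTER JOIN CategoryProducts cp ON cpt.CategoryProductRefId = cp.CategoryProductId
    ORDER BY p.ProductId
""").fetchall()
print(rows)  # [('Chair', 'Furniture'), ('Lamp', None)]
```

The comma-join form in the question is an implicit INNER JOIN, which is exactly why uncategorized products disappeared from its results.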
How to get name of product from many to many even if it is not in the related table SQL or EF linq query
I want to get all product names with category if even product doesn't have a category get informatoin for creation from here public class Product { public int ProductId { get; set; } public string Name { get; set; } public virtual ICollection<CategoryProduct> CategoryProducts { get; set; } } public class CategoryProduct { public int CategoryProductId { get; set; } public string Name { get; set; } public virtual ICollection<Product> Products { get; set; } } internal class EFDbContext : DbContext, IDBProductContext { public DbSet<Product> Products { get; set; } public DbSet<CategoryProduct> CategoryProducts { get; set ; } public EFDbContext() { Database.SetInitializer<EFDbContext>(new DropCreateDatabaseIfModelChanges<EFDbContext>()); } protected override void OnModelCreating(DbModelBuilder modelBuilder) { modelBuilder.Entity<Product>().HasMany(p => p.CategoryProducts) .WithMany(c => c.Products) .Map(pc => { pc.MapLeftKey("ProductRefId"); pc.MapRightKey("CategoryProductRefId"); pc.ToTable("CategoryProductTable"); }); base.OnModelCreating(modelBuilder); } } If I write a SQL query like this, I get all of them from joined EF table SELECT p.Name, cp.Name FROM CategoryProductTable AS cpt, CategoryProducts AS cp, Products as p WHERE p.ProductId = cpt.ProductRefId AND cp.CategoryProductId = cpt.CategoryProductRefId but I want to get all from product names with category if even product doesn't have a category UPDATED: thanks for SQL solution @Nick Scotney, but now I would want know how it do it in Linq
[ "Could you be after a \"LEFT OUTER JOIN\" in your Sql?\nSELECT\n p.Name,\n cp.Name\nFROM\n Products p\n LEFT OUTER JOIN CategoryProductTable cpt ON p.ProductId = cpt.ProductRefId\n LEFT OUTER JOIN CategoryProducts cp ON cpt.CategoryProductRefId = cp.CategoryProductId\n\nIn the above SQL, everything from products will be selected, regardless of if there is a Category or not. When there isn't a category, cp.Name will simply return NULL.\n", "You would want to remove p.ProductId = cpt.ProductRefId predicate if you want to show all products.\nSELECT p.Name, cp.Name \nFROM CategoryProductTable as cpt, CategoryProducts as cp, Products as p\nWHERE cp.CategoryProductId = cpt.CategoryProductRefId\n\nThis will assign each category to every product.\n" ]
[ 2, 0 ]
[]
[]
[ "c#", "entity_framework", "sql" ]
stackoverflow_0074677696_c#_entity_framework_sql.txt
Q: Can not extract resource from com.android.aaptcompiler.ParsedResource@6997c081 New in coding here, I was making an app from course and everything seems to be fine until I wanted to run the app :c ` inner element must either be a resource reference or empty. Execution failed for task ':app:mergeDebugResources'. A failure occurred while executing com.android.build.gradle.internal.res.ResourceCompilerRunnable Resource compilation failed (Failed to compile values resource file /Users/konradmichalski/AndroidStudioProjects/FirstApp/app/build/intermediates/incremental/debug/mergeDebugResources/merged.dir/values/values.xml. Cause: java.lang.IllegalStateException: Can not extract resource from com.android.aaptcompiler.ParsedResource@3041067.). Check logs for more details. *urce compilation failed (Failed to compile values resource file /Users/konradmichalski/AndroidStudioProjects/FirstApp/app/build/intermediates/incremental/debug/mergeDebugResources/merged.dir/values/values.xml. Cause: java.lang.IllegalStateException: Can not extract resource from com.android.aaptcompiler.ParsedResource@3041067.). Check logs for more details. 
at com.android.aaptcompiler.ResourceCompiler.compileResource(ResourceCompiler.kt:129) at com.android.build.gradle.internal.res.ResourceCompilerRunnable$Companion.compileSingleResource(ResourceCompilerRunnable.kt:34) at com.android.build.gradle.internal.res.ResourceCompilerRunnable.run(ResourceCompilerRunnable.kt:15) at com.android.build.gradle.internal.profile.ProfileAwareWorkAction.execute(ProfileAwareWorkAction.kt:74) at org.gradle.workers.internal.DefaultWorkerServer.execute(DefaultWorkerServer.java:63) at org.gradle.workers.internal.NoIsolationWorkerFactory$1$1.create(NoIsolationWorkerFactory.java:66) at org.gradle.workers.internal.NoIsolationWorkerFactory$1$1.create(NoIsolationWorkerFactory.java:62) at org.gradle.internal.classloader.ClassLoaderUtils.executeInClassloader(ClassLoaderUtils.java:97) at org.gradle.workers.internal.NoIsolationWorkerFactory$1.lambda$execute$0(NoIsolationWorkerFactory.java:62) at org.gradle.workers.internal.AbstractWorker$1.call(AbstractWorker.java:44) at org.gradle.workers.internal.AbstractWorker$1.call(AbstractWorker.java:41) at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204) at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:199) at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66) at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59) at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:157) at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59) at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53) at org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:73) at 
org.gradle.workers.internal.AbstractWorker.executeWrappedInBuildOperation(AbstractWorker.java:41) at org.gradle.workers.internal.NoIsolationWorkerFactory$1.execute(NoIsolationWorkerFactory.java:59) at org.gradle.workers.internal.DefaultWorkerExecutor.lambda$submitWork$2(DefaultWorkerExecutor.java:205) at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.runExecution(DefaultConditionalExecutionQueue.java:187) at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.access$700(DefaultConditionalExecutionQueue.java:120) at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner$1.run(DefaultConditionalExecutionQueue.java:162) at org.gradle.internal.Factories$1.create(Factories.java:31) at org.gradle.internal.work.DefaultWorkerLeaseService.withLocks(DefaultWorkerLeaseService.java:270) at org.gradle.internal.work.DefaultWorkerLeaseService.runAsWorkerThread(DefaultWorkerLeaseService.java:119) at org.gradle.internal.work.DefaultWorkerLeaseService.runAsWorkerThread(DefaultWorkerLeaseService.java:124) at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.runBatch(DefaultConditionalExecutionQueue.java:157) at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.run(DefaultConditionalExecutionQueue.java:126) 2 more Caused by: com.android.aaptcompiler.ResourceCompilationException: Failed to compile values resource file /Users/konradmichalski/AndroidStudioProjects/FirstApp/app/build/intermediates/incremental/debug/mergeDebugResources/merged.dir/values/values.xml at com.android.aaptcompiler.ResourceCompiler.compileTable(ResourceCompiler.kt:192) at com.android.aaptcompiler.ResourceCompiler.access$compileTable(ResourceCompiler.kt:1) at com.android.aaptcompiler.ResourceCompiler$getCompileMethod$1.invoke(ResourceCompiler.kt:138) at com.android.aaptcompiler.ResourceCompiler$getCompileMethod$1.invoke(ResourceCompiler.kt:138) at 
com.android.aaptcompiler.ResourceCompiler.compileResource(ResourceCompiler.kt:123) 32 more Caused by: java.lang.IllegalStateException: Can not extract resource from com.android.aaptcompiler.ParsedResource@3041067. at com.android.aaptcompiler.TableExtractor.extractResourceValues(TableExtractor.kt:270) at com.android.aaptcompiler.TableExtractor.extract(TableExtractor.kt:181) at com.android.aaptcompiler.ResourceCompiler.compileTable(ResourceCompiler.kt:188) 36 more ` Does someone know how to fix it? :/ A: If you provide more information, you are more likely to receive helpful answers. To the best of my knowledge: This error is usually caused by either a formatting issue in the XML file associated with the resource, or by an issue with the code that is attempting to extract the resource. To resolve this issue, you should first check the XML file associated with the resource to make sure that all the elements are properly formatted and that the syntax is correct. If there are any discrepancies, you should correct them. If the XML file is correct, then you should check the code that is attempting to extract the resource. Make sure that you are properly referencing the resource and that all of the parameters being passed are correct.
Can not extract resource from com.android.aaptcompiler.ParsedResource@6997c081
New to coding here; I was making an app from a course and everything seemed to be fine until I wanted to run the app :c ` inner element must either be a resource reference or empty. Execution failed for task ':app:mergeDebugResources'. A failure occurred while executing com.android.build.gradle.internal.res.ResourceCompilerRunnable Resource compilation failed (Failed to compile values resource file /Users/konradmichalski/AndroidStudioProjects/FirstApp/app/build/intermediates/incremental/debug/mergeDebugResources/merged.dir/values/values.xml. Cause: java.lang.IllegalStateException: Can not extract resource from com.android.aaptcompiler.ParsedResource@3041067.). Check logs for more details. 
at com.android.aaptcompiler.ResourceCompiler.compileResource(ResourceCompiler.kt:129) at com.android.build.gradle.internal.res.ResourceCompilerRunnable$Companion.compileSingleResource(ResourceCompilerRunnable.kt:34) at com.android.build.gradle.internal.res.ResourceCompilerRunnable.run(ResourceCompilerRunnable.kt:15) at com.android.build.gradle.internal.profile.ProfileAwareWorkAction.execute(ProfileAwareWorkAction.kt:74) at org.gradle.workers.internal.DefaultWorkerServer.execute(DefaultWorkerServer.java:63) at org.gradle.workers.internal.NoIsolationWorkerFactory$1$1.create(NoIsolationWorkerFactory.java:66) at org.gradle.workers.internal.NoIsolationWorkerFactory$1$1.create(NoIsolationWorkerFactory.java:62) at org.gradle.internal.classloader.ClassLoaderUtils.executeInClassloader(ClassLoaderUtils.java:97) at org.gradle.workers.internal.NoIsolationWorkerFactory$1.lambda$execute$0(NoIsolationWorkerFactory.java:62) at org.gradle.workers.internal.AbstractWorker$1.call(AbstractWorker.java:44) at org.gradle.workers.internal.AbstractWorker$1.call(AbstractWorker.java:41) at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204) at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:199) at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66) at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59) at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:157) at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59) at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53) at org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:73) at 
org.gradle.workers.internal.AbstractWorker.executeWrappedInBuildOperation(AbstractWorker.java:41) at org.gradle.workers.internal.NoIsolationWorkerFactory$1.execute(NoIsolationWorkerFactory.java:59) at org.gradle.workers.internal.DefaultWorkerExecutor.lambda$submitWork$2(DefaultWorkerExecutor.java:205) at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.runExecution(DefaultConditionalExecutionQueue.java:187) at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.access$700(DefaultConditionalExecutionQueue.java:120) at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner$1.run(DefaultConditionalExecutionQueue.java:162) at org.gradle.internal.Factories$1.create(Factories.java:31) at org.gradle.internal.work.DefaultWorkerLeaseService.withLocks(DefaultWorkerLeaseService.java:270) at org.gradle.internal.work.DefaultWorkerLeaseService.runAsWorkerThread(DefaultWorkerLeaseService.java:119) at org.gradle.internal.work.DefaultWorkerLeaseService.runAsWorkerThread(DefaultWorkerLeaseService.java:124) at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.runBatch(DefaultConditionalExecutionQueue.java:157) at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.run(DefaultConditionalExecutionQueue.java:126) 2 more Caused by: com.android.aaptcompiler.ResourceCompilationException: Failed to compile values resource file /Users/konradmichalski/AndroidStudioProjects/FirstApp/app/build/intermediates/incremental/debug/mergeDebugResources/merged.dir/values/values.xml at com.android.aaptcompiler.ResourceCompiler.compileTable(ResourceCompiler.kt:192) at com.android.aaptcompiler.ResourceCompiler.access$compileTable(ResourceCompiler.kt:1) at com.android.aaptcompiler.ResourceCompiler$getCompileMethod$1.invoke(ResourceCompiler.kt:138) at com.android.aaptcompiler.ResourceCompiler$getCompileMethod$1.invoke(ResourceCompiler.kt:138) at 
com.android.aaptcompiler.ResourceCompiler.compileResource(ResourceCompiler.kt:123) 32 more Caused by: java.lang.IllegalStateException: Can not extract resource from com.android.aaptcompiler.ParsedResource@3041067. at com.android.aaptcompiler.TableExtractor.extractResourceValues(TableExtractor.kt:270) at com.android.aaptcompiler.TableExtractor.extract(TableExtractor.kt:181) at com.android.aaptcompiler.ResourceCompiler.compileTable(ResourceCompiler.kt:188) 36 more ` Does someone know how to fix it? :/
[ "If you provide more information it causes that you will receive more helps.\nIn the best of my knowledge:\nThis error is usually caused by either a formatting issue in the XML file associated with the resource, or by an issue with the code that is attempting to extract the resource.\nTo resolve this issue, you should first check the XML file associated with the resource to make sure that all the elements are properly formatted and that the syntax is correct. If there are any discrepancies, you should correct them.\nIf the XML file is correct, then you should check the code that is attempting to extract the resource. Make sure that you are properly referencing the resource and that all of the parameters being passed are correct.\n" ]
[ 0 ]
[]
[]
[ "kotlin" ]
stackoverflow_0074677557_kotlin.txt
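The advice above about checking the values XML can be made concrete. Below is an illustrative Python sketch (not part of aapt or Android Studio; the resource names are invented for the example) that flags entries whose inner elements carry literal text instead of being empty or a `@`-reference, which is the condition the "inner element must either be a resource reference or empty" message points at:

```python
import xml.etree.ElementTree as ET

# Hypothetical values.xml content; the "bad" entry has an inner element
# that is neither empty nor a @-reference, which aapt rejects.
VALUES_XML = """<resources>
    <color name="good">#FF0000</color>
    <color name="alias">@color/good</color>
    <color name="bad"><item>oops</item></color>
</resources>"""

def find_suspect_entries(xml_text):
    """Return names of entries whose inner elements hold literal text."""
    suspects = []
    for entry in ET.fromstring(xml_text):
        for inner in entry:
            # Inner elements must be empty or hold a @-reference.
            text = (inner.text or "").strip()
            if text and not text.startswith("@"):
                suspects.append(entry.get("name"))
    return suspects

print(find_suspect_entries(VALUES_XML))  # ['bad']
```

Running it against the sample data reports only the "bad" entry, the one aapt would reject under that rule; this is a loose approximation of the check, not aapt's actual validation logic.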
Q: Tools for automating windows applications (preferably in Python)? I have a legacy Windows application which performs a critical business function. It has no API or official support for automation. This program requires a human to perform a sequence of actions in order to convert files in a particular input format into a PDF, from which we can scrape content and then process the data normally. The business literally cannot function without some calculations/reports that this software performs, but unfortunately, those calculations are poorly understood and we don't have the kind of R&D budget that would allow us to re-implement the software. The software reads in a proprietary file format and generates a number of PDF reports in an industry-approved format, from which we can scrape the images and deal with them in a more conventional way. It has been proposed that we wrap up the application inside some kind of API, where I might submit some input data into a queue, and somewhere deep within this, we automate the software as if a human user was driving it to perform the operations. Unfortunately, the operations are complex and depend on a number of inputs, and also the content of the file being processed. It's not the kind of thing that we could do with a simple macro - some logic will be required to model the behavior of a trained human operator. So are there any solutions to this? We'd like to be able to drive the software as quickly as possible, and since we have many Python developers it makes sense to implement as much as possible in Python. The outer layers of this system will also be in Python, so that could cut out the complexity. Are there any tools which already provide the bulk of this kind of behavior? A: You have multiple options: 1. winshell: A light wrapper around the Windows shell functionality 2. Automa: Utility to automate repetitive and/or complex tasks 3. PyAutoGUI is a Python module for programmatically controlling the mouse and keyboard. 4. 
Sikuli automates anything you see on the screen http://www.sikuli.org/ 5. Pure Python scripting; example below: import os os.system('notepad.exe') import win32api win32api.WinExec('notepad.exe') import subprocess subprocess.Popen(['notepad.exe']) A: The easiest approach to automating an application is to send keystrokes to it. If you can drive the target application by keystrokes alone, operating it becomes manageable without needing to fight screen resolutions, large fonts and mouse positions. [1] The harder part is recognizing the displayed state of the application. Ideally, you can read the content of the controls using Python [2], to at least detect error conditions and reset the program to a known good state. If resetting the program by conventional navigation fails, consider killing the target process and relaunching it. [1] How to send simulated keyboard strokes to the active window using SendKeys [2] Problem when getting the content of a listbox with python and ctypes on win32 A: Try out robotic automation tools, which can mimic or record human interactions with a computer and repeat them over time. They can handle more complex tasks using scripts, depending on the software - for example, selecting different inputs, browser components, and Windows applications. A: Below is an example code using pywinauto. From my experience this solves a lot of issues that other tools run into, especially in the case of CI/CD. from pywinauto.application import Application def open_app(file_path = "notepad.exe"): app = Application().start(file_path) return app def select_menu(app_object = app.UntitledNotepad, menu_item = "Help->About Notepad"): app_object.menu_select(menu_item) def click_item(app_object = app.AboutNotepad.OK): app_object.click() def type_in(app_object = app.UntitledNotepad.Edit, data = "pywinauto Works!"): app_object.type_keys(data, with_spaces = True)
Tools for automating windows applications (preferably in Python)?
I have a legacy Windows application which performs a critical business function. It has no API or official support for automation. This program requires a human to perform a sequence of actions in order to convert files in a particular input format into a PDF, from which we can scrape content and then process the data normally. The business literally cannot function without some calculations/reports that this software performs, but unfortunately, those calculations are poorly understood and we don't have the kind of R&D budget that would allow us to re-implement the software. The software reads in a proprietary file format and generates a number of PDF reports in an industry-approved format, from which we can scrape the images and deal with them in a more conventional way. It has been proposed that we wrap up the application inside some kind of API, where I might submit some input data into a queue, and somewhere deep within this, we automate the software as if a human user was driving it to perform the operations. Unfortunately, the operations are complex and depend on a number of inputs, and also the content of the file being processed. It's not the kind of thing that we could do with a simple macro - some logic will be required to model the behavior of a trained human operator. So are there any solutions to this? We'd like to be able to drive the software as quickly as possible, and since we have many Python developers it makes sense to implement as much as possible in Python. The outer layers of this system will also be in Python, so that could cut out the complexity. Are there any tools which already provide the bulk of this kind of behavior?
[ "You have multiple options:\n1. winshell: A light wrapper around the Windows shell functionality\n2. Automa: Utilty to automate repetitive and/or complex task \n3: PyAutoGUI is a Python module for programmatically controlling the\nmouse and keyboard.\n4. Sikuli automates anything you see on the screen http://www.sikuli.org/\n5. pure Python scripting. example below: \n\n\nimport os os.system('notepad.exe')\nimport win32api\nwin32api.WinExec('notepad.exe')\n\nimport subprocess\nsubprocess.Popen(['notepad.exe'])\n\n", "The easiest approach to automating an application is to send keystrokes to it. If you can drive the target application by keystrokes alone, operating it becomes manageable without needing to fight screen resolutions, large fonts and mouse positions. [1]\nThe harder part is recognizing the displayed state of the application. Ideally, you can read the content of the controls using Python [2], to at least detect error conditions and reset the program to a known good state. If resetting the program by conventional navigation fails, consider killing the target process and relaunch the process.\n[1] How to send simulated keyboard strokes to the active window using SendKeys\n[2] Problem when getting the content of a listbox with python and ctypes on win32\n", "Try out Robotic automation tools, which can mimic or record human interactions with computer and repeat over time. It can be made for handling more complex tasks using scripts depends on that software. Example selecting different inputs, browser components and also windows application. \n", "Below is an example code using pywinauto. 
From my experience this solves a lot of issues that other tools run into, especially in the case of CI/CD.\nfrom pywinauto.application import Application\n\ndef open_app(file_path = \"notepad.exe\"):\n app = Application().start(file_path)\n return app\n\ndef select_menu(app_object = app.UntitledNotepad, menu_item = \"Help->About Notepad\"):\n app_object.menu_select(menu_item)\n\ndef click_item(app_object = app.AboutNotepad.OK):\n app_object.click()\n \ndef type_in(app_object = app.UntitledNotepad.Edit, data = \"pywinauto Works!\"):\n app_object.type_keys(data, with_spaces = True)\n\n" ]
[ 3, 3, 1, 0 ]
[]
[]
[ "automation", "python", "windows" ]
stackoverflow_0052832504_automation_python_windows.txt
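Option 5 in the first answer (pure Python scripting) can be sketched a little more fully. The snippet below is an assumption-laden illustration: it uses the Python interpreter itself as a stand-in for the legacy Windows program so it runs anywhere, but the same pattern applies to launching notepad.exe or the proprietary report tool:

```python
import subprocess
import sys

def launch_and_wait(cmd, timeout=30):
    """Launch an external program, capture its output, and wait for exit."""
    proc = subprocess.Popen(
        cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True
    )
    out, err = proc.communicate(timeout=timeout)
    return proc.returncode, out, err

# The Python interpreter stands in for the legacy app in this example.
rc, out, err = launch_and_wait([sys.executable, "-c", "print('report done')"])
print(rc, out.strip())  # 0 report done
```

`subprocess.Popen` plus `communicate` gives you the exit code and captured output, which is often enough to detect whether a batch-style run of the target program succeeded; interactive GUI driving still needs a tool like pywinauto on top of this.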
Q: How to use string parameter as template literal key type of return object As I couldn't come up with a valid syntax to solve my problem using TypeScript's template literal types, I will give a textual description of what I would like to achieve: Given a function fn with an input name: string, I would like to annotate the result to be an object of type {[`use${Capitalize<name>}`]: any} with one inferred property key. It should thus satisfy the following logic for an arbitrary name: function createTypedObject(name) { return {[`use${name.charAt(0).toUpperCase() + name.substring(1)}`]: "any"} } I am not sure whether this is at all possible to do for template literals where the interpolated variable is not known at the time of definition of the function. A: You need to use a mapped type like { [K in `use${Capitalize<T>}`]: any }, equivalent to Record<`use${Capitalize<T>}`, any> (using the Record<K, V> utility type). For example: function createTypedObject<T extends string>( name: T ): { [K in `use${Capitalize<T>}`]: any } { return { [`use${name.charAt(0).toUpperCase() + name.substring(1)}`]: "any" } as any; // compiler not smart enough to verify this } Note that the implementation needs a type assertion because the compiler is unable to verify that the value returned conforms to the return type. I assume this is mostly out of scope for the question so I won't digress into the various ways the compiler fails to see this. Anyway, let's test it: const x = createTypedObject("hello"); // const x: { useHello: any; } console.log(x.useHello) // "any" Looks good. The compiler knows that x has a useHello property, as desired. Playground link to code
How to use string parameter as template literal key type of return object
As I couldn't come up with a valid syntax to solve my problem using TypeScript's template literal types, I will give a textual description of what I would like to achieve: Given a function fn with an input name: string, I would like to annotate the result to be an object of type {[`use${Capitalize<name>}`]: any} with one inferred property key. It should thus satisfy the following logic for an arbitrary name: function createTypedObject(name) { return {[`use${name.charAt(0).toUpperCase() + name.substring(1)}`]: "any"} } I am not sure whether this is at all possible to do for template literals where the interpolated variable is not known at the time of definition of the function.
[ "You need to use a mapped type like { [K in `use${Capitalize<T>}`]: any }, equivalent to Record<`use${Capitalize<T>}`, any> using the Record<K, V> utility type). For example:\nfunction createTypedObject<T extends string>(\n name: T\n): { [K in `use${Capitalize<T>}`]: any } {\n return {\n [`use${name.charAt(0).toUpperCase() + name.substring(1)}`]: \"any\"\n } as any; // compiler not smart enough to verify this\n}\n\nNote that the implementation needs a type assertion because the compiler is unable to verify that the value returned conforms to the return type. I assume this is mostly out of scope for the question so I won't digress into the various ways the compiler fails to see this.\nAnyway, let's test it:\nconst x = createTypedObject(\"hello\");\n// const x: { useHello: any; }\n\nconsole.log(x.useHello) // \"any\"\n\nLooks good. The compiler knows that x has a useHello property, as desired.\nPlayground link to code\n" ]
[ 1 ]
[]
[]
[ "typescript" ]
stackoverflow_0074675813_typescript.txt
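The type-level part of the accepted answer is TypeScript-specific, but the runtime key construction is easy to sanity-check in any language. A minimal Python sketch of the same logic (the function name mirrors the one in the question; the Python version carries no type information):

```python
def create_typed_object(name: str) -> dict:
    # Mirrors `use${name.charAt(0).toUpperCase() + name.substring(1)}`.
    key = f"use{name[:1].upper()}{name[1:]}"
    return {key: "any"}

print(create_typed_object("hello"))  # {'useHello': 'any'}
```

The TypeScript answer's contribution is making this computed key visible to the type checker; the runtime behavior shown here is identical in both languages.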
Q: How to add a calculated value to a CollectionView after it has been populated I have the following CodeBehind and XAML which I used to get all data from an SQLite table and populate a CollectionView: .cs protected override void OnAppearing() { base.OnAppearing(); List<Record> records = App.RecordRepo.GetAllRecords(); recordList.ItemsSource = records; } .xaml <Grid Grid.Row="0"> <VerticalStackLayout> <Label x:Name="lblHoldingTotal" Text="Total"/> <Label x:Name="lblAverageBuyPrice" Text="Average Buy Price"/> <Label x:Name="lblTotalPaid" Text="Total Paid"/> <Label x:Name="lblTicker" Text="Ticker"/> <Label x:Name="lblHoldingValue" Text="Holding Value"/> <Label x:Name="lblProfit" Text="Profit"/> </VerticalStackLayout> </Grid> <Grid Grid.Row="1"> <CollectionView x:Name="recordList"> <CollectionView.ItemTemplate> <DataTemplate> <Grid> <Grid.ColumnDefinitions> <ColumnDefinition Width="*" /> <ColumnDefinition Width="*" /> <ColumnDefinition Width="*" /> <ColumnDefinition Width="*" /> <ColumnDefinition Width="*" /> </Grid.ColumnDefinitions> <Label Grid.Column="0" Text="{Binding Id}" /> <Label Grid.Column="1" Text="{Binding Amount}" /> <Label Grid.Column="2" Text="{Binding Paid}" /> <Label Grid.Column="3" Text="P/L" /> <Label Grid.Column="4" Text="{Binding PurchaseDate}" /> </Grid> </DataTemplate> </CollectionView.ItemTemplate> </CollectionView> </Grid> How would I update the value in Column 3 (Which currently has P/L as a placeholder for all rows) based on a value from the CollectionView after populating it, and a value from a Label outside the CollectionView without using the MVVM framework? For example: (Column 3 label text) = (Column 2 Label text value) - lblTicker.text A: We cannot change the Column 3 label text from code-behind, because it is set in the CollectionView's template. We also cannot access the label in code-behind by setting x:Name on it. Try using data bindings. 
There are several similar cases that you could refer to: How can I access a specific collectionview child? In this case, the label "DataCadastroLabel" and How to set a x:Name to a control in a CollectionView / ListView in Xamarin.Forms. A: How would I update the value in Column 3 Make a custom class that you use as the ItemTemplate. Usage: <CollectionView.ItemTemplate> <DataTemplate> <mynamespace:MyItemView ... /> </DataTemplate> </CollectionView.ItemTemplate> The custom class's XAML will be similar to what you have now inside DataTemplate, but with a header, like any other XAML file: <Grid xmlns="http://xamarin.com/schemas/2014/forms" xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml" x:Class="MyNameSpace.MyItemView"> ... <... x:Name="someElement" ... /> </Grid> In that class, you can do anything you need when BindingContext is set: public partial class MyItemView : Grid { ... protected override void OnBindingContextChanged() { base.OnBindingContextChanged(); // The "model" that this row is bound to. var item = (MyItemClass)BindingContext; // The UI element you want to set dynamically. someElement.SomeProperty = item....; ... } } Update a value from a Label outside the CollectionView In your page's code behind, it's easy to set property values (that are not bound to items of a collection): lblTicker.Text = "Whatever text is needed". If you need something more than that, add "pseudo-code" (it doesn't need to compile) to the question, showing an example of what you are trying to do.
For example, you could add a Profit property: public class Record { public int Id { get; set; } public decimal Amount { get; set; } public decimal Paid { get; set; } public DateTime PurchaseDate { get; set; } // new property for the calculated value public decimal Profit { get; set; } } Then, you can modify the code that populates the CollectionView to calculate the value for each item in the ItemsSource and set the new property. In this example, the value is calculated by subtracting the value of the lblTicker Label from the Amount property of each Record item: // get the records from the database List<Record> records = App.RecordRepo.GetAllRecords(); // modify the records by setting the Profit property records = records.Select(r => { r.Profit = r.Amount - decimal.Parse(lblTicker.Text); return r; }).ToList(); // set the modified records as the items source for the CollectionView recordList.ItemsSource = records; Finally, you can bind the Label in the CollectionView's ItemTemplate to the new Profit property to display the calculated value: <DataTemplate> <Grid> <Grid.ColumnDefinitions> A: Here's one way you can do this without using the MVVM framework: First, add a new property to your Record class called Profit that calculates the profit value based on the Amount and Paid properties: public class Record { // Other properties public decimal Profit { get { return Amount - Paid; } } } Next, bind the Text property of the label in Column 3 to the Profit property of the Record object in that row: <Label Grid.Column="3" Text="{Binding Profit}" /> This will automatically update the text in Column 3 for each row based on the values of the Amount and Paid properties of the Record object. You can then use the lblTicker label outside the CollectionView to get the value of the Profit property for the selected row if needed. 
Alternatively, if you don't want to add a new property to the Record class, you can use a binding converter to calculate the profit value in the Label element in Column 3. A binding converter is a class that implements the IValueConverter interface and allows you to convert a value from one type to another in a binding expression. Here's an example of how you can use a converter to calculate the profit value in Column 3: First, create a new class called ProfitConverter that implements the IValueConverter interface: public class ProfitConverter : IValueConverter { public object Convert(object value, Type targetType, object parameter, CultureInfo culture) { // Calculate the profit value here } public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture) { throw new NotImplementedException(); } } Next, add a new instance of the ProfitConverter class to your page's resources: <ContentPage.Resources> <ResourceDictionary> <local:ProfitConverter x:Key="profitConverter" /> </ResourceDictionary> </ContentPage.Resources> Then, use the Converter property of the Binding class to specify that the ProfitConverter class should be used to convert the Amount and Paid values in the binding expression for the Label element in Column 3: <Label Grid.Column="3" Text="{Binding Amount, Converter={StaticResource profitConverter}}" /> Inside the Convert method of the ProfitConverter class, you can calculate the profit value based on the Amount and Paid values and return it. For example: public object Convert(object value, Type targetType, object parameter, CultureInfo culture) { decimal amount = (decimal)value; decimal paid = decimal.Parse(parameter.ToString()); decimal profit = amount - paid; return profit; } You can then pass the Paid value as the parameter argument to the Converter property I hope this helps! 
A: You could use code-behind to update the value in column 3. You could use the CollectionView's ItemAppearing event to update the value of column 3 for each item in the CollectionView. In the ItemAppearing event handler, you could get the data context for the item that is appearing and use it to update the value for column 3. You could then access the Label that is outside the CollectionView and use its value to update the value for column 3. Here is an example: private void recordList_ItemAppearing(object sender, ItemVisibilityEventArgs e) { var item = e.Item as Record; if(item != null) { //update the value for column 3 item.Profit = item.Paid - Convert.ToDouble(lblTicker.Text); } }
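Stripped of the C#/XAML specifics, the calculation all of these answers perform is one pass over the loaded records, combining a per-record field with a single value from outside the collection. A small Python sketch of that pass (the field names are made up; `ticker` stands in for parsing `lblTicker.Text`):

```python
# Hypothetical records mirroring the Record class's Amount/Paid fields.
records = [
    {"amount": 10.0, "paid": 7.5},
    {"amount": 4.0, "paid": 5.0},
]
ticker = 2.0  # stand-in for decimal.Parse(lblTicker.Text)

# Same shape as the C# records.Select(...) pass: compute a profit per item,
# then hand the enriched records to the view for display.
for r in records:
    r["profit"] = r["amount"] - ticker

print([r["profit"] for r in records])  # [8.0, 2.0]
```

Whether the result is stored on the item (the Profit property approach) or computed at display time (the converter approach), the arithmetic per item is the same.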
How to add a calculated value to a CollectionView after it has been populated
I have the following CodeBehind and XAML which I used to get all data from an SQLite table and populate a CollectionView: .cs protected override void OnAppearing() { base.OnAppearing(); List<Record> records = App.RecordRepo.GetAllRecords(); recordList.ItemsSource = records; } .xaml <Grid Grid.Row="0"> <VerticalStackLayout> <Label x:Name="lblHoldingTotal" Text="Total"/> <Label x:Name="lblAverageBuyPrice" Text="Average Buy Price"/> <Label x:Name="lblTotalPaid" Text="Total Paid"/> <Label x:Name="lblTicker" Text="Ticker"/> <Label x:Name="lblHoldingValue" Text="Holding Value"/> <Label x:Name="lblProfit" Text="Profit"/> </VerticalStackLayout> </Grid> <Grid Grid.Row="1"> <CollectionView x:Name="recordList"> <CollectionView.ItemTemplate> <DataTemplate> <Grid> <Grid.ColumnDefinitions> <ColumnDefinition Width="*" /> <ColumnDefinition Width="*" /> <ColumnDefinition Width="*" /> <ColumnDefinition Width="*" /> <ColumnDefinition Width="*" /> </Grid.ColumnDefinitions> <Label Grid.Column="0" Text="{Binding Id}" /> <Label Grid.Column="1" Text="{Binding Amount}" /> <Label Grid.Column="2" Text="{Binding Paid}" /> <Label Grid.Column="3" Text="P/L" /> <Label Grid.Column="4" Text="{Binding PurchaseDate}" /> </Grid> </DataTemplate> </CollectionView.ItemTemplate> </CollectionView> </Grid> How would I update the value in Column 3 (Which currently has P/L as a placeholder for all rows) based on a value from the CollectionView after populating it, and a value from a Label outside the CollectionView without using the MVVM framework? For example: (Column 3 label text) = (Column 2 Label text value) - lblTicker.text
[ "We could not change the Column3 label text in the Codebehind way as it is set in the template of the CollectionView. We also cannot access the label in codebehind by setting x:Name to it. Try using data bindings. There are several similar cases that you could refer to: How can I access a specific collectionview child? In this case, the label \"DataCadastroLabel\" and\nHow to set a x:Name to a control in a CollectionView / ListView in Xamarin.Forms.\n", "How would I update the value in Column 3\nMake a custom class, that you use as the ItemTemplate.\nUsage:\n<CollectionView.ItemTemplate>\n <DataTemplate>\n <mynamespace:MyItemView ... />\n </DataTemplate>\n</CollectionView.ItemTemplate>\n\nThe custom class'es XAML will be similar to what you have now inside DataTemplate, but with a header, like any other XAML file:\n<Grid xmlns=\"http://xamarin.com/schemas/2014/forms\" \n xmlns:x=\"http://schemas.microsoft.com/winfx/2009/xaml\"\n x:Class=\"MyNameSpace.MyItemView\">\n ...\n <... x:Name=\"someElement\" ... 
/>\n</Grid>\n\nIn that class, you can do anything you need when BindingContext is set:\npublic partial class MyItemView : Grid\n{\n ...\n \n protected override void OnBindingContextChanged()\n {\n base.OnBindingContextChanged();\n\n // The \"model\" that this row is bound to.\n var item = (MyItemClass)BindingContext;\n // The UI element you want to set dynamically.\n someElement.SomeProperty = item....;\n ...\n }\n}\n\n\nUpdate a value from a Label outside the CollectionView\nIn your page's code behind, it's easy to set property values (that are not bound to items of a collection):\nlblTicker.Text = \"Whatever text is needed\".\n\nIf you need something more than that, add \"pseudo-code\" (it doesn't need to compile) to the question, showing an example of what you are trying to do.\n", "To add a calculated value to a CollectionView after it has been populated, you can use a lambda expression to modify the values in the CollectionView's ItemsSource property.\nFirst, you will need to create a new property on the Record class to hold the calculated value. For example, you could add a Profit property:\npublic class Record\n{\n public int Id { get; set; }\n public decimal Amount { get; set; }\n public decimal Paid { get; set; }\n public DateTime PurchaseDate { get; set; }\n\n // new property for the calculated value\n public decimal Profit { get; set; }\n}\n\n\nThen, you can modify the code that populates the CollectionView to calculate the value for each item in the ItemsSource and set the new property. 
In this example, the value is calculated by subtracting the value of the lblTicker Label from the Amount property of each Record item:\n// get the records from the database\nList<Record> records = App.RecordRepo.GetAllRecords();\n\n// modify the records by setting the Profit property\nrecords = records.Select(r => \n{\n r.Profit = r.Amount - decimal.Parse(lblTicker.Text);\n return r;\n}).ToList();\n\n// set the modified records as the items source for the CollectionView\nrecordList.ItemsSource = records;\n\n\nFinally, you can bind the Label in the CollectionView's ItemTemplate to the new Profit property to display the calculated value:\n<DataTemplate>\n <Grid>\n <Grid.ColumnDefinitions>\n \n\n", "Here's one way you can do this without using the MVVM framework:\nFirst, add a new property to your Record class called Profit that calculates the profit value based on the Amount and Paid properties:\npublic class Record\n{\n // Other properties\n\n public decimal Profit\n {\n get\n {\n return Amount - Paid;\n }\n }\n}\n\nNext, bind the Text property of the label in Column 3 to the Profit property of the Record object in that row:\n<Label Grid.Column=\"3\" Text=\"{Binding Profit}\" />\n\nThis will automatically update the text in Column 3 for each row based on the values of the Amount and Paid properties of the Record object. You can then use the lblTicker label outside the CollectionView to get the value of the Profit property for the selected row if needed.\nAlternatively, if you don't want to add a new property to the Record class, you can use a binding converter to calculate the profit value in the Label element in Column 3. A binding converter is a class that implements the IValueConverter interface and allows you to convert a value from one type to another in a binding expression. 
Here's an example of how you can use a converter to calculate the profit value in Column 3:\nFirst, create a new class called ProfitConverter that implements the IValueConverter interface:\npublic class ProfitConverter : IValueConverter\n{\n public object Convert(object value, Type targetType, object parameter, CultureInfo culture)\n {\n // Calculate the profit value here\n }\n\n public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)\n {\n throw new NotImplementedException();\n }\n}\n\nNext, add a new instance of the ProfitConverter class to your page's resources:\n<ContentPage.Resources>\n <ResourceDictionary>\n <local:ProfitConverter x:Key=\"profitConverter\" />\n </ResourceDictionary>\n</ContentPage.Resources>\n\nThen, use the Converter property of the Binding class to specify that the ProfitConverter class should be used to convert the Amount and Paid values in the binding expression for the Label element in Column 3:\n<Label Grid.Column=\"3\" Text=\"{Binding Amount, Converter={StaticResource profitConverter}}\" />\n\nInside the Convert method of the ProfitConverter class, you can calculate the profit value based on the Amount and Paid values and return it. For example:\npublic object Convert(object value, Type targetType, object parameter, CultureInfo culture)\n{\n decimal amount = (decimal)value;\n decimal paid = decimal.Parse(parameter.ToString());\n decimal profit = amount - paid;\n return profit;\n}\n\nYou can then pass the Paid value as the parameter argument to the Converter property\nI hope this helps! If it did consider donating\n\nBTC:178vgzZkLNV9NPxZiQqabq5crzBSgQWmvs,ETH:0x99753577c4ae89e7043addf7abbbdf7258a74697\n\n", "You could use code-behind to update the value in column 3. 
You could use the CollectionView's ItemAppearing event to update the value of column 3 for each item in the CollectionView.\nIn the ItemAppearing event handler, you could get the data context for the item that is appearing and use it to update the value for column 3. You could then access the Label that is outside the CollectionView and use its value to update the value for column 3.\nHere is an example:\nprivate void recordList_ItemAppearing(object sender, ItemVisibilityEventArgs e) \n{ \n var item = e.Item as Record; \n if(item != null) \n { \n //update the value for column 3 \n item.Profit = item.Paid - Convert.ToDouble(lblTicker.Text); \n } \n}\n\n" ]
[ 2, 0, 0, 0, 0 ]
[]
[]
[ ".net_6.0", "maui", "xaml" ]
stackoverflow_0074585157_.net_6.0_maui_xaml.txt
Q: https clientside and http back end cookie is not sent I am new to web development. At first, I created an authentication system with http protocol for both client dev-server and backend dev-server, which worked properly. However, I had to make the client dev-server secure to implement HLS video player. Therefore, now client side url is (https://localhost:15173/login), and backend url (http://localhost:3000). When client side url is (http://localhost:15173/login), cookie was generated on server-side and sent to the client side. So, I would like to know why this is happening. Serverside: nodejs, express.js Client side: javascript, vue3.js Do I have to make both client side and backend https? Here is backend code to generate cookie: res.cookie('JWTcookie', accessToken, { httpOnly: true}) res.status(200).json(responseJson) Here is backend code to validate cookie: app.get("/login", function (req, res) { var JWTcookie = req.cookies.JWTcookie; console.log("JWT cookie is here", req.cookies.JWTcookie); try { console.log("veryfy token is here", verifyToken(JWTcookie)); const decoded = jwt.verify(JWTcookie, SECRET_KEY, function (err, decoded) { return decoded; }) const responseJson = { success: true, username: decoded.name, userID: decoded.id } res.status(200).json(responseJson); // console.log("decoded token ", decoded); } catch (err) { const status = 401 const message = 'Unauthorized' res.send("Not authorized. 
Better login"); // res.status(status).json({ status, message }) } }); Here is the client side code (vue.js) that sends the cookie to the server side: onMounted(() => { const API_URL = "http://localhost:3000/"; const authStore = userAuthStore(); axios.get(API_URL + "login", { withCredentials: true }).then(res => { if (res.data.success == true) { const id = res.data.userID; const username = res.data.username; authStore.auth(); authStore.setUser(id, username); console.log("mounted.") router.push("/video"); } else { console.log("Response is here: ", res.data) } }) }) I believe the problem is my lack of understanding of how the security system works when one of them is https and the other is http. Any help would be appreciated. Thank you. I tried to make the cookie secure by adding: res.cookie('JWTcookie', accessToken, { httpOnly: true, secure: true}) But this didn't work. A: When I gave both server side and client side https, it worked.
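A likely explanation for the accepted answer's observation (an assumption — the question doesn't show the browser's devtools output): a page served over https calling an http backend is a cross-site request, and browsers only attach cookies to cross-site requests when the cookie is marked SameSite=None; Secure. A Secure cookie is in turn only transmitted over https, which is why moving both servers to https made it work. A minimal sketch of the Express cookie options such a setup would need:

```javascript
// Sketch of cookie options for a cross-site setup (hypothetical name).
// `sameSite: 'none'` lets the browser send the cookie on cross-site
// requests, but it is only honored together with `secure: true`, and a
// secure cookie only travels over https -- so the backend must be
// served over https as well.
const crossSiteCookieOptions = {
  httpOnly: true,   // not readable from client-side JavaScript
  secure: true,     // only transmitted over https connections
  sameSite: 'none', // allow cross-site requests to carry the cookie
};

// Usage in the question's handler would look like:
// res.cookie('JWTcookie', accessToken, crossSiteCookieOptions);
console.log(crossSiteCookieOptions);
```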
https clientside and http back end cookie is not sent
I am new to web development. At first, I created an authentication system with http protocol for both client dev-server and backend dev-server, which worked properly. However, I had to make the client dev-server secure to implement HLS video player. Therefore, now client side url is (https://localhost:15173/login), and backend url (http://localhost:3000). When client side url is (http://localhost:15173/login), cookie was generated on server-side and sent to the client side. So, I would like to know why this is happening. Serverside: nodejs, express.js Client side: javascript, vue3.js Do I have to make both client side and backend https? Here is backend code to generate cookie: res.cookie('JWTcookie', accessToken, { httpOnly: true}) res.status(200).json(responseJson) Here is backend code to validate cookie: app.get("/login", function (req, res) { var JWTcookie = req.cookies.JWTcookie; console.log("JWT cookie is here", req.cookies.JWTcookie); try { console.log("veryfy token is here", verifyToken(JWTcookie)); const decoded = jwt.verify(JWTcookie, SECRET_KEY, function (err, decoded) { return decoded; }) const responseJson = { success: true, username: decoded.name, userID: decoded.id } res.status(200).json(responseJson); // console.log("decoded token ", decoded); } catch (err) { const status = 401 const message = 'Unauthorized' res.send("Not authorized. 
Better login"); // res.status(status).json({ status, message }) } }); Here is the client side code (vue.js) that sends the cookie to the server side: onMounted(() => { const API_URL = "http://localhost:3000/"; const authStore = userAuthStore(); axios.get(API_URL + "login", { withCredentials: true }).then(res => { if (res.data.success == true) { const id = res.data.userID; const username = res.data.username; authStore.auth(); authStore.setUser(id, username); console.log("mounted.") router.push("/video"); } else { console.log("Response is here: ", res.data) } }) }) I believe the problem is my lack of understanding of how the security system works when one of them is https and the other is http. Any help would be appreciated. Thank you. I tried to make the cookie secure by adding: res.cookie('JWTcookie', accessToken, { httpOnly: true, secure: true}) But this didn't work.
[ "When I gave both server side and client side https, it worked.\n" ]
[ 0 ]
[]
[]
[ "authentication", "cookies", "https" ]
stackoverflow_0074673554_authentication_cookies_https.txt
Q: How do I access an **EXISTING** sheet in a workbook using `openxlsx` in R? Question If I want to manipulate a worksheet in an EXISTING workbook using R and the openxlsx package, how do I assign that to a variable for use in script? Example It's easy to do this (and well documented) when you are creating a workbook from scratch: library(openxlsx) f <- "Excel Output/Example.xlsx" df <- data.frame("ColA" = c("A", "B", "C"), "ColB" = c(1L, 4L, 9L)) wb <- createWorkbook() sh <- addWorksheet(wb, sheetName = "MyExampleSheet") # Now go do stuff using the `sh` variable like... writeData(wb, sh, df) saveWorkbook(wb, file = f, overwrite = TRUE) But now let's say I'm not creating a workbook from scratch. I'm using an existing workbook with sheets pre-existing, and I want to write new data to those existing sheets: wb <- loadWorkbook(f) # Loading, not creating! sh <- addWorksheet(wb, sheetName = "MyExampleSheet") #> Error in addWorksheet(wb, sheetName = "MyExampleSheet") : #> A worksheet by the name 'MyExampleSheet' already exists! Sheet names must be unique case-insensitive. Obviously addWorksheet() is the wrong function, but I cannot figure out how to get sh properly assigned to an existing worksheet. A: In your example you don't need sh. Worksheets are accessed via sheet name only. library(openxlsx) # creating data df <- data.frame("ColA" = c("A", "B", "C"), "ColB" = c(1L, 4L, 9L)) # create workbook from scratch wb <- createWorkbook() addWorksheet(wb, sheetName = "MyExampleSheet") writeData(wb, "MyExampleSheet", df) # loading workbook and writing onto it f <- system.file("extdata", "loadExample.xlsx", package = "openxlsx") wb <- loadWorkbook(file = f) writeData(wb, "testing", df)
How do I access an **EXISTING** sheet in a workbook using `openxlsx` in R?
Question If I want to manipulate a worksheet in an EXISTING workbook using R and the openxlsx package, how do I assign that to a variable for use in script? Example It's easy to do this (and well documented) when you are creating a workbook from scratch: library(openxlsx) f <- "Excel Output/Example.xlsx" df <- data.frame("ColA" = c("A", "B", "C"), "ColB" = c(1L, 4L, 9L)) wb <- createWorkbook() sh <- addWorksheet(wb, sheetName = "MyExampleSheet") # Now go do stuff using the `sh` variable like... writeData(wb, sh, df) saveWorkbook(wb, file = f, overwrite = TRUE) But now let's say I'm not creating a workbook from scratch. I'm using an existing workbook with sheets pre-existing, and I want to write new data to those existing sheets: wb <- loadWorkbook(f) # Loading, not creating! sh <- addWorksheet(wb, sheetName = "MyExampleSheet") #> Error in addWorksheet(wb, sheetName = "MyExampleSheet") : #> A worksheet by the name 'MyExampleSheet' already exists! Sheet names must be unique case-insensitive. Obviously addWorksheet() is the wrong function, but I cannot figure out how to get sh properly assigned to an existing worksheet.
[ "In your example you don't need sh. Worksheets are accessed via sheet name only.\nlibrary(openxlsx)\n\n# creating data\ndf <- data.frame(\"ColA\" = c(\"A\", \"B\", \"C\"),\n \"ColB\" = c(1L, 4L, 9L))\n\n# create workbook from scratch\nwb <- createWorkbook()\naddWorksheet(wb, sheetName = \"MyExampleSheet\")\nwriteData(wb, \"MyExampleSheet\", df)\n\n# loading workbook and writing onto it\nf <- system.file(\"extdata\", \"loadExample.xlsx\", package = \"openxlsx\")\n\nwb <- loadWorkbook(file = f)\nwriteData(wb, \"testing\", df)\n\n" ]
[ 0 ]
[]
[]
[ "excel", "openxlsx", "r" ]
stackoverflow_0074671411_excel_openxlsx_r.txt
Q: Is there a way to generate a GraphQL schema from a protobuf? I have a rather large/complex protobuf definition of an API, and I wonder if there's a convenient tool to automatically generate a textual GraphQL schema and its (nested) types from a subset of this protobuf? I'm using Node.js normally, but I'm open for other languages to generate the schema. A: If you're willing to use GoLang, there's a protobuf to GraphQL converter at https://github.com/opsee/protobuf Current gadgets graphql The graphql gadget will generate a graphql schema for protobuf messages for use with the go-graphql package. It defines extensions for use in files, messages, and fields. See examples for usage -- the files generated by the plugin are flavortown.pb.go and flavortownpb_test.go. A: Rejoiner will create a uniform GraphQL schema from protobuf sources (such as gRPC microservices). http://rejoiner.io/ https://github.com/google/rejoiner A: This project generates graphql schemas from protos: https://github.com/tmc/protoc-gen-graphql A: GraphQL Mesh support gRPC and Protobufs together with many other protocols. Here you can find docs and examples
Is there a way to generate a GraphQL schema from a protobuf?
I have a rather large/complex protobuf definition of an API, and I wonder if there's a convenient tool to automatically generate a textual GraphQL schema and its (nested) types from a subset of this protobuf? I'm using Node.js normally, but I'm open for other languages to generate the schema.
[ "If you're willing to use GoLang, there's a protobuf to GraphQL converter at\nhttps://github.com/opsee/protobuf\n\nCurrent gadgets\n graphql\nThe graphql gadget will generate a graphql schema for protobuf messages for use with the go-graphql package. It defines extensions for use in files, messages, and fields. See examples for usage -- the files generated by the plugin are flavortown.pb.go and flavortownpb_test.go.\n\n", "Rejoiner will create a uniform GraphQL schema from protobuf sources (such as gRPC microservices). \nhttp://rejoiner.io/\nhttps://github.com/google/rejoiner\n", "This project generates graphql schemas from protos: https://github.com/tmc/protoc-gen-graphql\n", "GraphQL Mesh support gRPC and Protobufs together with many other protocols.\nHere you can find docs and examples\n" ]
[ 6, 5, 0, 0 ]
[]
[]
[ "graphql", "graphql_js", "javascript", "node.js", "protocol_buffers" ]
stackoverflow_0044397956_graphql_graphql_js_javascript_node.js_protocol_buffers.txt
Q: TypeError: is_legacy_optimizer is not a valid argument, kwargs should be empty for `optimizer_experimental.Optimizer` Please help me Code: from tensorflow.keras.models import load_model model_path = "model/classifier.h5" model = load_model(model_path) Error: TypeError: is_legacy_optimizer is not a valid argument, kwargs should be empty for optimizer_experimental.Optimizer. A: It's possible that the model was trained using a different version of TensorFlow. Try confirming that the environment where you trained the model and the environment where you are loading it have the same TensorFlow version.
TypeError: is_legacy_optimizer is not a valid argument, kwargs should be empty for `optimizer_experimental.Optimizer`
Please help me Code: from tensorflow.keras.models import load_model model_path = "model/classifier.h5" model = load_model(model_path) Error: TypeError: is_legacy_optimizer is not a valid argument, kwargs should be empty for optimizer_experimental.Optimizer.
[ "It's possible that the model was trained using a different version of tf.\nTry confirming if the environment where you trained and are loading have the same tf version.\n" ]
[ 0 ]
[]
[]
[ "keras", "python", "tensorflow" ]
stackoverflow_0073538911_keras_python_tensorflow.txt
Q: jetpack compose: scroll to bottom listener (end of list) I am wondering if it is possible to get observer inside a @Compose function when the bottom of the list is reached (similar to recyclerView.canScrollVertically(1)) Thanks in advance. A: you can use rememberLazyListState() and compare scrollState.layoutInfo.visibleItemsInfo.lastOrNull()?.index == scrollState.layoutInfo.totalItemsCount - 1 How to use example: First add the above command as an extension (e.g., extensions.kt file): fun LazyListState.isScrolledToEnd() = layoutInfo.visibleItemsInfo.lastOrNull()?.index == layoutInfo.totalItemsCount - 1 Then use it in the following code: @Compose fun PostsList() { val scrollState = rememberLazyListState() LazyColumn( state = scrollState,), ) { ... } // observer when reached end of list val endOfListReached by remember { derivedStateOf { scrollState.isScrolledToEnd() } } // act when end of list reached LaunchedEffect(endOfListReached) { // do your stuff } } A: For me the best and the simplest solution was to add LaunchedEffect as the last item in my LazyColumn: LazyColumn(modifier = Modifier.fillMaxSize()) { items(someItemList) { item -> MyItem(item = item) } item { LaunchedEffect(true) { //Do something when List end has been reached } } } A: I think, based on the other answer, that the best interpretation of recyclerView.canScrollVertically(1) referred to bottom scrolling is fun LazyListState.isScrolledToTheEnd() : Boolean { val lastItem = layoutInfo.visibleItemsInfo.lastOrNull() return lastItem == null || lastItem.size + lastItem.offset <= layoutInfo.viewportEndOffset } A: Simply use the firstVisibleItemIndex and compare it to your last index. If it matches, you're at the end, else not. Use it as lazyListState.firstVisibleItemIndex A: Found a much simplier solution than other answers. Get the last item index of list. Inside itemsIndexed of lazyColumn compare it to lastIndex. When the end of list is reached it triggers if statement. 
Code example: LazyColumn( modifier = Modifier .fillMaxSize(), horizontalAlignment = Alignment.CenterHorizontally ) { itemsIndexed(events) { i, event -> if (lastIndex == i) { Log.e("console log", "end of list reached $lastIndex") } } } A: The LazyListState#layoutInfo contains information about the visible items. Note the you should use derivedStateOf to avoid redundant recompositions. Use something: @Composable private fun LazyListState.isAtBottom(): Boolean { return remember(this) { derivedStateOf { val visibleItemsInfo = layoutInfo.visibleItemsInfo if (layoutInfo.totalItemsCount == 0) { false } else { val lastVisibleItem = visibleItemsInfo.last() val viewportHeight = layoutInfo.viewportEndOffset + layoutInfo.viewportStartOffset (lastVisibleItem.index + 1 == layoutInfo.totalItemsCount && lastVisibleItem.offset + lastVisibleItem.size <= viewportHeight) } } }.value } The code above checks not only it the last visibile item == last index in the list but also if it is fully visible (lastVisibleItem.offset + lastVisibleItem.size <= viewportHeight). And then: val state = rememberLazyListState() var isAtBottom = state.isAtBottom() LaunchedEffect(isAtBottom){ if (isAtBottom) doSomething() } LazyColumn( state = state, ){ //... }
jetpack compose: scroll to bottom listener (end of list)
I am wondering if it is possible to get an observer inside a @Compose function when the bottom of the list is reached (similar to recyclerView.canScrollVertically(1)) Thanks in advance.
[ "you can use rememberLazyListState() and compare\nscrollState.layoutInfo.visibleItemsInfo.lastOrNull()?.index == scrollState.layoutInfo.totalItemsCount - 1\n\nHow to use example:\nFirst add the above command as an extension (e.g., extensions.kt file):\nfun LazyListState.isScrolledToEnd() = layoutInfo.visibleItemsInfo.lastOrNull()?.index == layoutInfo.totalItemsCount - 1\n\nThen use it in the following code:\n@Compose\nfun PostsList() {\n val scrollState = rememberLazyListState()\n\n LazyColumn(\n state = scrollState,),\n ) {\n ...\n }\n\n // observer when reached end of list\n val endOfListReached by remember {\n derivedStateOf {\n scrollState.isScrolledToEnd()\n }\n }\n\n // act when end of list reached\n LaunchedEffect(endOfListReached) {\n // do your stuff\n }\n}\n\n\n", "For me the best and the simplest solution was to add LaunchedEffect as the last item in my LazyColumn:\nLazyColumn(modifier = Modifier.fillMaxSize()) {\n items(someItemList) { item ->\n MyItem(item = item)\n }\n item {\n LaunchedEffect(true) {\n //Do something when List end has been reached\n }\n }\n}\n\n", "I think, based on the other answer, that the best interpretation of recyclerView.canScrollVertically(1) referred to bottom scrolling is\nfun LazyListState.isScrolledToTheEnd() : Boolean {\n val lastItem = layoutInfo.visibleItemsInfo.lastOrNull()\n return lastItem == null || lastItem.size + lastItem.offset <= layoutInfo.viewportEndOffset\n}\n\n", "Simply use the firstVisibleItemIndex and compare it to your last index. If it matches, you're at the end, else not. Use it as lazyListState.firstVisibleItemIndex\n", "Found a much simplier solution than other answers. Get the last item index of list. Inside itemsIndexed of lazyColumn compare it to lastIndex. When the end of list is reached it triggers if statement. 
Code example:\nLazyColumn(\n modifier = Modifier\n .fillMaxSize(),\n horizontalAlignment = Alignment.CenterHorizontally\n ) {\n itemsIndexed(events) { i, event ->\n if (lastIndex == i) {\n Log.e(\"console log\", \"end of list reached $lastIndex\")\n }\n }\n }\n\n", "The LazyListState#layoutInfo contains information about the visible items. Note the you should use derivedStateOf to avoid redundant recompositions.\nUse something:\n@Composable\nprivate fun LazyListState.isAtBottom(): Boolean {\n\n return remember(this) {\n derivedStateOf {\n val visibleItemsInfo = layoutInfo.visibleItemsInfo\n if (layoutInfo.totalItemsCount == 0) {\n false\n } else {\n val lastVisibleItem = visibleItemsInfo.last()\n val viewportHeight = layoutInfo.viewportEndOffset + layoutInfo.viewportStartOffset\n\n (lastVisibleItem.index + 1 == layoutInfo.totalItemsCount &&\n lastVisibleItem.offset + lastVisibleItem.size <= viewportHeight)\n }\n }\n }.value\n}\n\nThe code above checks not only it the last visibile item == last index in the list but also if it is fully visible (lastVisibleItem.offset + lastVisibleItem.size <= viewportHeight).\nAnd then:\nval state = rememberLazyListState()\nvar isAtBottom = state.isAtBottom()\nLaunchedEffect(isAtBottom){\n if (isAtBottom) doSomething()\n}\n\nLazyColumn(\n state = state,\n){\n //...\n}\n\n" ]
[ 11, 9, 3, 0, 0, 0 ]
[]
[]
[ "android_jetpack_compose" ]
stackoverflow_0068924018_android_jetpack_compose.txt
Q: Modulus not working correctly with Math.log10(n) Math.log10(8)/Math.log10(2) // this is equal to exactly 3 However, when using the modulo operator, something doesn't add up! Math.log10(8)%Math.log10(2) //this is not equal to zero(0). I was expecting the modulo operator to yield zero. Please explain this phenomenon and a way to find remainders for logarithms. Thanks A: You are encountering rounding error which inherently happens in floating point arithmetic. Math.log10(8) in JavaScript evaluates to 0.9030899869919435. On WolframAlpha log10(8) is evaluated to 0.9030899869919435856412166841734790803045696443863256239312823833. These are different numbers. Neither is wrong, but one is more precise than the other. When doing math with imprecise decimal approximations of numbers like you are here, you will get imprecise results. This is especially pronounced when you try to do modulo operations on such imprecise approximations. For example 8 % 2 and 8 % 2.000000001 give very different results. The former is 0 and the latter is just shy of 2. You can use a library like math.js which can avoid floating point math in these sorts of calculations, and where math.mod(math.log(8, 10), math.log(2, 10)) does indeed evaluate to 0.
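The rounding error described in the answer is easy to reproduce in Node.js. This sketch also shows a library-free workaround: compare the remainder against a small tolerance instead of against exactly 0 (the tolerance value here is an arbitrary choice):

```javascript
// Double-precision approximations of the two logarithms.
const a = Math.log10(8);
const b = Math.log10(2);

// The question reports the division prints exactly 3, yet the remainder
// is not 0: depending on which side of 3*b the approximation of a lands,
// a % b comes out either as a tiny value near 0 or as a value just
// short of b.
console.log(a / b);
console.log(a % b);

// Library-free workaround: accept a remainder close to 0 OR close to b.
const EPSILON = 1e-9;
const r = a % b;
const isExactMultiple = r < EPSILON || b - r < EPSILON;
console.log(isExactMultiple); // true
```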
Modulus not working correctly with Math.log10(n)
Math.log10(8)/Math.log10(2) // this is equal to exactly 3 However, when using the modulo operator, something doesn't add up! Math.log10(8)%Math.log10(2) //this is not equal to zero(0). I was expecting the modulo operator to yield zero. Please explain this phenomenon and a way to find remainders for logarithms. Thanks
[ "You are encountering rounding error which inherently happens in floating point arithmetic.\nMath.log10(8) in JavaScript evaluates to 0.9030899869919435.\nOn WolframAlpha log10(8) is evaluated to 0.9030899869919435856412166841734790803045696443863256239312823833.\nThese are different numbers. Neither is wrong, but one is more precise than the other.\nWhen doing math with imprecise decimal approximations of numbers like you are here, you will get imprecise results. This is especially pronounced when you try do modulo operations on such imprecise approximations. For example 8 % 2 and 8 % 2.000000001 give very different results. The former is 0 and the latter is just shy of 2.\nYou can use a library like math.js which can avoid floating point math in these sorts of calculations, and where math.mod(math.log(8, 10), math.log(2, 10)) does indeed evaluate to 0.\n" ]
[ 0 ]
[]
[]
[ "javascript", "logarithm", "math", "modulo", "reactjs" ]
stackoverflow_0074677710_javascript_logarithm_math_modulo_reactjs.txt
Q: git extension does not show in the side bar I don't see the git extension icon in the side bar.
git extension does not show in the side bar
I don't see the git extension icon in the side bar.
[ "It won't appear as its own icon, it will use the source control icon. See my annotation below.\nYour icon may differ but should be similar.\n\n", "This is what worked for me:\nPress F1 and type \"Source Control\", then select any option. This should show your extension again.\n" ]
[ 1, 0 ]
[]
[]
[ "visual_studio_code" ]
stackoverflow_0068066165_visual_studio_code.txt
Q: How do I "assign" the value in `CArray` that contains a memory address to a `Pointer`? This is a NativeCall question. I have 8 bytes (little endian) in a CArray representing a memory address. How do I create a Pointer out it? (CArray and Pointer are two of NativeCall's C compatible types. Pointers are 8 bytes long. Things should line up, but how does one put the pointer address in a CArray into a Pointer in a way acceptable to NativeCall?) A: Here is an example of using the windows api call WTSEnumerateSessionsA() you mentioned in the comments: use NativeCall; constant BYTE := uint8; constant DWORD := uint32; constant WTS_CURRENT_SERVER_HANDLE = 0; # Current (local) server constant LPSTR := Str; enum WTS_CONNECTSTATE_CLASS ( WTSActive => 0, WTSConnected =>1, WTSConnectQuery => 2, WTSShadow => 3, WTSDisconnected => 4, WTSIdle => 5, WTSListen => 6, WTSReset => 7, WTSDown => 8, WTSInit => 9 ); constant WTS_CONNECTSTATE_CLASS_int := int32; class WTS_SESSION_INFOA is repr('CStruct') { has DWORD $.SessionId is rw; has LPSTR $.pWinStationName is rw; has WTS_CONNECTSTATE_CLASS_int $.State; } sub GetWTSEnumerateSession( #`{ C++ BOOL WTSEnumerateSessionsA( [in] HANDLE hServer, [in] DWORD Reserved, [in] DWORD Version, [out] PWTS_SESSION_INFOA *ppSessionInfo, [out] DWORD *pCount ); Returns zero if this function fails. } DWORD $hServer, # [in] HANDLE DWORD $Reserved, # [in] always 0 DWORD $Version, # [in] always 1 Pointer[Pointer] $ppSessionInf is rw, # [out] see _WTS_SESSION_INFOA and _WTS_CONNECTSTATE_CLASS; DWORD $pCount is rw # [out] DWORD ) is native("Wtsapi32.dll") is symbol("WTSEnumerateSessionsA") returns DWORD # If the function fails, the return value is zero. 
{ * }; my $hServer = Pointer[void].new(); $hServer = WTS_CURRENT_SERVER_HANDLE; # let windows figure out what current handle is my $ppSession = Pointer[Pointer].new(); # A pointer to another pointer to an array of WTS_SESSION_INFO my DWORD $pCount; my $ret-code = GetWTSEnumerateSession $hServer, 0, 1, $ppSession, $pCount; say "Return code: " ~ $ret-code; my $array = nativecast(Pointer[WTS_SESSION_INFOA], $ppSession.deref); say "Number of session info structs: " ~ $pCount; for 0..^ $pCount -> $i { say "{$i} : Session id: " ~ $array[$i].SessionId; say "{$i} : Station name: " ~ $array[$i].pWinStationName; say "{$i} : Connection state: " ~ $array[$i].State; } Output (windows 11) Return code: 1 Number of session info structs: 2 0 : Session id: 0 0 : Station name: Services 0 : Connection state: 4 1 : Session id: 1 1 : Station name: Console 1 : Connection state: 0 A: How do I create a Pointer out it? I think you could use nativecast like this: my $ptr = nativecast(CArray[Pointer], $array); my Pointer $ptr2 = $ptr[$idx]; A: From the comments it seems like the native sub should return a pointer to an array of struct. On linux I created the following example: test.c #include <stdio.h> #include <stdint.h> #include <stdlib.h> typedef struct myStruct { int32_t A; double B; } mstruct; void set_pointer(mstruct **arrayOfStruct) { mstruct *ptr = (mstruct *) malloc(sizeof(mstruct)*2); printf("allocated memory at address: %p\n", ptr); ptr[0].A = 10; ptr[0].B = 1.1; ptr[1].A = 20; ptr[1].B = 2.1; *arrayOfStruct = ptr; } p.raku: use v6; use NativeCall; class myStruct is repr('CStruct') { has int32 $.A is rw; has num64 $.B is rw; } sub set_pointer(Pointer[CArray[myStruct]] is rw) is native("./libtest.so") { * }; my $array-ptr = Pointer[CArray[myStruct]].new(); set_pointer($array-ptr); my $array = nativecast(Pointer[myStruct], $array-ptr.deref); say $array[0].A; say $array[0].B; say $array[1].A; say $array[1].B; Output: allocated memory at address: 0x5579f0d95ef0 10 1.1 20 2.1
How do I "assign" the value in `CArray` that contains a memory address to a `Pointer`?
This is a NativeCall question. I have 8 bytes (little endian) in a CArray representing a memory address. How do I create a Pointer out of it? (CArray and Pointer are two of NativeCall's C compatible types. Pointers are 8 bytes long. Things should line up, but how does one put the pointer address in a CArray into a Pointer in a way acceptable to NativeCall?)
[ "Here is an example of using the windows api call WTSEnumerateSessionsA() you mentioned in the comments:\nuse NativeCall;\n\nconstant BYTE := uint8;\nconstant DWORD := uint32;\nconstant WTS_CURRENT_SERVER_HANDLE = 0; # Current (local) server\nconstant LPSTR := Str;\n\nenum WTS_CONNECTSTATE_CLASS (\n WTSActive => 0,\n WTSConnected =>1,\n WTSConnectQuery => 2,\n WTSShadow => 3,\n WTSDisconnected => 4,\n WTSIdle => 5,\n WTSListen => 6,\n WTSReset => 7,\n WTSDown => 8,\n WTSInit => 9\n);\n\nconstant WTS_CONNECTSTATE_CLASS_int := int32;\n\nclass WTS_SESSION_INFOA is repr('CStruct') {\n has DWORD $.SessionId is rw;\n has LPSTR $.pWinStationName is rw;\n has WTS_CONNECTSTATE_CLASS_int $.State;\n}\n\nsub GetWTSEnumerateSession(\n #`{\n C++\n BOOL WTSEnumerateSessionsA(\n [in] HANDLE hServer,\n [in] DWORD Reserved,\n [in] DWORD Version,\n [out] PWTS_SESSION_INFOA *ppSessionInfo,\n [out] DWORD *pCount\n );\n Returns zero if this function fails.\n }\n DWORD $hServer, # [in] HANDLE\n DWORD $Reserved, # [in] always 0\n DWORD $Version, # [in] always 1\n Pointer[Pointer] $ppSessionInf is rw, # [out] see _WTS_SESSION_INFOA and _WTS_CONNECTSTATE_CLASS;\n DWORD $pCount is rw # [out] DWORD\n )\n is native(\"Wtsapi32.dll\")\n is symbol(\"WTSEnumerateSessionsA\")\n returns DWORD # If the function fails, the return value is zero.\n { * };\n\n\nmy $hServer = Pointer[void].new();\n$hServer = WTS_CURRENT_SERVER_HANDLE; # let windows figure out what current handle is\nmy $ppSession = Pointer[Pointer].new(); # A pointer to another pointer to an array of WTS_SESSION_INFO\nmy DWORD $pCount;\n\nmy $ret-code = GetWTSEnumerateSession $hServer, 0, 1, $ppSession, $pCount;\nsay \"Return code: \" ~ $ret-code;\nmy $array = nativecast(Pointer[WTS_SESSION_INFOA], $ppSession.deref);\nsay \"Number of session info structs: \" ~ $pCount;\nfor 0..^ $pCount -> $i {\n say \"{$i} : Session id: \" ~ $array[$i].SessionId;\n say \"{$i} : Station name: \" ~ $array[$i].pWinStationName;\n say \"{$i} : Connection 
state: \" ~ $array[$i].State;\n}\n\nOutput (windows 11)\nReturn code: 1\nNumber of session info structs: 2\n0 : Session id: 0\n0 : Station name: Services\n0 : Connection state: 4\n1 : Session id: 1\n1 : Station name: Console\n1 : Connection state: 0\n\n", "\nHow do I create a Pointer out it?\n\nI think you could use nativecast like this:\nmy $ptr = nativecast(CArray[Pointer], $array);\nmy Pointer $ptr2 = $ptr[$idx];\n\n", "From the comments it seems like the native sub should return a pointer to an array of struct. On linux I created the following example:\ntest.c\n#include <stdio.h>\n#include <stdint.h>\n#include <stdlib.h>\n\ntypedef struct myStruct\n{\n int32_t A;\n double B;\n} mstruct;\n\nvoid set_pointer(mstruct **arrayOfStruct)\n{\n mstruct *ptr = (mstruct *) malloc(sizeof(mstruct)*2);\n printf(\"allocated memory at address: %p\\n\", ptr);\n ptr[0].A = 10;\n ptr[0].B = 1.1;\n ptr[1].A = 20;\n ptr[1].B = 2.1;\n *arrayOfStruct = ptr;\n}\n\np.raku:\nuse v6;\nuse NativeCall;\n\nclass myStruct is repr('CStruct') {\n has int32 $.A is rw;\n has num64 $.B is rw;\n}\n\nsub set_pointer(Pointer[CArray[myStruct]] is rw) is native(\"./libtest.so\") { * };\n\nmy $array-ptr = Pointer[CArray[myStruct]].new();\nset_pointer($array-ptr);\nmy $array = nativecast(Pointer[myStruct], $array-ptr.deref);\nsay $array[0].A;\nsay $array[0].B;\nsay $array[1].A;\nsay $array[1].B;\n\nOutput:\nallocated memory at address: 0x5579f0d95ef0\n10\n1.1\n20\n2.1\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "c", "nativecall", "pointers", "raku" ]
stackoverflow_0074665162_c_nativecall_pointers_raku.txt
Q: POST 400 Firebase API I am trying to make a Signup page using React and Firebase. \AuthContacs.js ` function signup(email, password) { return auth.createUserWithEmailAndPassword(email, password) } \\Signup.js await signup(emailRef.current.value, passwordRef.current.value) ` and I keep getting this error index.ts:118 POST https://identitytoolkit.googleapis.com/v1/accounts:signUp?key=process.env.REACT_APP_FIREBASE_API_KEY 400 I tried to make sure I have the correct API key and the correct format, but no chance. A: From the inclusion of this in the error message: key=process.env.REACT_APP_FIREBASE_API_KEY It looks like the process isn't actually reading the environment variable for the REACT_APP_FIREBASE_API_KEY, and ends up including the key name rather than its value in the URL. It's hard to say why this happens from the information you shared, but you'll want to check how your environment variables are read, and whether they show the correct value when you log them.
POST 400 Firebase API
I am trying to make a Signup page using React and Firebase. \AuthContacs.js ` function signup(email, password) { return auth.createUserWithEmailAndPassword(email, password) } \\Signup.js await signup(emailRef.current.value, passwordRef.current.value) ` and I keep getting this error index.ts:118 POST https://identitytoolkit.googleapis.com/v1/accounts:signUp?key=process.env.REACT_APP_FIREBASE_API_KEY 400 I tried to make sure I have the correct API key and the correct format, but no chance.
[ "From the inclusion of this in the error message:\nkey=process.env.REACT_APP_FIREBASE_API_KEY\n\nIt looks like the process isn't actually reading the environment variable for the REACT_APP_FIREBASE_API_KEY, and ends up including the key name rather than its value in the URL.\nIt's hard to say why this happens from the information you shared, but you'll want to check how your environment variables are read, and whether they show the correct value when you log them.\n" ]
[ 0 ]
[]
[]
[ "authentication", "firebase", "http_status_code_400", "post", "reactjs" ]
stackoverflow_0074677170_authentication_firebase_http_status_code_400_post_reactjs.txt
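The answer above boils down to the env var never being substituted, so the literal string `process.env.REACT_APP_FIREBASE_API_KEY` ends up in the request URL. As a rough, hypothetical sketch (the helper name `requireEnv` and the demo variable are mine, not from the question), this is the kind of fail-fast check that surfaces a missing variable at startup rather than as a 400 from the API:

```javascript
// Hypothetical helper: read an environment variable and fail loudly if it
// is missing, instead of silently embedding an undefined value (or the
// literal variable name) into a request URL.
function requireEnv(name) {
  const value = process.env[name];
  if (value === undefined || value === "") {
    throw new Error(`Missing environment variable: ${name}`);
  }
  return value;
}

// Demo with a made-up variable; in a Create React App project you would
// check REACT_APP_FIREBASE_API_KEY the same way.
process.env.DEMO_FIREBASE_KEY = "AIza-example";
console.log(requireEnv("DEMO_FIREBASE_KEY")); // AIza-example
```

Keep in mind that Create React App inlines `REACT_APP_*` variables at build time, so the dev server has to be restarted after editing `.env`.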
Q: Use of '#' in unexpected way There's a macro defined as: #define SET_ARRAY(field, type) \ foo.field = bar[#field].data<type>(); foo is a structure with members that are of type int or float *. bar is of type cnpy::npz_t (data loaded from .npz file). I understand that the macro is setting the structure member pointer so that it is pointing to the corresponding data in bar from the .npy file contained in the .npz file, but I'm wondering about the usage bar[#field]. When I ran the code through the preprocessor, I get: foo.struct_member_name = bar["struct_member_name"].data<float>(); but I've never seen that type of usage either. It looks like the struct member variable name is somehow getting converted to an array index or memory offset that resolves to the data within the cnpy::npz_t structure. Can anyone explain how that is happening? A: # is actually a preprocessor marker. That means preprocessor commands (not functions), formally called "preprocessor directives", are being executed at compile time. Apart from commands, you'll also find something akin to constants (meaning they have predefined values, either static or dynamic - yes I used the term constants loosely, but I am oversimplifying this right now), but they aren't constants "in that way", they just seem like that to us. A number of preprocessor commands that you will find are: #define, #include, #undef, #if (yes, different from the normal "if" in code), #elif, #endif, #error - all those must be prefixed by a "#". Some values might be the __FILE__, __LINE__, __cplusplus and more. These are not prefixed by #, but can be used in preprocessor macros. The values are dynamically set by the compiler, depending on context. For more information on macros, you can check the MS Learn page for MSVS or the GNU page for GCC. For other preprocessor values, you can also see this SourceForge page. And of course, you can define your own macro or pseudo-constants using the #define directive. 
#define test_integer 7 Using test_integer anywhere in your code (or macros) will be replaced by 7 after compilation. Note that macros are case-sensitive, just like everything else in C and C++. Now, let's talk about special cases of "#": string-izing a parameter What that means is you can pass a parameters and it is turned into a string, which is what happened in your case. An example: #define NAME_TO_STRING(x) (#x) std::cout << NAME_TO_STRING(Hello) << std::endl; This will turn Hello which is NOT a string, but an identifier, to a string. concatenating two parameters #define CONCAT(x1, x2) x1##x2 #define CONCAT_STRING(x1, x2) CONCAT(#x1,#x2) #define CONCATENATE(x1, x2) CONCAT_STRING(x1, x2) (yes, it doesn't work directly, you need a level of indirection for preprocessor concatenation to work; indirection means passing it again to a different macro). std::cout << CONCATENATE(Hello,World) << std::endl; This will turn Hello and World which are identifiers, to a concatenated string: HelloWorld. Now, regarding usage of # and ##, that's a more advanced topic. There are many use cases from macro-magic (which might seem cool when you see it implemented - for examples, check the Unreal Engine as it's extensively used there, but be warned, such programming methods are not encouraged), helpers, some constant definitions (think #define TERRA_GRAV 9.807) and even help in some compile-time checks, for example using constexpr from the newest standards. If you're curious what is the advantage of using #define versus a const float or double, it might also be to not be part of the code (there is no actual syntax check on macros if they are not used). 
In regards to helper macros, the most common are defining exports when building a library (search __declspec for MSVS and __attribute__ for GCC), the old style inclusion limitators (now replaced by #pragma once) to stop a *.h, *.hxx or *.hpp from being included multiple times in projects and debug handling (search for _DEBUG and assertions on Google). This paragraph handles slightly more advanced topics so I won't cover them here. I tried to keep the explanation as simple as possible, so the terminology is not that formal. But if you really are curious, I am sure you can find more details online or you can post a comment on this answer :) A: In C++, the # symbol is known as the "stringizing" or "stringify" operator. It is used in macros to convert a preprocessor token into a string literal. For example, in the macro you provided: #define SET_ARRAY(field, type) \ foo.field = bar[#field].data<type>(); the #field part of the macro expands to a string literal that contains the name of the field argument passed to the macro. So if the macro is called like this: SET_ARRAY(struct_member_name, float); the #field part of the macro would be replaced with "struct_member_name", so the expanded macro would look like this: foo.struct_member_name = bar["struct_member_name"].data<float>(); This is why the code appears to be using the struct member name as an array index - the string literal containing the member name is being used as the index for the bar array. The purpose of this macro is to make it easy to set a member of the foo structure to point to a piece of data in the bar array, based on the name of the structure member. So, for example, if the foo structure has a member named struct_member_name, then calling SET_ARRAY(struct_member_name, float) will set the struct_member_name member of foo to point to the data in the bar array with the key "struct_member_name", and the type of that data will be float.
Use of '#' in unexpected way
There's a macro defined as: #define SET_ARRAY(field, type) \ foo.field = bar[#field].data<type>(); foo is a structure with members that are of type int or float *. bar is of type cnpy::npz_t (data loaded from .npz file). I understand that the macro is setting the structure member pointer so that it is pointing to the corresponding data in bar from the .npy file contained in the .npz file, but I'm wondering about the usage bar[#field]. When I ran the code through the preprocessor, I get: foo.struct_member_name = bar["struct_member_name"].data<float>(); but I've never seen that type of usage either. It looks like the struct member variable name is somehow getting converted to an array index or memory offset that resolves to the data within the cnpy::npz_t structure. Can anyone explain how that is happening?
[ "# is actually a preprocessor marker. That means preprocessor commands (not functions), formally called \"preprocessor directives\", are being executed at compile time. Apart from commands, you'll also find something akin to constants (meaning they have predefined values, either static or dynamic - yes I used the term constants loosely, but I am oversimplifying this right now), but they aren't constants \"in that way\", they just seem like that to us.\nA number of preprocessor commands that you will find are:\n#define, #include, #undef, #if (yes, different from the normal \"if\" in code), #elif, #endif, #error - all those must be prefixed by a \"#\".\nSome values might be the __FILE__, __LINE__, __cplusplus and more. These are not prefixed by #, but can be used in preprocessor macros. The values are dynamically set by the compiler, depending on context.\nFor more information on macros, you can check the MS Learn page for MSVS or the GNU page for GCC. For other preprocessor values, you can also see this SourceForge page.\nAnd of course, you can define your own macro or pseudo-constants using the #define directive.\n#define test_integer 7\n\nUsing test_integer anywhere in your code (or macros) will be replaced by 7 after compilation. Note that macros are case-sensitive, just like everything else in C and C++.\nNow, let's talk about special cases of \"#\":\n\nstring-izing a parameter\nWhat that means is you can pass a parameters and it is turned into a string, which is what happened in your case. 
An example:\n#define NAME_TO_STRING(x) (#x)\n\nstd::cout << NAME_TO_STRING(Hello) << std::endl;\n\nThis will turn Hello which is NOT a string, but an identifier, to a string.\n\nconcatenating two parameters\n#define CONCAT(x1, x2) x1##x2\n#define CONCAT_STRING(x1, x2) CONCAT(#x1,#x2)\n#define CONCATENATE(x1, x2) CONCAT_STRING(x1, x2)\n\n(yes, it doesn't work directly, you need a level of indirection for preprocessor concatenation to work; indirection means passing it again to a different macro).\nstd::cout << CONCATENATE(Hello,World) << std::endl;\n\nThis will turn Hello and World which are identifiers, to a concatenated string: HelloWorld.\n\n\nNow, regarding usage of # and ##, that's a more advanced topic. There are many use cases from macro-magic (which might seem cool when you see it implemented - for examples, check the Unreal Engine as it's extensively used there, but be warned, such programming methods are not encouraged), helpers, some constant definitions (think #define TERRA_GRAV 9.807) and even help in some compile-time checks, for example using constexpr from the newest standards.\nIf you're curious what is the advantage of using #define versus a const float or double, it might also be to not be part of the code (there is no actual syntax check on macros if they are not used).\nIn regards to helper macros, the most common are defining exports when building a library (search __declspec for MSVS and __attribute__ for GCC), the old style inclusion limitators (now replaced by #pragma once) to stop a *.h, *.hxx or *.hpp from being included multiple times in projects and debug handling (search for _DEBUG and assertions on Google). This paragraph handles slightly more advanced topics so I won't cover them here.\nI tried to keep the explanation as simple as possible, so the terminology is not that formal. 
But if you really are curious, I am sure you can find more details online or you can post a comment on this answer :)\n", "In C++, the # symbol is known as the \"stringizing\" or \"stringify\" operator. It is used in macros to convert a preprocessor token into a string literal. For example, in the macro you provided:\n#define SET_ARRAY(field, type) \\\n foo.field = bar[#field].data<type>();\n\n\nthe #field part of the macro expands to a string literal that contains the name of the field argument passed to the macro. So if the macro is called like this:\nSET_ARRAY(struct_member_name, float);\n\nthe #field part of the macro would be replaced with \"struct_member_name\", so the expanded macro would look like this:\nfoo.struct_member_name = bar[\"struct_member_name\"].data<float>();\n\nThis is why the code appears to be using the struct member name as an array index - the string literal containing the member name is being used as the index for the bar array.\nThe purpose of this macro is to make it easy to set a member of the foo structure to point to a piece of data in the bar array, based on the name of the structure member. So, for example, if the foo structure has a member named struct_member_name, then calling SET_ARRAY(struct_member_name, float) will set the struct_member_name member of foo to point to the data in the bar array with the key \"struct_member_name\", and the type of that data will be float.\n" ]
[ 0, 0 ]
[]
[]
[ "c++", "c_preprocessor" ]
stackoverflow_0074657691_c++_c_preprocessor.txt
Q: Stylesheet not loaded because of MIME type I'm working on a website that uses Gulp.js to compile and browser sync to keep the browser synchronised with my changes. The Gulp.js task compiles everything properly, but on the website, I'm unable to see any style, and the console shows this error message: Refused to apply style from 'http://localhost:3000/assets/styles/custom-style.css' because its MIME type ('text/html') is not a supported stylesheet MIME type, and strict MIME checking is enabled. Now, I don't really understand why this happens. The HTML includes the file like this (which I am pretty sure is correct): <link rel="stylesheet" type="text/css" href="assets/styles/custom-style.css"/> And the style sheet is a merge between Bootstrap and Font Awesome styles for now (nothing custom yet). The path is correct as well, as this is the folder structure: index.html assets |-styles |-custom-style.css But I keep getting the error. What could it be? Is this something (maybe a setting?) for gulp/browsersync maybe? A: For Node.js applications, check your configuration: app.use(express.static(__dirname + '/public')); Notice that /public does not have a forward slash at the end, so you will need to include it in your href option of your HTML: href="/css/style.css" If you did include a forward slash (/public/) then you can just do href="css/style.css". A: The issue, I think, was with a CSS library starting with comments. While in development, I do not minify files and I don't remove comments. This meant that the stylesheet started with some comments, causing it to be seen as something different from CSS. Removing the library and putting it into a vendor file (which is ALWAYS minified without comments) solved the issue. Again, I'm not 100% sure this is a fix, but it's still a win for me as it works as expected now. A: In most cases, this happens simply because the CSS file path is wrong. So the web server returns a 404 status with a Not Found payload of HTML type.
The browser follows this (wrong) path from the <link rel="stylesheet" ...> tag with the intention of applying CSS styles. But the returned content type contradicts this, so the browser logs an error. A: This error can also come up when you're not referring to your CSS file properly. For example, if your link tag is <link rel="stylesheet" href="styles.css"> but your CSS file is named style.css (without the second s) then there is a good chance that you will see this error. A: I had this error for a Bootstrap template. <link href="starter-template.css" rel="stylesheet"> Then I removed the rel="stylesheet" from the link, i.e.: <link href="starter-template.css"> And everything works fine. Try this if you are using Bootstrap templates. A: I have changed my href to src. So from this: <link rel="stylesheet" href="dist/photoswipe.css"> to this: <link rel="stylesheet" src="dist/photoswipe.css"> It worked. I don't know why, but it did the job. A: Make a folder just below/above the style.css file as per the Angular structure and provide a link like <link href="vendor/bootstrap/css/bootstrap.min.css" rel="stylesheet">. A: Comments in your file will trip this. Some minifiers will not remove comments. Also, if you use Node.js and set your static files using express such as: app.use(express.static(__dirname + '/public')); You need to properly address the files. In my case both were the issue, so I prefixed my CSS links with "/css/styles.css". Example: <link type="text/css" rel="stylesheet" href="/css/styles.css"> This solution works because the path is the main reason for CSS not getting rendered. A: In addition to using: <base href="/"> Remove the rel="stylesheet" part from your CSS links: <link type="text/css" href="assets/styles/custom-style.css"/> A: I simply referenced the CSS file (an Angular theme in my case) in the styles section of my Angular 6 build configuration in angular.json: This does not answer the question, but it might be a suitable workaround, as it was for me.
A: I know it might be out of context but linking a non-existent file might cause this issue as it happened to me before. <!-- bootstrap grid --> <link rel="stylesheet" href="./css/bootstrap-grid.css" /> If this file does not exist you will face that issue. A: The problem is that if you have a relative path, and you navigate to a nested page, that would resolve to the wrong path: <link rel="stylesheet" href='./index.css'> so the simple solution was to remove the . since mine is a single-page application. Like this: <link rel="stylesheet" href='/index.css'> so it always resolves to /index.css There are a lot of answers to this question but none of them seem to really work. If you remove rel="stylesheet" it will stop the errors but won't apply the stylesheets. The real solution: Just remove the .. It works then. A: Some of the solutions mentioned in this post worked for me, but the CSS still did not apply on the page. Simply, I just moved the "css" directory into the "Assest/" directory and everything works fine. <link rel="stylesheet" type="text/css" href="assets/css/bootstrap.css"> <link rel="stylesheet" type="text/css" href="assets/css/site.css" > A: I got the same issue and then I checked that I wrote: <base href="./"> in index.html Then I changed to <base href="/"> And then it worked fine. A: Also for others using Angular-CLI and publishing to a sub-folder on the webserver, check this answer: When you're deploying to a non-root path within a domain, you'll need to manually update the <base href="/"> tag in your dist/index.html. In this case, you will need to update to <base href="/sub-folder/"> https://github.com/angular/angular-cli/issues/1080 A: I had this problem with a site I knew worked online when I moved it to localhost and PhpStorm.
This worked fine online: <link rel="stylesheet" href="/css/additional.css"> But for localhost I needed to get rid of the slash: <link rel="stylesheet" href="css/additional.css"> So I am reinforcing a few answers provided here already - it is likely to be a path or spelling mistake rather than any complicated server setup problem. The error in the console is a red herring; the network tab needs to be checked for the 404 first. Among the answers provided here are a few solutions that are not correct. The addition of type="text/html" or changing href to src is not the answer. If you want to have all of the attributes so it validates on the pickiest of validators and your IDE then the media value should be provided and the rel should be stylesheet, e.g.: <link rel="stylesheet" href="css/additional.css" type="text/css" media="all"> A: I have had the same problem. If your project's structure is like the following tree: index.html assets |-styles |-custom-style.css server |- server.js I recommend adding the following piece of code to server.js: var path = require('path') var express = require('express') var app = express() app.use('/assets', express.static(path.join(__dirname, "../assets"))); Note: path is a built-in Node.js module, so you don't need to install it via npm. A: You can open the Google Chrome tools, select the network tab, reload your page and find the file request of the CSS and look at what it has inside the file. Maybe you did something wrong when you merged the two libraries in your file, including some characters or headers not properly for CSS? A: Adding to a long list of answers, this issue also happened to me because I did not realize the path was wrong from a browser-sync point of view. Given this simple folder structure: package.json app |-index.html |-styles |-style.css The href attribute inside <link> in file index.html has to be app/styles/style.css and not styles/style.css.
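The relative-vs-absolute href confusion running through the answers above can be made concrete with the standard URL class (the page address here is made up for illustration):

```javascript
// Why "./index.css" breaks on nested routes of a single-page app while
// "/index.css" keeps working: a relative href resolves against the
// current page URL, an absolute one against the site root.
const page = "http://localhost:3000/users/42"; // hypothetical nested route

console.log(new URL("./index.css", page).pathname); // /users/index.css
console.log(new URL("/index.css", page).pathname);  // /index.css
```

The first request hits /users/index.css, the server replies with an HTML 404 page, and the browser reports the MIME type error from the question.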
A: In case you are using Express.js without any JavaScript code, try with: app.use(express.static('public')); As an example, my CSS file is at public/stylesheets/app.css. A: At times, this happens when the CSS file is not found. It's worth checking your base URL / path to the file. A: How I solved this. For Node.js applications, you need to set your **public** folder configuration. // Express js app.use(express.static(__dirname + '/public')); Otherwise, you need to use something like href="public/css/style.css". <link href="public/assets/css/custom.css"> <script src="public/assets/js/scripts.js"></script> Note: It will work for http://localhost:3000/public/assets/css/custom.css. But it didn't work after the build. You need to set app.use(express.static(__dirname + '/public')); for Express A: For a Node.js application, just use this after importing all the required modules in your server file: app.use(express.static(".")); (express.static is a built-in middleware function in Express) and this in your .html file: <link rel="stylesheet" href="style.css"> A: By going into my browser's console → Network → style.css ...clicked on it and it showed "cannot get /path/to/my/CSS", this told me my link was wrong. I changed that to the path of my CSS file. The original path before change was localhost:3000/Example/public/style.css. Changing it to localhost:3000/style.css solved it. If you are serving the file from app.use(express.static(path.join(__dirname, "public"))); or app.use(express.static("public")); your server would pass "that folder" to the browser, so adding a "/yourCssName.css" link in your browser solves it. By adding other routes in your browser CSS link, you'd be telling the browser to search for the CSS in the route specified. In summary: Check where your browser CSS link points to.
A: This is specific to TypeScript and Express.js I Ctrl + F'd "TypeScript" and ".ts" and found nothing in these answers, so I'll add my solution here, since it was caused by (my inexperience with) TypeScript, and the solutions I've read don't explicitly solve this particular issue. The problem was that TypeScript was compiling my app.ts file into a JavaScript file in my project's dist directory, dist/app.js. Here's my directory structure. See if you can spot the problem: . ├── app.ts ├── dist │   ├── app.js │   ├── app.js.map │   └── js │   ├── dbclient.js │   ├── dbclient.js.map │   ├── mutators.js │   └── mutators.js.map ├── public │   ├── css │   │   └── styles.css ├── tsconfig.json ├── tslint.json └── views ├── index.hbs └── results.hbs My problem is that in app.ts, I was telling express to set my public directory as /public, which would be a valid path if Node.js actually were running TypeScript. But Node.js is running the compiled JavaScript, app.js, which is in the dist directory. So having app.ts pretend it's dist/app.js solved my problem. Thus, I fixed the problem in app.ts by changing app.use(e.static(path.join(__dirname, "/public"))); to app.use(e.static(path.join(__dirname, "../public"))); A: https://github.com/froala/angular-froala/issues/170#issuecomment-386117678 Found the above solution of adding <base href="/"> just before the style tag in index.html A: I was working with the React application and also had this error which led me here. This is what helped me. Instead of adding <link> to the index.html, I added an import to the component where I need to use this style sheet: import 'path/to/stylesheet.css'; A: In my case, when I was deploying the package live, I had it out of the public HTML folder. It was for a reason. But apparently a strict MIME type check has been activated, and I am not too sure if it's on my side or by the company I am hosting with.
But as soon as I moved the styling folder into the same directory as the index.php file I stopped getting the error, and styling was activated perfectly. A: If you are setting Styles in JavaScript as: var cssLink = document.createElement("link"); cssLink.href = "./content.component.scss"; cssLink.rel = "stylesheet"; /* → */ cssLink.type = "html/css"; (iframe as HTMLIFrameElement).contentDocument.head.appendChild(cssLink); Then just change the field cssLink.type (denoted by the arrow in the above description) to "MIME": cssLink.type = "MIME"; It will help you to get rid of the error. A: Remove rel="stylesheet" and add type="text/html". So it will look like this - <link href="styles.css" type="text/html" /> A: Bootstrap styles not loading #3411 https://github.com/angular/angular-cli/issues/3411 I installed Bootstrap v. 3.3.7 npm install bootstrap --save Then I added the needed script files to apps[0].scripts in the angular-cli.json file: "scripts": [ "../node_modules/bootstrap/dist/js/bootstrap.js" ], // And the Bootstrap CSS to the apps[0].styles array "styles": [ "styles.css", "../node_modules/bootstrap/dist/css/bootstrap.css" ], I restarted ng serve It worked for me. A: If the browser cannot find a related CSS file, it could give this error. If you use an Angular application, you do not have to use the CSS file path in the index.html file: <link href="xxx.css" rel="stylesheet"> You could use the related CSS file path in the styles.css file. @import "../node_modules/material-design-icons-iconfont/dist/material-design-icons.css"; A: One of the main reasons for the issue is that the CSS file it is trying to load isn't a valid CSS file. Causes: Invalid MIME type Having JavaScript code inside style sheet - (may occur due to incorrect Webpack bundler configuration) Check that the file you're trying to load is a valid CSS style sheet (get the server URL of the file from the network tab and hit it in a new tab and verify).
Useful info for consideration when using <link> inside the body tag. Though having a link tag inside the body is not the standard way to use the tag, we can use it for page optimization (more information: Optimize CSS Delivery) / if the business use case demands it (when you serve the body of the content and the server is configured to render the HTML page with the content provided). While keeping it inside the body tag, we have to add the itemprop attribute to the link tag, like <body> <!-- … --> <link itemprop="url" href="http://en.wikipedia.org/wiki/The_Catcher_in_the_Rye" /> <!-- … --> </body> For more information on itemprop, have a look at Can I use <link> tags in the body of an HTML document?. A: This issue happens when you're using a command-line tool for either React or Angular, so the key is to copy the entire final build from those tools since they initialize their own light servers which confuses your URLs with the back end server you've created... Take that whole build folder and dump it on the asset folder of your back end server project and reference them from your back end server and not the server which ships with Angular or React. Otherwise, you're using it as the front end from a certain API server. A: In my case I had to both make sure that the link was relative and the rel property was after the href property: <link href="/assets/styles/iframe.css" rel="stylesheet"> A: I also had the same problem: here is the solution. Don't write public in the path of the CSS link in your .html file, even though your .css file is present in the public folder.
Example: JavaScript file app.use(express.static("public")); HTML file Use this <link href="styles.css" rel="stylesheet"> instead of <link href="index/styles.css" rel="stylesheet"> A: The solution from this thread solved my problem: Not sure if this will help anyone, but if you are using angular-cli, I fixed this by removing the CSS reference from my index.html and adding it to the angular-cli.json file under the "style" portion. After restarting my webserver I no longer had that issue. A: I came across this issue having the same problem adding a custom look and feel to an Azure B2C user flow. I found that the root that the HTML page referred to was ../oauth/v2 (i.e. the OAuth server path) rather than the path to my storage blob. Using the full URL of the pages fixed the problem for me. A: Check if you have compression enabled or disabled. If you use it or someone enabled it, then app.use(express.static(xxx)) won't help. Make sure your server allows for compression. I started to see a similar error when I added Brotli and Compression Plugins to my Webpack. Then your server needs to support this type of content compression too. If you are using Express.js then the following should help: app.use(url, expressStaticGzip(dir, gzipOptions)) The module is called: express-static-gzip My settings are: const gzipOptions = { enableBrotli: true, customCompressions: [{ encodingName: 'deflate', fileExtension: 'zz' }], orderPreference: ['br'] } A: None of the other comments helped me. The solution was to fix the routes: the file was found from the main page, but it disappeared when I navigated, so the correct fix is to have these paths properly set up. A: Triple check the name and path of the file.
In my case I had something like this content in the target folder: lib foobar.bundle.js foobr.css And this link: <link rel="stylesheet" href="lib/foobar.css"> I guess that the browser was trying to load the JavaScript file and complaining about its MIME type instead of giving me a file not found error. A: I had this error, in Angular. The way I solved it was to put an ngIf on my link element, so it didn't appear in the DOM until my dynamic URL was populated. It may be unrelated to the question a little bit, but I ended up here looking for an answer. <link *ngIf="cssUrl" rel="stylesheet" type="text/css" [href]="sanitizer.bypassSecurityTrustResourceUrl(cssUrl)"> A: I met this issue: the browser refused to apply the style from 'http://m.b2b-v2-pre1.jcloudec.com/mobile-dynamic-load-component-view/resource/js/resource/js/need/layer.css?2.0' because its MIME type ('text/html') is not a supported stylesheet MIME type, and strict MIME checking is enabled. Changing the path could solve this issue. A: I had the same problem, and in my case it was due to the fact that I had manually tried to import (s)CSS files in the HTML file (like we always do with static websites), when in fact the files had to be imported in the entry point JavaScript file used by Webpack. When the stylesheets were imported into the main JavaScript file, rather than manually written in the HTML, it all worked as expected. A: Maybe you have an authorization issue with Buddy: Try these steps: Go to IIS → find your project listed → click on it → Click on Edit permissions in the right pane under actions You will see your project properties wizard. Click on Security. Under groups or user names → If you see 'Authenticated users', then you are authorized; if not, then you have to add it. Once you add it (yourself or with the help of your administrator if you work in a company :)), the website will start loading resources. You may need to restart your project under IIS.
A: Yet another reason for this error may be that the CSS file permissions are incorrect. In my case the file was inaccessible because ownership had been changed to the root user-- which happened due to pushing Git files as the root user. A: In my case it was the execution order of tasks ran by Grunt. It was executing the task connect that sets up the local server and automatically opens the application in a new tab before executing less and postcss that transpile styles. Basically, I changed this: grunt.registerTask('default', 'Start server. Process styles.', ['connect', 'less', 'postcss']); To this: grunt.registerTask('default', 'Process styles. Start server.', ['less', 'postcss', 'connect']); And it worked! A: I faced a similar error and found that the error was adding '/' at the end of the style.css link href. Replacing <link rel="stylesheet" href="style.css/"> with <link rel="stylesheet" href="style.css"> fixed the issue. A: In case you're working with a Node.js application, make sure that the \public folder is immediately below the root folder, and not within the views folder. This can become troublesome. Move the \public immediately below the root and then restart the server and witness the changes. A: I almost tried all given solutions. The problem for me was I didn't have any MIME types option in IIS, that is, I was missing this Windows feature. The solution for me was: "And if you're on a non-server OS like Windows 8 or 10, do a search on the start page for "Turn Windows features on or off" and enable: Internet Information Services -> World Wide Web Services -> Common HTTP Features -> Static Content" Enable IIS Static Content A: I was facing the same problem. I change my directories' permission to 755 ((U)ser / owner can read, can write and can execute. (G)roup can read, can't write and can execute. (O)thers can read, can't write and can execute.) and now all the files are loading. You can also try my answer. I hope this will work for you. 
A: I faced this challenge with Select2. It got resolved after I downloaded the latest version of the library and replaced the existing one (the CSS and JavaScript files) in my project. A: In my case the problem was solved just after changing a random value in the .scss file. After reloading, the problem disappeared and the styles began to load well. Simple, but it works :P A: I have used a virtual domain in my XAMPP installation and got this issue. When I checked the httpd-vhosts.conf file, I had explicitly pointed to the index.php file, and this had caused the issue. So I changed this: <VirtualHost *:80> DocumentRoot "C:/xampp/htdocs/cv/index.php" ServerName cv.dv </VirtualHost> to this: <VirtualHost *:80> DocumentRoot "C:/xampp/htdocs/cv/" ServerName cv.dv </VirtualHost> And then the files were loaded without issues. A: In my Angular-Ionic project I have a CSS entry like this for a component which only loads on request. .searchBar { // --placeholder-color: white; // --color: white; // --icon-color: white; // --border-radius: 20px; // --background: color:rgba(73,76,67,1.0); // --placeholder-opacity: 100%; // background-color: red; } As soon as I commented out all the values inside the CSS class entry, it started working again. I think this was happening due to having the properties background-color and --background together. A: I would like to share some thoughts on this topic: when I use app.use(express.static('../frontEnd/public')) it doesn't work, but when I use app.use(express.static('/frontEnd/public')) it works fine. A: Please also check for typos in your filename. 
This happened to me: style.css.css A: Add Express.js default static middleware as: Import express const express = require("express") Initialize express const app = express() app.use(express.static(__dirname)); Then you need to use complete path of public directory For example: Your file structure inside public directory public > css > style.css Your path will be <link rel="stylesheet" type="text/css" href="./public/css/style.css"/> Note 1: You can use any name instead of public, i.e., assets myassets any other. Note 2: your server should be running in the main directory. If your server is running in a sub directory, then you can come out by using ../ A: I was facing the same issue in EJS and Node.js. I have set up my view engine's public folder like this: app.use('/public', express.static(process.cwd() + '/public')); app.set('view engine', 'ejs'); And my directory structure was like this public | -assets | --CSS |---style.css I was trying to link my Style.css in EJS head like this <link href="/assets/css/style.css" rel="stylesheet" type="text/css"> Adding the public solved the problem (make sure to check network tab) <link href="/public/assets/css/style.css" rel="stylesheet" type="text/css"> A: It's the path that is wrong. My path was href="public/css/style.css". <!-- But it should be href="public/style.css" --> And it didn't work. I tried all other ways, but still it did not work. So I copied the relative path, changed the backslash to forward slash, and then it started to work. If you're not sure of the path, right click the style.css file, click "Copy Relative Path", and then paste it into href="public\style.css" You just need to change the backward slash to a forward slash. href="public/style.css" A: I had the same problem and after hours of debugging I found out that it is related to temporary files not able to be written. Go to Admin → Config → File system. Set the correct temporary directory. Make sure your server has permission to write to that directory. 
A: Resaving a .ts file (forcing Ionic to rebuild) solved it for me. It doesn't really make sense... but as long as it works, who am I to judge? Here I have seen this workaround: Ionic and Angular 2 - Refused to apply style from 'http://localhost:8100/build/main.css' because its MIME type ('text/html') is not a supported A: I tried to restart my Windows machine and reinstalled the "npm i". It worked for me. A: Faced the similar issue and solved it using this simple fix. If your project is React based then instead of importing your "styles.css" in "index.html", import it in "index.js" which generally resides inside "src" folder of your project. This will make sure that all your routes inside your React Application has access to the styles file. A: For nodejs users, use this. This should solve the problem for Node. app.use(express.static('static')); A: One solution that worked for me was changing the css filename from "style.css" to another name, like "component.css". Worked like a charm. A: Another cause can be the 2 apache .conf files, where, if your configuration forces https, then your server will overlook variables set in your sites-enabled/http-default.conf. For example, if you have "/static" defined in http-default.conf but not https-ssl.conf then your static files may not get found, ie 404. A: The path was different for localhost and while accessible via file:// & when deployed...
Stylesheet not loaded because of MIME type
I'm working on a website that uses Gulp.js to compile and browser sync to keep the browser synchronised with my changes. The Gulp.js task compiles everything properly, but on the website, I'm unable to see any style, and the console shows this error message: Refused to apply style from 'http://localhost:3000/assets/styles/custom-style.css' because its MIME type ('text/html') is not a supported stylesheet MIME type, and strict MIME checking is enabled. Now, I don't really understand why this happens. The HTML includes the file like this (which I am pretty sure is correct): <link rel="stylesheet" type="text/css" href="assets/styles/custom-style.css"/> And the style sheet is a merge between Bootstrap and Font Awesome styles for now (nothing custom yet). The path is correct as well, as this is the folder structure: index.html assets |-styles |-custom-style.css But I keep getting the error. What could it be? Is this something (maybe a setting?) for gulp/browsersync maybe?
[ "For Node.js applications, check your configuration:\napp.use(express.static(__dirname + '/public'));\n\nNotice that /public does not have a forward slash at the end, so you will need to include it in your href option of your HTML:\nhref=\"/css/style.css\">\n\nIf you did include a forward slash (/public/) then you can just do href=\"css/style.css\".\n", "The issue, I think, was with a CSS library starting with comments.\nWhile in development, I do not minify files and I don't remove comments. This meant that the stylesheet started with some comments, causing it to be seen as something different from CSS.\nRemoving the library and putting it into a vendor file (which is ALWAYS minified without comments) solved the issue.\nAgain, I'm not 100% sure this is a fix, but it's still a win for me as it works as expected now.\n", "In most cases, this could be simply the CSS file path is wrong. So the web server returns status: 404 with some Not Found content payload of html type.\nThe browser follows this (wrong) path from <link rel=\"stylesheet\" ...> tag with the intention of applying CSS styles. But the returned content type contradicts so that it logs an error.\n\n", "This error can also come up when you're not referring to your CSS file properly.\nFor example, if your link tag is\n<link rel=\"stylesheet\" href=\"styles.css\">\n\nbut your CSS file is named style.css (without the second s) then there is a good chance that you will see this error.\n", "I had this error for a Bootstrap template.\n<link href=\"starter-template.css\" rel=\"stylesheet\">\n\nThen I removed the rel=\"stylesheet\" from the link, i.e.:\n<link href=\"starter-template.css\">\n\nAnd everything works fine. Try this if you are using Bootstrap templates.\n", "I have changed my href to src. So from this:\n<link rel=\"stylesheet\" href=\"dist/photoswipe.css\">\n\nto this:\n<link rel=\"stylesheet\" src=\"dist/photoswipe.css\">\n\nIt worked. 
I don't know why, but it did the job.\n", "Make a folder just below/above the style.css file as per the Angular structure and provide a link like <link href=\"vendor/bootstrap/css/bootstrap.min.css\" rel=\"stylesheet\">.\n\n", "Comments in your file will trip this. Some minifiers will not remove comments.\nAlso\nIf you use Node.js and set your static files using express such as:\napp.use(express.static(__dirname + '/public'));\n\nYou need to properly address the files.\nIn my case both were the issue, so I prefixed my CSS links with \"/css/styles.css\".\nExample:\n<link type=\"text/css\" rel=\"stylesheet\" href='/css/styles.css\">\n\nThis solution is perfect as the path is the main issue for CSS not getting rendering\n", "In addition to using:\n<base href=\"/\">\n\nRemove the rel=\"stylesheet\" part from your CSS links:\n<link type=\"text/css\" href=\"assets/styles/custom-style.css\"/>\n\n", "I simply referenced the CSS file (an Angular theme in my case) in the styles section of my Angular 6 build configuration in angular.json:\n\nThis does not answer the question, but it might be a suitable workaround, as it was for me.\n", "I know it might be out of context but linking a non existed file might cause this issue as it happened to me before.\n<!-- bootstrap grid -->\n<link rel=\"stylesheet\" href=\"./css/bootstrap-grid.css\" />\n\nIf this file does not exist you will face that issue.\n", "The problem is that if you have a relative path, and you navigate to a nested page, that would resolve to the wrong path:\n<link rel=\"stylesheet\" href='./index.css'>\n\nso the simple solution was to remove the . since mine is a single-page application.\nLike this:\n<link rel=\"stylesheet\" href='/index.css'>\n\nso it always resolves to /index.css\nThere are a lot of answers to this question but none of them seem to really work. If you remove rel=\"stylesheet\" it will stop the errors but won't apply the stylesheets.\nThe real solution:\nJust remove the .. 
It works then.\n", "As mentioned solutions in this post, some of the solutions worked for me, but CSS does not apply on the page.\nSimply, I just moved the \"css\" directory into the \"Assest/\" directory and everything works fine.\n<link rel=\"stylesheet\" type=\"text/css\" href=\"assets/css/bootstrap.css\">\n<link rel=\"stylesheet\" type=\"text/css\" href=\"assets/css/site.css\" >\n\n", "I got the same issue and then I checked that I wrote:\n<base href=\"./\"> in index.html\nThen I changed to\n<base href=\"/\">\n\nAnd then it worked fine.\n", "Also for others using Angular-CLI and publishing to a sub-folder on the webserver, check this answer:\nWhen you're deploying to a non-root path within a domain, you'll need to manually update the <base href=\"/\"> tag in your dist/index.html.\nIn this case, you will need to update to <base href=\"/sub-folder/\"> \nhttps://github.com/angular/angular-cli/issues/1080\n", "I had this problem with a site I knew worked online when I moved it to localhost and PhpStorm.\nThis worked fine online:\n<link rel=\"stylesheet\" href=\"/css/additional.css\">\n\nBut for localhost I needed to get rid of the slash:\n<link rel=\"stylesheet\" href=\"css/additional.css\">\n\nSo I am reinforcing a few answers provided here already - it is likely to be a path or spelling mistake rather than any complicated server setup problem. The error in the console is a red herring; the network tab needs to be checked for the 404 first.\nAmong the answers provided here are a few solutions that are not correct. 
The addition of type=\"text/html\" or changing href to src is not the answer.\nIf you want to have all of the attributes so it validates on the pickiest of validators and your IDE then the media value should be provided and the rel should be stylesheet, e.g.:\n<link rel=\"stylesheet\" href=\"css/additional.css\" type=\"text/css\" media=\"all\">\n\n", "I have had the same problem.\nIf your project's structure is like the following tree:\nindex.html\nassets\n|-styles\n |-custom-style.css\nserver\n |- server.js\n\nI recommend to add the following piece of code in server.js:\nvar path = require('path')\nvar express = require('express')\nvar app = express()\n\napp.use('/assets', express.static(path.join(__dirname, \"../assets\")));\n\nNote: Path is a built-in Node.js module, so it doesn't need to install this package via npm.\n", "You can open the Google Chrome tools, select the network tab, reload your page and find the file request of the CSS and look for what it have inside the file.\nMaybe you did something wrong when you merged the two libraries in your file, including some characters or headers not properly for CSS?\n", "Adding to a long list of answers, this issue also happened to me because I did not realize the path was wrong from a browser-sync point of view.\nGiven this simple folder structure:\npackage.json\napp\n |-index.html\n |-styles\n |-style.css\n\nThe href attribute inside <link> in file index.html has to be app/styles/style.css and not styles/style.css.\n", "In case you are using Express.js without any JavaScript code, try with:\napp.use(express.static('public'));\n\nAs an example, my CSS file is at public/stylesheets/app.css.\n", "At times, this happens when the CSS file is not found. 
It's worth checking your base URL / path to the file.\n", "How I solved this.\nFor Node.js applications, you need to set your **public** folder configuration.\n// Express js\napp.use(express.static(__dirname + '/public'));\n\nOtherwise, you need to do like href=\"public/css/style.css\".\n<link href=\"public/assets/css/custom.css\">\n<script src=\"public/assets/js/scripts.js\"></script>\n\n\nNote: It will work for http://localhost:3000/public/assets/css/custom.css. But couldn't work after build. You need to set app.use(express.static(__dirname + '/public')); for Express\n\n", "For a Node.js application, just use this after importing all the required modules in your server file:\napp.use(express.static(\".\"));\n\n\nexpress.static built-in middleware function in Express and this in your .html file: <link rel=\"stylesheet\" href=\"style.css\">\n\n", "By going into my browsers console → Network → style.css ...clicked on it and it showed \"cannot get /path/to/my/CSS\", this told me my link was wrong.\nI changed that to the path of my CSS file.\nThe original path before change was localhost:3000/Example/public/style.css. 
Changing it to localhost:3000/style.css solved it.\nIf you are serving the file from app.use(express.static(path.join(__dirname, \"public\"))); or app.use(express.static(\"public\")); your server would pass \"that folder\" to the browser so adding a \"/yourCssName.css\" link in your browser solves it\nBy adding other routes in your browser CSS link, you'd be telling the browser to search for the css in route specified.\nIn summary: Check where your browser CSS link points to.\n", "This is specific to TypeScript and Express.js\nI Ctrl + F'd \"TypeScript\" and \".ts\" and found nothing in these answers, so I'll add my solution here, since it was caused by (my inexperience with) TypeScript, and the solutions I've read don't explicit solve this particular issue.\nThe problem was that TypeScript was compiling my app.ts file into a JavaScript file in my project's dist directory, dist/app.js.\nHere's my directory structure. See if you can spot the problem:\n.\n├── app.ts\n├── dist\n│   ├── app.js\n│   ├── app.js.map\n│   └── js\n│   ├── dbclient.js\n│   ├── dbclient.js.map\n│   ├── mutators.js\n│   └── mutators.js.map\n├── public\n│   ├── css\n│   │   └── styles.css\n├── tsconfig.json\n├── tslint.json\n└── views\n ├── index.hbs\n └── results.hbs\n\nMy problem is that in app.ts, I was telling express to set my public directory as /public, which would be a valid path if Node.js actually were running TypeScript. But Node.js is running the compiled JavaScript, app.js, which is in the dist directory.\nSo having app.ts pretend it's dist/app.js solved my problem. 
Thus, I fixed the problem in app.ts by changing\napp.use(e.static(path.join(__dirname, \"/public\")));\n\nto\napp.use(e.static(path.join(__dirname, \"../public\")));\n\n", "https://github.com/froala/angular-froala/issues/170#issuecomment-386117678\nFound the above solution of adding\nhref=\"/\">\n\nJust before the style tag in index.html\n\n", "I was working with the React application and also had this error which led me here. This is what helped me.\nInstead of adding <link> to the index.html, I added an import to the component where I need to use this style sheet:\nimport 'path/to/stylesheet.css';\n\n", "In my case, when I was deploying the package live, I had it out of the public HTML folder. It was for a reason.\nBut apparently a strict MIME type check has been activated, and I am not too sure if it's on my side or by the company I am hosting with.\nBut as soon as I moved the styling folder in the same directory as the index.php file I stopped getting the error, and styling was activated perfectly.\n", "If you are setting Styles in JavaScript as:\n var cssLink = document.createElement(\"link\");\n cssLink.href = \"./content.component.scss\";\n cssLink.rel = \"stylesheet\";\n\n /* → */ cssLink.type = \"html/css\";\n (iframe as HTMLIFrameElement).contentDocument.head.appendChild(cssLink);\n\nThen just change field cssLint.type (denoted by the arrow in the above description) to \"MIME\":\n cssLink.type = \"MIME\";\n\nIt will help you to get rid of the error.\n", "Remove rel=\"stylesheet\" and add type=\"text/html\". So it will look like this -\n<link href=\"styles.css\" type=\"text/html\" />\n\n", "Bootstrap styles not loading #3411\nhttps://github.com/angular/angular-cli/issues/3411\n\nI installed Bootstrap v. 
3.3.7\nnpm install bootstrap --save\n\nThen I added the needed script files to apps[0].scripts in the angular-cli.json file:\n\"scripts\": [\n \"../node_modules/bootstrap/dist/js/bootstrap.js\"\n],\n\n// And the Bootstrap CSS to the apps[0].styles array\n\n\"styles\": [\n \"styles.css\",\n \"../node_modules/bootstrap/dist/css/bootstrap.css\"\n],\n\nI restarted ng serve\n\nIt worked for me.\n", "If the browser can not find a related CSS file, it could give this error.\nIf you use an Angular application you do not have to use a CSS file path in file index.html:\n<link href=\"xxx.css\" rel=\"stylesheet\"> \n\nYou could use the related CSS file path in the styles.css file.\n@import \"../node_modules/material-design-icons-iconfont/dist/material-design-icons.css\";\n\n", "One of the main reasons for the issue is the CSS file, which it is trying to load, isn't a valid CSS file.\nCauses:\n\nInvalid MIME type\nHaving JavaScript code inside style sheet - (may occur due to incorrect Webpack bundler configuration)\n\nCheck the file which you're trying to load is a valid CSS style sheet (get the server URL of the file from the network tab and hit in a new tab and verify).\nUseful info for consideration when using <link> inside the body tag.\nThough having a link tag inside the body is not the standard way to use the tag. 
But we can use it for page optimization (more information: Optimize CSS Delivery) / if the business use case demands (when you serve the body of the content and server configured to have to render the HTML page with content provided).\nWhile keeping inside the body tag we have to add the attribute itemProperty in the link tag like\n<body>\n <!-- … -->\n <link itemprop=\"url\" href=\"http://en.wikipedia.org/wiki/The_Catcher_in_the_Rye\" />\n <!-- … -->\n</body>\n\nFor more information on itemProperty, have a look in Can I use <link> tags in the body of an HTML document?.\n", "This issue happens when you're using a command-line tool for either React or Angular, so the key is to copy the entire final build from those tools since they initialize their own light servers which confuses your URLs with the back end server you've created...\nTake that whole build folder and dump it on the asset folder of your back end server project and reference them from your back end server and not the server which ships with Angular or React. Otherwise, you're using it as the front end from a certain API server.\n", "In my case I had to both make sure that the link was relative and the rel property was after the href property:\n<link href=\"/assets/styles/iframe.css\" rel=\"stylesheet\"> \n\n", "I also had the same problem: here is the solution. Don't write public in path of the CSS link in your .html file. 
Although your .css file present in the public folder.\nExample:\nJavaScript file\napp.use(express.static(\"public\"));\n\nHTML file\nUse this\n<link href=\"styles.css\" rel=\"stylesheet\">\n\ninstead of\n<link href=\"index/styles.css\" rel=\"stylesheet\">\n\n", "The solution from this thread solved my problem:\n\nNot sure if this will help anyone, but if you are using angular-cli, I fixed this by removing the CSS reference from my index.html and adding it to the angular-cli.json file under the \"style\" portion.\n After restarting my webserver I no longer had that issue.\n\n", "I came across this issue having the same problem adding a custom look and feel to an Azure B2C user flow. I found that the root that the HTML page referred to was ../oauth/v2 (i.e. the OAuth server path) rather than the path to my storage bob.\nUsing the full URL of the pages fixed the problem for me.\n", "Check if you have a compression enabled or disabled. If you use it or someone enabled it, then app.use(express.static(xxx)) won't help. Make sure your server allows for compression.\nI started to see the similar error when I added Brotli and Compression Plugins to my Webpack. Then your server needs to support this type of content compression too.\nIf you are using Express.js then the following should help:\napp.use(url, expressStaticGzip(dir, gzipOptions)\nModule is called: express-static-gzip\nMy settings are:\nconst gzipOptions = {\n enableBrotli: true,\n customCompressions: [{\n encodingName: 'deflate',\n fileExtension: 'zz'\n }],\n orderPreference: ['br']\n}\n\n", "Nothing that everyone comments has helped me, the solution has been to fix the routes since from the main page I found the file but when I navigated it disappeared, the correct solution would say to have these things well placed\n\n\n", "Triple check the name and path of the file. 
In my case I had something like this content in the target folder:\nlib\n foobar.bundle.js\n foobr.css\n\nAnd this link:\n<link rel=\"stylesheet\" href=\"lib/foobar.css\">\n\nI guess that the browser was trying to load the JavaScript file and complaining about its MIME type instead of giving me a file not found error.\n", "I had this error, in Angular. The way I solved it was to put an ngIf on my link element, so it didn't appear in the DOM until my dynamic URL was populated.\nIt may be unrelated to the question a little bit, but I ended up here looking for an answer.\n<link *ngIf=\"cssUrl\" rel=\"stylesheet\" type=\"text/css\" [href]=\"sanitizer.bypassSecurityTrustResourceUrl(cssUrl)\">\n\n", "I met this issue.\nI refused to apply the style from 'http://m.b2b-v2-pre1.jcloudec.com/mobile-dynamic-load-component-view/resource/js/resource/js/need/layer.css?2.0' because its MIME type ('text/html') is not a supported stylesheet MIME type, and strict MIME checking is enabled.\nChanging the path could solve this issue.\n", "I had the same problem, and in my case it was due to the fact that I had manually tried to import (s)CSS files in the HTML file (like we always do with static websites), when in fact the files had to be imported in the entry point JavaScript file used by Webpack.\nWhen the stylesheets were imported into the main JavaScript file, rather than manually written in the HTML, it all worked as expected.\n", "Maybe you have an authorization issue with Buddy:\nTry these steps:\n\nGo to ISS → find your project listed → click on it → Click on Edit permissions in the right pane under actions\nYou will see your project properties wizard. Click on Securities\nUnder groups or user names → If you see 'Authenticated users', then you are authorized if not then you have to it.\nOnce you add it (yourself or with the help of your administrator if you work in a company :)), the website will start loading resources. 
You may need to restart your project under ISS.\n\n", "Yet another reason for this error may be that the CSS file permissions are incorrect. In my case the file was inaccessible because ownership had been changed to the root user-- which happened due to pushing Git files as the root user.\n", "In my case it was the execution order of tasks ran by Grunt.\nIt was executing the task connect that sets up the local server and automatically opens the application in a new tab before executing less and postcss that transpile styles.\nBasically, I changed this:\ngrunt.registerTask('default', 'Start server. Process styles.', ['connect', 'less', 'postcss']);\n\nTo this:\ngrunt.registerTask('default', 'Process styles. Start server.', ['less', 'postcss', 'connect']);\n\nAnd it worked!\n", "I faced a similar error and found that the error was adding '/' at the end of the style.css link href.\nReplacing <link rel=\"stylesheet\" href=\"style.css/\"> with <link rel=\"stylesheet\" href=\"style.css\"> fixed the issue.\n", "In case you're working with a Node.js application, make sure that the \\public folder is immediately below the root folder, and not within the views folder.\nThis can become troublesome. Move the \\public immediately below the root and then restart the server and witness the changes.\n", "I almost tried all given solutions. The problem for me was I didn't have any MIME types option in IIS, that is, I was missing this Windows feature.\nThe solution for me was:\n\n\"And if you're on a non-server OS like Windows 8 or 10, do a search on the start page for \"Turn Windows features on or off\" and enable: Internet Information Services -> World Wide Web Services -> Common HTTP Features -> Static Content\"\n\nEnable IIS Static Content\n\n", "I was facing the same problem.\nI change my directories' permission to 755 ((U)ser / owner can read, can write and can execute. (G)roup can read, can't write and can execute. (O)thers can read, can't write and can execute.) 
and now all the files are loading.\nYou can also try my answer. I hope this will work for you.\n", "I faced this challenge with Select2. It got resolved after I downloaded the latest version of the library and replaced the one (the CSS and JavaScript files) in my project.\n", "In my case the problem was solved just after changing the random value in the .scss file. After reloading the problem disappeared and the styles began to load well. Simple, but works :P\n", "I have used a virtual domain in my XAMPP installation and got this issue. So in the httpd-vshosts.conf file when I checked, I had explicitly pointed to the index.php file and this had caused the issue.\nSo I changed this:\n<VirtualHost *:80>\n DocumentRoot \"C:/xampp/htdocs/cv/index.php\"\n ServerName cv.dv\n</VirtualHost>\n\nto this:\n<VirtualHost *:80>\n DocumentRoot \"C:/xampp/htdocs/cv/\"\n ServerName cv.dv\n</VirtualHost>\n\nAnd then the files were loaded without issues.\n", "In my Angular-Ionic project I've a CSS entry like this for a component which would only load when I request.\n .searchBar {\n // --placeholder-color: white;\n // --color: white;\n // --icon-color: white;\n // --border-radius: 20px;\n // --background: color:rgba(73,76,67,1.0);\n // --placeholder-opacity: 100%;\n // background-color: red;\n }\n\nAs soon as I commented out all the values inside the CSS class entry, it started working again.\nI think this was happening due to having these properties background-color with --background together.\n", "I would like this share some thoughts on this topic, when I place\napp.use(expres.static('../frontEnd/public'))\n\nit don't work, but when I use\napp.use(express.static('/frontEnd/public'))\n\nit works fine.\n", "Please also check for typos in your filename. 
This happened to me:\nstyle.css.css\n\n", "Add Express.js default static middleware as:\nImport express const express = require(\"express\")\nInitialize express const app = express()\napp.use(express.static(__dirname));\n\nThen you need to use complete path of public directory\nFor example:\nYour file structure inside public directory\npublic > css > style.css\nYour path will be\n<link rel=\"stylesheet\" type=\"text/css\" href=\"./public/css/style.css\"/>\n\nNote 1:\nYou can use any name instead of public, i.e., assets myassets any other.\nNote 2: your server should be running in the main directory.\nIf your server is running in a sub directory, then you can come out by using ../\n", "I was facing the same issue in EJS and Node.js.\nI have set up my view engine's public folder like this:\napp.use('/public', express.static(process.cwd() + '/public'));\n\napp.set('view engine', 'ejs');\n\nAnd my directory structure was like this\npublic\n|\n-assets\n |\n --CSS\n |---style.css\n\nI was trying to link my Style.css in EJS head like this\n<link href=\"/assets/css/style.css\" rel=\"stylesheet\" type=\"text/css\">\n\nAdding the public solved the problem (make sure to check network tab)\n<link href=\"/public/assets/css/style.css\" rel=\"stylesheet\" type=\"text/css\">\n\n", "It's the path that is wrong.\nMy path was\nhref=\"public/css/style.css\". <!-- But it should be href=\"public/style.css\" -->\n\nAnd it didn't work. I tried all other ways, but still it did not work. So I copied the relative path, changed the backslash to forward slash, and then it started to work.\nIf you're not sure of the path, right click the style.css file, click \"Copy Relative Path\", and then paste it into\n href=\"public\\style.css\"\n\nYou just need to change the backward slash to a forward slash.\n href=\"public/style.css\"\n\n\n", "I had the same problem and after hours of debugging I found out that it is related to temporary files not able to be written.\nGo to Admin → Config → File system. 
Set the correct temporary directory. Make sure your server has permission to write to that directory.\n", "Resaving a .ts file (forcing Ionic to rebuild) solved it for me.\nIt doesn't really make sense... but as long as it works, who am I to judge?\nHere I have seen this workaround:\nIonic and Angular 2 - Refused to apply style from 'http://localhost:8100/build/main.css' because its MIME type ('text/html') is not a supported\n", "I tried to restart my Windows machine and reinstalled the \"npm i\".\nIt worked for me.\n", "Faced the similar issue and solved it using this simple fix. If your project is React based then instead of importing your \"styles.css\" in \"index.html\", import it in \"index.js\" which generally resides inside \"src\" folder of your project. This will make sure that all your routes inside your React Application has access to the styles file.\n", "For nodejs users, use this.\nThis should solve the problem for Node.\napp.use(express.static('static'));\n", "One solution that worked for me was changing the css filename from \"style.css\" to another name, like \"component.css\". Worked like a charm.\n", "Another cause can be the 2 apache .conf files, where, if your configuration forces https, then your server will overlook variables set in your sites-enabled/http-default.conf. For example, if you have \"/static\" defined in http-default.conf but not https-ssl.conf then your static files may not get found, ie 404.\n", "The path was different for localhost and while accessible via file:// & when deployed...\n" ]
[ 271, 177, 156, 139, 68, 64, 53, 28, 25, 22, 19, 18, 18, 16, 15, 13, 10, 9, 8, 8, 8, 7, 6, 6, 6, 6, 5, 4, 4, 4, 4, 4, 3, 3, 3, 3, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "You must have imported multiple style sheets. Try to remove one and try again.\n" ]
[ -6 ]
[ "browser_sync", "css", "gulp", "html", "mime_types" ]
stackoverflow_0048248832_browser_sync_css_gulp_html_mime_types.txt
Q: CSS border bottom on Navigation bar I have a navigation bar and I added a red line on the bottom when hovering any item of the list, but I want to move that red line under the header (something like "Services"), any idea how to achieve this? I added an small sample in codepen so you can easily check the HTML and CSS code header { background-color: lightblue; padding-top: 1rem; position: sticky; top: 0; display: flex; align-items: center; justify-content: space-around; } header nav { min-width: 50%; } header nav ul { margin: 0; height: 100%; list-style: none; padding-left: 0; display: flex; align-items: center; justify-content: space-between; } header li:hover { height: 100%; border-bottom: 2px solid red; } <header> <a href="/"> <p>Whatever logo</p> </a> <nav> <ul> <li>About us</li> <li>Services</li> <li>Pricing</li> <li>Blog</li> </ul> </nav> <a href="/">CONTACT</a> </header> Link to check the code A: You can fix the header height and also fix the height of navbar items. Also, you had one issue where on hover li elements are moving. You can also fix that with always adding border with transparent color to the element, so the overall height of the element won't change on hover state. Here is the fixed CSS header { background-color: lightblue; position: sticky; display: flex; height: 60px; align-items: center; justify-content: space-around; } header nav { min-width: 50%; } header nav ul { margin: 0; height: 100%; list-style: none; padding-left: 0; display: flex; align-items: center; justify-content: space-between; height: 60px; } header li { display: flex; align-items: center; border-bottom: 2px solid transparent; height: 60px; } header li:hover { border-bottom: 2px solid red; } https://codepen.io/swarajgk/pen/JjZewPo?editors=1100 A: I think just giving height to all list elements the same as the header will work. 
Like this:- header { background-color: lightblue; padding-top: 1rem; height: 3rem; position: sticky; top: 0; display: flex; align-items: center; justify-content: space-around; } header nav { min-width: 50%; height : 100%; } header nav ul { margin: 0; height: 100%; list-style: none; padding-left: 0; display: flex; align-items: center; justify-content: space-between; } header li{ height: inherit; } header li:hover { border-bottom: 2px solid red; } <body> <header> <a href="/" ><p>Whatever logo</p></a> <nav> <ul> <li>About us</li> <li>Services</li> <li>Pricing</li> <li>Blog</li> </ul> </nav> <a href="/">CONTACT</a> </header> </body> A: Hope this solves the issue. header { background-color: lightblue; padding-top: 1rem; height: 3rem; position: sticky; top: 0; display: flex; align-items: center; justify-content: space-around; } header nav { min-width: 50%; height : 100%; } header nav ul { margin: 0; height: 100%; list-style: none; padding-left: 0; display: flex; align-items: center; justify-content: space-between; } header li{ height: inherit; } header li:hover { border-bottom: 2px solid red; } A: I'd suggest the following approach, with explanatory comments in the CSS: /* removing default padding and margin from all elements, and forcing the browser to use the same sizing algorithm - border-box - to calculate element sizes, including the padding and border widths in the declared size: */ *, ::before, ::after { box-sizing: border-box; padding: 0; margin: 0; } /* setting common properties for the two element groups: */ header, header nav ul { /* using display: flex layout: */ display: flex; /* forcing the flex-items within the flex parent to take the full height of that parent: */ align-items: stretch; } header { background-color: lightblue; block-size: 3em; position: sticky; justify-content: space-around; } /* using :is() to combine the two selectors header a, header li into one selector: */ header :is(a, li) { /* using grid layout: */ display: grid; /* positioning the - 
including text - content at the center of the element: */ place-items: center; } header nav { min-width: 50%; } header nav ul { /* the <ul> isn't a flex-item so we have to specify that we want it to take all available space on the block-axis (equivalent to 'height' in left-to-right languages such as English): */ block-size: 100%; list-style: none; justify-content: space-between; } header li { /* to prevent the jumping content: */ border-bottom: 2px solid transparent; } header li:hover { /* to style the color of the bottom border: */ border-bottom-color: red; } <header> <a href="/"> <p>Whatever logo</p> </a> <nav> <ul> <li>About us</li> <li>Services</li> <li>Pricing</li> <li>Blog</li> </ul> </nav> <a href="/">CONTACT</a> </header> JS Fiddle demo. References: align-items. display. justify-content. place-items. Bibliography: "Aligning items in a flex container," MDN. "Basic concepts of flexbox," MDN. "Box alignment in grid layout," MDN.
CSS border bottom on Navigation bar
I have a navigation bar and I added a red line on the bottom when hovering any item of the list, but I want to move that red line under the header (something like "Services"), any idea how to achieve this? I added an small sample in codepen so you can easily check the HTML and CSS code header { background-color: lightblue; padding-top: 1rem; position: sticky; top: 0; display: flex; align-items: center; justify-content: space-around; } header nav { min-width: 50%; } header nav ul { margin: 0; height: 100%; list-style: none; padding-left: 0; display: flex; align-items: center; justify-content: space-between; } header li:hover { height: 100%; border-bottom: 2px solid red; } <header> <a href="/"> <p>Whatever logo</p> </a> <nav> <ul> <li>About us</li> <li>Services</li> <li>Pricing</li> <li>Blog</li> </ul> </nav> <a href="/">CONTACT</a> </header> Link to check the code
[ "You can fix the header height and also fix the height of navbar items.\nAlso, you had one issue where on hover li elements are moving. You can also fix that with always adding border with transparent color to the element, so the overall height of the element won't change on hover state.\nHere is the fixed CSS\nheader {\n background-color: lightblue;\n position: sticky;\n display: flex;\n height: 60px;\n align-items: center;\n justify-content: space-around;\n}\n\nheader nav {\n min-width: 50%;\n}\n\nheader nav ul {\n margin: 0;\n height: 100%;\n list-style: none;\n padding-left: 0;\n display: flex;\n align-items: center;\n justify-content: space-between;\n height: 60px;\n}\n\nheader li {\n display: flex;\n align-items: center;\n border-bottom: 2px solid transparent;\n height: 60px;\n}\n\nheader li:hover {\n border-bottom: 2px solid red;\n}\n\n\nhttps://codepen.io/swarajgk/pen/JjZewPo?editors=1100\n", "I think just giving height to all list elements the same as the header will work.\nLike this:-\n\n\nheader {\n background-color: lightblue;\n padding-top: 1rem;\n height: 3rem;\n position: sticky;\n top: 0;\n display: flex;\n align-items: center;\n justify-content: space-around;\n}\n\nheader nav {\n min-width: 50%;\n height : 100%;\n}\n\nheader nav ul {\n margin: 0;\n height: 100%;\n list-style: none;\n padding-left: 0;\n display: flex;\n align-items: center;\n justify-content: space-between;\n}\n\nheader li{\n height: inherit;\n}\n\nheader li:hover {\n border-bottom: 2px solid red;\n}\n <body>\n <header>\n <a href=\"/\"\n ><p>Whatever logo</p></a>\n <nav>\n <ul>\n <li>About us</li>\n <li>Services</li>\n <li>Pricing</li>\n <li>Blog</li>\n </ul>\n </nav>\n <a href=\"/\">CONTACT</a>\n </header>\n </body>\n\n\n\n", "Hope this solves the issue.\n\n\nheader {\n background-color: lightblue;\n padding-top: 1rem;\n height: 3rem;\n position: sticky;\n top: 0;\n display: flex;\n align-items: center;\n justify-content: space-around;\n}\n\nheader nav {\n min-width: 50%;\n 
height : 100%;\n}\n\nheader nav ul {\n margin: 0;\n height: 100%;\n list-style: none;\n padding-left: 0;\n display: flex;\n align-items: center;\n justify-content: space-between;\n}\n\nheader li{\n height: inherit;\n}\n\nheader li:hover {\n border-bottom: 2px solid red;\n}\n\n\n\n", "I'd suggest the following approach, with explanatory comments in the CSS:\n\n\n/* removing default padding and margin from all\n elements, and forcing the browser to use the\n same sizing algorithm - border-box - to calculate\n element sizes, including the padding and border\n widths in the declared size: */\n*, ::before, ::after {\n box-sizing: border-box;\n padding: 0;\n margin: 0;\n}\n\n/* setting common properties for the two element\n groups: */\nheader,\nheader nav ul {\n /* using display: flex layout: */\n display: flex;\n /* forcing the flex-items within the flex parent\n to take the full height of that parent: */\n align-items: stretch;\n}\n\nheader {\n background-color: lightblue;\n block-size: 3em;\n position: sticky;\n justify-content: space-around;\n}\n\n/* using :is() to combine the two selectors\n header a,\n header li\n into one selector: */\nheader :is(a, li) {\n /* using grid layout: */\n display: grid;\n /* positioning the - including text - content\n at the center of the element: */\n place-items: center;\n}\n\nheader nav {\n min-width: 50%;\n}\n\nheader nav ul {\n /* the <ul> isn't a flex-item so we have to specify\n that we want it to take all available space on \n the block-axis (equivalent to 'height' in left-to-right\n languages such as English): */\n block-size: 100%;\n list-style: none;\n justify-content: space-between;\n}\n\nheader li {\n /* to prevent the jumping content: */\n border-bottom: 2px solid transparent;\n}\n\nheader li:hover {\n /* to style the color of the bottom border: */\n border-bottom-color: red;\n}\n<header>\n <a href=\"/\">\n <p>Whatever logo</p>\n </a>\n <nav>\n <ul>\n <li>About us</li>\n <li>Services</li>\n <li>Pricing</li>\n 
<li>Blog</li>\n </ul>\n </nav>\n <a href=\"/\">CONTACT</a>\n</header>\n\n\n\nJS Fiddle demo.\nReferences:\n\nalign-items.\ndisplay.\njustify-content.\nplace-items.\n\nBibliography:\n\n\"Aligning items in a flex container,\" MDN.\n\"Basic concepts of flexbox,\" MDN.\n\"Box alignment in grid layout,\" MDN.\n\n" ]
[ 1, 0, 0, 0 ]
[]
[]
[ "css", "html" ]
stackoverflow_0074675862_css_html.txt
Q: How can we resolve JAVA command line error (FatalError: Program Stack empty)? I am new to java and I ran below command in the command line. C:\Users\kranji1\Desktop\New\RES\benchmark>Java -jar C:\Users\kranji1\Desktop\New\RES\RES.jar C:\Users\kranji1\Desktop\cob\F9342ED0.COB But I am getting below error. could someone please suggest. RES Cobol 2 Java alpha1.9(08/05/2010) - COPYRIGHT 2009 C:\Users\kranji1\Desktop\cob\F9342ED0.COB Parsing Cobol started for: C:\Users\kranji1\Desktop\cob\F9342ED0.COB Translation to Java started. The java classes are under the folder: C:\Users\kranji1\Desktop\New\RES\benchmark Classes from translation of programs reside in the package: cobolprogramclasses Classes from translation of data levels reside in the package: coboldataclasses FatalError: Program Stack empty Done. I ran below command on the command line. C:\Users\kranji1\Desktop\New\RES\benchmark>Java -jar C:\Users\kranji1\Desktop\New\RES\RES.jar C:\Users\kranji1\Desktop\cob\F9342ED0.COB C:\Users\kranji1\Desktop\New\RES\benchmark>Java -jar C:\Users\kranji1\Desktop\New\RES\RES.jar C:\Users\kranji1\Desktop\cob\F9342ED0.COB The command convert cobol code to java. Could you please suggest any other ways can we migrate COBOL to Java. A: This looks like the Java code is running correctly, but it's hitting an internal error. Maybe it couldn't find the directory with the Java sources. C:\Users\kranji1\Desktop\cob\F9342ED0.COB Translation to Java started. The java classes are under the folder: C:\Users\kranji1\Desktop\New\RES\benchmark Classes from translation of programs reside in the package: cobolprogramclasses Classes from translation of data levels reside in the package: coboldataclasses FatalError: Program Stack empty Done. Unless you have the sources to that JAR file, and post them in your question, you'll have to talk to whoever provided the Cobol-to-Java translation JAR. 
A: There is a COBOL to Java tool in the translators.zip at: https://agilemde.co.uk/ It covers mainly COBOL 74 and the main forms of statement, and can be extended/customised.
How can we resolve JAVA command line error (FatalError: Program Stack empty)?
I am new to java and I ran below command in the command line. C:\Users\kranji1\Desktop\New\RES\benchmark>Java -jar C:\Users\kranji1\Desktop\New\RES\RES.jar C:\Users\kranji1\Desktop\cob\F9342ED0.COB But I am getting below error. could someone please suggest. RES Cobol 2 Java alpha1.9(08/05/2010) - COPYRIGHT 2009 C:\Users\kranji1\Desktop\cob\F9342ED0.COB Parsing Cobol started for: C:\Users\kranji1\Desktop\cob\F9342ED0.COB Translation to Java started. The java classes are under the folder: C:\Users\kranji1\Desktop\New\RES\benchmark Classes from translation of programs reside in the package: cobolprogramclasses Classes from translation of data levels reside in the package: coboldataclasses FatalError: Program Stack empty Done. I ran below command on the command line. C:\Users\kranji1\Desktop\New\RES\benchmark>Java -jar C:\Users\kranji1\Desktop\New\RES\RES.jar C:\Users\kranji1\Desktop\cob\F9342ED0.COB C:\Users\kranji1\Desktop\New\RES\benchmark>Java -jar C:\Users\kranji1\Desktop\New\RES\RES.jar C:\Users\kranji1\Desktop\cob\F9342ED0.COB The command convert cobol code to java. Could you please suggest any other ways can we migrate COBOL to Java.
[ "This looks like the Java code is running correctly, but it's hitting an internal error. Maybe it couldn't find the directory with the Java sources.\nC:\\Users\\kranji1\\Desktop\\cob\\F9342ED0.COB Translation to Java started. The java classes are under the folder: C:\\Users\\kranji1\\Desktop\\New\\RES\\benchmark Classes from translation of programs reside in the package: cobolprogramclasses Classes from translation of data levels reside in the package: coboldataclasses FatalError: Program Stack empty Done.\n\nUnless you have the sources to that JAR file, and post them in your question, you'll have to talk to whoever provided the Cobol-to-Java translation JAR.\n", "There is a COBOL to Java tool in the translators.zip at: https://agilemde.co.uk/\nIt covers mainly COBOL 74 and the main forms of statement, and can be extended/customised.\n" ]
[ 0, 0 ]
[]
[]
[ "java" ]
stackoverflow_0056874967_java.txt
Q: Can I safely ignore this md5_file() "access denied" error notice? I am ensuring my files are written before program moves on like this: (pseudocodish) ... clearstatcache(); $oldHash=md5_file($file); if (file_put_contentes($file,$string)) { $written=false; while(!$written) { clearstatcache(); $newHash=md5_file($file); if($newHash!=$oldHash) { $written=true;//weee } } } Well, I'm getting a "Notice: md5_file(): read of 8192 bytes failed with errno=13 Permission denied in ..." Which I think is the "system" still writing the file and not letting anyone mess with it. (Is it?). The question though: Can I (safely) just ignore this with a @? Another question, do I need the two clearstatcache's? Thank you all in advance. I also want to congratulate you if you know such things. Scares me a bit. PS. I was wondering if the "permission denied" was a bug of mine, something else I did locking that file - though unlikely... - and not the system.
Can I safely ignore this md5_file() "access denied" error notice?
I am ensuring my files are written before program moves on like this: (pseudocodish) ... clearstatcache(); $oldHash=md5_file($file); if (file_put_contentes($file,$string)) { $written=false; while(!$written) { clearstatcache(); $newHash=md5_file($file); if($newHash!=$oldHash) { $written=true;//weee } } } Well, I'm getting a "Notice: md5_file(): read of 8192 bytes failed with errno=13 Permission denied in ..." Which I think is the "system" still writing the file and not letting anyone mess with it. (Is it?). The question though: Can I (safely) just ignore this with a @? Another question, do I need the two clearstatcache's? Thank you all in advance. I also want to congratulate you if you know such things. Scares me a bit. PS. I was wondering if the "permission denied" was a bug of mine, something else I did locking that file - though unlikely... - and not the system.
[]
[]
[ "This is a stupid question.\n\nI am ensuring my files are written before program moves on like this:\n\n@Tangentially Perpendicular correctly points out:\n\nIf you have some reason not to trust PHP and your operating system to write the file reliably, perhaps your effort should go into fixing that.\n\nThere's absolutely no reason to not trust file_put_contents to write the file to disk.\n\nWhich I think is the \"system\" still writing the file and not letting anyone mess with it. (Is it?).\n\nMost definitely that's not the case. Depending on the OS (e.g. linux) multiple concurrent writes to the same file won't result in a permission denied error.\nIf you want to make sure that writes are committed to disk you should look into fsync.\nAlso, clearstatcache has NO effect on md5_file.\n\nI was wondering if the \"permission denied\" was a bug of mine, something else I did locking that file - though unlikely... - and not the system.\n\nMost locking mechanism rely on an advisory basis. This means a file lock isn't enforced by the kernel (you can write to a file that is locked by another program). It's ADVISORY so programs accessing that file must use some kind of interface (flock) in order to prevent two programs accessing (writing) the same file at the same time.\nI suggest you take a few steps back and to try to formulate what you're really trying to achieve here. See X/Y Problem.\n" ]
[ -1 ]
[ "md5_file", "php" ]
stackoverflow_0074677402_md5_file_php.txt
Q: CIDC with BitBucket, Docker Image and Azure EDITED I am learning CICD and Docker. So far I have managed to successfully create a docker image using the code below: Dockerfile # Docker Operating System FROM python:3-slim-buster # Keeps Python from generating .pyc files in the container ENV PYTHONDONTWRITEBYTECODE=1 # Turns off buffering for easier container logging ENV PYTHONUNBUFFERED=1 #App folder on Slim OS WORKDIR /app # Install pip requirements COPY requirements.txt requirements.txt RUN python -m pip install --upgrade pip pip install -r requirements.txt #Copy Files to App folder COPY . /app docker-compose.yml version: '3.4' services: web: build: . command: python manage.py runserver 0.0.0.0:8000 ports: - 8000:8000 My code is on BitBucket and I have a pipeline file as follows: bitbucket-pipelines.yml image: atlassian/default-image:2 pipelines: branches: master: - step: name: Build And Publish To Azure services: - docker script: - docker login -u $AZURE_USER -p $AZURE_PASS xxx.azurecr.io - docker build -t xxx.azurecr.io . - docker push xxx.azurecr.io With xxx being the Container registry on Azure. When the pipeline job runs I am getting denied: requested access to the resource is denied error on BitBucket. What did I not do correctly? Thanks. The Edit Changes in docker-compose.yml and bitbucket-pipeline.yml docker-compose.yml version: '3.4' services: web: build: . image: xx.azurecr.io/myticket container_name: xx command: python manage.py runserver 0.0.0.0:80 ports: - 80:80 bitbucket-pipelines.yml image: atlassian/default-image:2 pipelines: branches: master: - step: name: Build And Publish To Azure services: - docker script: - docker login -u $AZURE_USER -p $AZURE_PASS xx.azurecr.io - docker build -t xx.azurecr.io/xx . - docker push xx.azurecr.io/xx A: You didnt specify CMD or ENTRYPOINT in your dockerfile. There are stages when building a dockerfile Firstly you call an image, then you package your requirements etc.. 
those are stages that are executed while the container image is being built. You are missing the last stage, which tells Docker what command to run inside the container once it's up. A CMD or ENTRYPOINT instruction provides that startup command. For it to work you must add a CMD or ENTRYPOINT at the bottom of your Dockerfile. Change your files accordingly and try again. Dockerfile # Docker Operating System FROM python:3-slim-buster # Keeps Python from generating .pyc files in the container ENV PYTHONDONTWRITEBYTECODE=1 # Turns off buffering for easier container logging ENV PYTHONUNBUFFERED=1 #App folder on Slim OS WORKDIR /app # Install pip requirements COPY requirements.txt requirements.txt RUN python -m pip install --upgrade pip && pip install -r requirements.txt #Copy Files to App folder COPY . /app # Execute commands inside the container CMD python manage.py runserver 0.0.0.0:8000 Check you are able to build and run the image by going to its directory and running docker build -t app . docker run -d -p 80:80 app docker ps See if your container is running. Next, update the image property in the docker-compose file. Prefix the image name with the login server name of your Azure container registry, .azurecr.io. For example, if your registry is named myregistry, the login server name is myregistry.azurecr.io (all lowercase), and the image property is then myregistry.azurecr.io/azure-vote-front. Change the ports mapping to 80:80. Save the file. The updated file should look similar to the following: docker-compose.yml version: '3' services: foo: build: . image: foo.azurecr.io/atlassian/default-image:2 container_name: foo ports: - "80:80" By making these substitutions, the image you build is tagged for your Azure container registry, and the image can be pulled to run in Azure Container Instances. More in the documentation
CIDC with BitBucket, Docker Image and Azure
EDITED I am learning CICD and Docker. So far I have managed to successfully create a docker image using the code below: Dockerfile # Docker Operating System FROM python:3-slim-buster # Keeps Python from generating .pyc files in the container ENV PYTHONDONTWRITEBYTECODE=1 # Turns off buffering for easier container logging ENV PYTHONUNBUFFERED=1 #App folder on Slim OS WORKDIR /app # Install pip requirements COPY requirements.txt requirements.txt RUN python -m pip install --upgrade pip pip install -r requirements.txt #Copy Files to App folder COPY . /app docker-compose.yml version: '3.4' services: web: build: . command: python manage.py runserver 0.0.0.0:8000 ports: - 8000:8000 My code is on BitBucket and I have a pipeline file as follows: bitbucket-pipelines.yml image: atlassian/default-image:2 pipelines: branches: master: - step: name: Build And Publish To Azure services: - docker script: - docker login -u $AZURE_USER -p $AZURE_PASS xxx.azurecr.io - docker build -t xxx.azurecr.io . - docker push xxx.azurecr.io With xxx being the Container registry on Azure. When the pipeline job runs I am getting denied: requested access to the resource is denied error on BitBucket. What did I not do correctly? Thanks. The Edit Changes in docker-compose.yml and bitbucket-pipeline.yml docker-compose.yml version: '3.4' services: web: build: . image: xx.azurecr.io/myticket container_name: xx command: python manage.py runserver 0.0.0.0:80 ports: - 80:80 bitbucket-pipelines.yml image: atlassian/default-image:2 pipelines: branches: master: - step: name: Build And Publish To Azure services: - docker script: - docker login -u $AZURE_USER -p $AZURE_PASS xx.azurecr.io - docker build -t xx.azurecr.io/xx . - docker push xx.azurecr.io/xx
[ "You didnt specify CMD or ENTRYPOINT in your dockerfile.\nThere are stages when building a dockerfile\nFirstly you call an image, then you package your requirements etc.. those are stages that are being executed while the container is building. you are missing the last stage that executes a command inside the container when its already up.\nBoth ENTRYPOINT and CMD are essential for building and running Dockerfiles.\nfor it to work you must add a CMD or ENTRYPOINT at the bottom of your dockerfile..\nChange your files accordingly and try again.\nDockerfile\n# Docker Operating System\nFROM python:3-slim-buster\n\n# Keeps Python from generating .pyc files in the container\nENV PYTHONDONTWRITEBYTECODE=1\n\n# Turns off buffering for easier container logging\nENV PYTHONUNBUFFERED=1\n\n#App folder on Slim OS\nWORKDIR /app\n\n# Install pip requirements\nCOPY requirements.txt requirements.txt\nRUN python -m pip install --upgrade pip pip install -r requirements.txt\n\n#Copy Files to App folder\nCOPY . /app\n\n# Execute commands inside the container\nCMD manage.py runserver 0.0.0.0:8000\n\nCheck you are able to build and run the image by going to its directory and running\ndocker build -t app .\ndocker run -d -p 80:80 app\ndocker ps\nSee if your container is running.\nNext\nUpdate the image property in the docker-compose file.\nPrefix the image name with the login server name of your Azure container registry, .azurecr.io. For example, if your registry is named myregistry, the login server name is myregistry.azurecr.io (all lowercase), and the image property is then myregistry.azurecr.io/azure-vote-front.\nChange the ports mapping to 80:80. 
Save the file.\nThe updated file should look similar to the following:\ndocker-compose.yml\nCopy\nversion: '3'\nservices:\n foo:\n build: .\n image: foo.azurecr.io/atlassian/default-image:2\n container_name: foo\n ports:\n - \"80:80\"\n\nBy making these substitutions, the image you build is tagged for your Azure container registry, and the image can be pulled to run in Azure Container Instances.\nMore in documentation\n" ]
[ 0 ]
[]
[]
[ "azure", "bitbucket", "django", "docker" ]
stackoverflow_0074677827_azure_bitbucket_django_docker.txt
Q: Getting realtime notification whenever a new friend request is received MERN Stack using Socket.io I am trying to add a functionality in my web app where whenever a new friend request is received in the database (mongodb) then i get a notification through from backend (Node.js) to my frontend (React.js) Now i researched about this functionality and get to know about socket.io but the problem is the solutions i found which were using socket.io were kind of a brute force according to me , In those solutions they were querying the database inside the socket.emit(), Now according to me if I keep querying the database every 4-5 seconds is it a good approach to do that doesn't it put load on database? What is the right way to do this? What i have tried so far is finding a better solution than querying the database again and again till i get an update. But i had no luck .. A: The best approach is to connect frontend with backend using websocket/socket.io and as soon as you add a new object the server should push the data to frontend. You don't have to run a database query every 4-5 second. Write a server push event in your data.save() function. So as soon as you create a new object, the backend sends data to frontend.
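A note on the push-versus-poll pattern described in the answer: the shape (clients register once, the save routine notifies them) is language-agnostic. The sketch below is a toy Python observer with illustrative names only — it stands in for the Node/Socket.IO code, where the callback would be a `socket.emit` call rather than a plain function:

```python
# Hypothetical sketch of "push on save" -- not the poster's MERN code.
class FriendRequestStore:
    def __init__(self):
        self._requests = []      # stands in for the MongoDB collection
        self._subscribers = []   # stands in for connected sockets

    def subscribe(self, callback):
        # A connected client registers once (in Socket.IO: on connection).
        self._subscribers.append(callback)

    def save(self, request):
        # Persist first -- the database is touched exactly once, here...
        self._requests.append(request)
        # ...then push to every subscriber. No 4-5 second polling loop.
        for notify in self._subscribers:
            notify(request)

received = []
store = FriendRequestStore()
store.subscribe(received.append)        # in Socket.IO: socket.emit(...)
store.save({"from": "alice", "to": "bob"})
print(received)  # the subscriber saw the new request immediately
```

The point of the pattern is that notification cost is proportional to writes, not to elapsed time, which is why it avoids the repeated-query load the question worries about.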
Getting realtime notification whenever a new friend request is received MERN Stack using Socket.io
I am trying to add a functionality in my web app where whenever a new friend request is received in the database (mongodb) then i get a notification through from backend (Node.js) to my frontend (React.js) Now i researched about this functionality and get to know about socket.io but the problem is the solutions i found which were using socket.io were kind of a brute force according to me , In those solutions they were querying the database inside the socket.emit(), Now according to me if I keep querying the database every 4-5 seconds is it a good approach to do that doesn't it put load on database? What is the right way to do this? What i have tried so far is finding a better solution than querying the database again and again till i get an update. But i had no luck ..
[ "The best approach is to connect frontend with backend using websocket/socket.io and as soon as you add a new object the server should push the data to frontend. You don't have to run a database query every 4-5 second. Write a server push event in your data.save() function. So as soon as you create a new object, the backend sends data to frontend.\n" ]
[ 0 ]
[]
[]
[ "express", "mongodb", "node.js", "reactjs", "socket.io" ]
stackoverflow_0074677614_express_mongodb_node.js_reactjs_socket.io.txt
Q: Recyclerlistview error : "Cannot call a class as a function" Can anyone help me? I have this problem in React Native :( Recyclerlistview error : "Cannot call a class as a function" https://snack.expo.dev/@mmdrezaaramideh/courageous-truffle I tried every way I could think of.do You have a suggestion? A: Your error is here line 40, replace: dim.width = Dimensions('window').width; with dim.width = Dimensions.get('window').width;
Recyclerlistview error : "Cannot call a class as a function"
Can anyone help me? I have this problem in React Native :( Recyclerlistview error : "Cannot call a class as a function" https://snack.expo.dev/@mmdrezaaramideh/courageous-truffle I tried every way I could think of.do You have a suggestion?
[ "Your error is here line 40, replace:\ndim.width = Dimensions('window').width;\n\nwith\ndim.width = Dimensions.get('window').width;\n\n" ]
[ 0 ]
[]
[]
[ "javascript", "react_native", "reactjs" ]
stackoverflow_0074666363_javascript_react_native_reactjs.txt
Q: Checking the type of variable in Jinja2 I want to check the type of variable in Jinja2. If it is type of variable is dictionary then I have to print some text in the paragraph and if it's not dict then I have to print some other values. What I tried here is {% if {{result}} is dict %} <tr> <td> <p> The details are not here </p> </td> </tr> {% else %} {% for each_value in result %} <tr> <td>each_value.student_name</td> </tr> {% endfor %} {% endif %} The result I get is two different ways one is of dict type I.result={'student_name':'a','student_id':1,'student_email':'my_name@gmail.com'} the another format of result is II.result=[{'student_name':'b','student_id':2,'student_email':'my_nameb@gmail.com','time':[{'st':1,'et':2},{'st':3,'et':4}]}] Expected result If I get the format 'I' then the if loop should get execute. If I get the format 'II' then the else loop should get execute. Actual result jinja2.exceptions.TemplateSyntaxError: expected token ':', got '}' A: You should replace {% if {{result}} is dict %} with {% if result is mapping %}. Reference A: Alternative, and possibly better solutions: {% if result.__class__.__name__ == "dict" %} or add isinstance to Jinja context, and then {% if isinstance(result, dict) %}
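The dispatch the template performs can be checked outside Jinja with plain `isinstance` — the same function the second answer suggests exposing to the template context. A small illustrative sketch against both result shapes from the question:

```python
# Both payload shapes from the question.
result_dict = {"student_name": "a", "student_id": 1,
               "student_email": "my_name@gmail.com"}
result_list = [{"student_name": "b", "student_id": 2,
                "student_email": "my_nameb@gmail.com",
                "time": [{"st": 1, "et": 2}, {"st": 3, "et": 4}]}]

def render(result):
    # Mirrors {% if result is mapping %} ... {% else %} ... {% endif %}
    if isinstance(result, dict):
        return "The details are not here"
    return [row["student_name"] for row in result]

print(render(result_dict))  # -> The details are not here
print(render(result_list))  # -> ['b']
```

Inside a template the built-in `mapping` test does this check without any extra context setup, which is why it is the usual first choice.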
Checking the type of variable in Jinja2
I want to check the type of variable in Jinja2. If it is type of variable is dictionary then I have to print some text in the paragraph and if it's not dict then I have to print some other values. What I tried here is {% if {{result}} is dict %} <tr> <td> <p> The details are not here </p> </td> </tr> {% else %} {% for each_value in result %} <tr> <td>each_value.student_name</td> </tr> {% endfor %} {% endif %} The result I get is two different ways one is of dict type I.result={'student_name':'a','student_id':1,'student_email':'my_name@gmail.com'} the another format of result is II.result=[{'student_name':'b','student_id':2,'student_email':'my_nameb@gmail.com','time':[{'st':1,'et':2},{'st':3,'et':4}]}] Expected result If I get the format 'I' then the if loop should get execute. If I get the format 'II' then the else loop should get execute. Actual result jinja2.exceptions.TemplateSyntaxError: expected token ':', got '}'
[ "You should replace {% if {{result}} is dict %} with {% if result is mapping %}.\nReference\n", "Alternative, and possibly better solutions:\n{% if result.__class__.__name__ == \"dict\" %}\nor add isinstance to Jinja context, and then\n{% if isinstance(result, dict) %}\n" ]
[ 1, 0 ]
[]
[]
[ "jinja2", "python" ]
stackoverflow_0058264079_jinja2_python.txt
Q: Linear Programming, max function (if statement) I have a value x, which is a combination of decision variables. I need to calculate a cost, which only triggers if x > 100. So cost = MAX(x - 100, 0) * 20. Is there any way to do this in linear programming? I've tried creating two binary variables (y1 & y2), in which y1 = 1 when x <= 100 & y2 = 1 when x > 100 & y1 + y2 = 1, from this website - https://uk.mathworks.com/matlabcentral/answers/693740-linear-programming-with-conditional-constraints. However, my excel solver is still giving non-linearity complaints... Any advice on how I can fix this? A: It is not possible to use linear programming to solve a problem with a cost function that has a piecewise linear structure, like the one you have described. This is because linear programming only allows for linear objective functions and constraints, and the piecewise nature of your cost function means that it is not a linear function. One way to approach this problem would be to use a mixed integer linear programming (MILP) solver, which allows for integer variables in the objective function and constraints. In your case, you could use a binary variable to represent whether or not the cost function should be applied, and then use this binary variable to control the application of the cost function in the objective function. Here is an example of how this could work: minimize cost = 20 * x * y subject to: x <= 100 * (1 - y) // x must be <= 100 if y is 0 x >= 100 * y // x must be >= 100 if y is 1 y in {0, 1} // y must be 0 or 1 In this example, the binary variable y is used to control whether or not the cost function is applied to x. When y is 0, the cost function is not applied, and the value of x is allowed to be anything less than or equal to 100. When y is 1, the cost function is applied, and the value of x must be greater than or equal to 100. 
You can find more information about mixed integer linear programming and how to solve MILP problems using a solver like Gurobi or CPLEX in the documentation for those solvers. A: It sounds like you are trying to model a conditional cost in a linear programming problem, which can be a challenging task. One approach to addressing this issue is to use binary variables to represent the different conditions and to use them to create constraints that ensure that only the appropriate cost is applied in each case. For example, in your case, you could create two binary variables, y1 and y2, to represent the two conditions (x <= 100 and x > 100) and create constraints to ensure that only one of these variables is active at a time. You could then use these variables to create the appropriate cost function for each case, as follows: cost = y1 * x + y2 * (MAX(x - 100, 0) * 20) This function would ensure that the appropriate cost is applied in each case, based on the value of the binary variables y1 and y2. You could then add constraints to ensure that y1 and y2 are mutually exclusive and that they sum to 1, as follows: y1 + y2 = 1 y1, y2 binary These constraints would ensure that only one of the two binary variables is active at a time, and they would help to enforce the desired behavior in your linear programming model.
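A side note on the modeling itself: when the term 20 * MAX(x - 100, 0) appears in a cost that is being minimized, it can be linearized without any binary variables via the standard epigraph trick. Introduce one continuous variable z with the linear constraints z >= x - 100 and z >= 0, and charge 20 * z in the objective; minimization drives z down to exactly the hinge value. Binaries like y1/y2 only become necessary when the objective would otherwise reward inflating the term (a maximization, or a negative coefficient). A small pure-Python check of the equivalence, with illustrative numbers:

```python
def hinge_cost(x, rate=20.0, threshold=100.0):
    # The cost as specified: it only triggers once x exceeds the threshold.
    return rate * max(x - threshold, 0.0)

def epigraph_cost(x, rate=20.0, threshold=100.0):
    # LP reformulation: add a variable z constrained by
    #   z >= x - threshold   and   z >= 0,
    # and put rate * z in the (minimized) objective.  The solver pushes z
    # down to the smallest feasible value, which is exactly the hinge.
    z = max(x - threshold, 0.0)  # smallest z satisfying both constraints
    assert z >= x - threshold and z >= 0.0
    return rate * z

for x in (0.0, 50.0, 100.0, 101.0, 250.0):
    assert hinge_cost(x) == epigraph_cost(x)
print("epigraph reformulation reproduces MAX(x - 100, 0) * 20")
```

In Excel's Solver the same idea means adding one extra decision cell for z, the two linear constraints above, and using 20 * z in the objective; the Simplex LP engine then accepts the model, since MAX() itself never appears in a constraint or objective formula.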
Linear Programming, max function (if statement)
I have a value x, which is a combination of decision variables. I need to calculate a cost, which only triggers if x > 100. So cost = MAX(x - 100, 0) * 20. Is there any way to do this in linear programming? I've tried creating two binary variables (y1 & y2), in which y1 = 1 when x <= 100 & y2 = 1 when x > 100 & y1 + y2 = 1, from this website - https://uk.mathworks.com/matlabcentral/answers/693740-linear-programming-with-conditional-constraints. However, my excel solver is still giving non-linearity complaints... Any advice on how I can fix this?
[ "It is not possible to use linear programming to solve a problem with a cost function that has a piecewise linear structure, like the one you have described. This is because linear programming only allows for linear objective functions and constraints, and the piecewise nature of your cost function means that it is not a linear function.\nOne way to approach this problem would be to use a mixed integer linear programming (MILP) solver, which allows for integer variables in the objective function and constraints. In your case, you could use a binary variable to represent whether or not the cost function should be applied, and then use this binary variable to control the application of the cost function in the objective function. Here is an example of how this could work:\nminimize cost = 20 * x * y\n\nsubject to:\n\nx <= 100 * (1 - y) // x must be <= 100 if y is 0\nx >= 100 * y // x must be >= 100 if y is 1\ny in {0, 1} // y must be 0 or 1\n\n\nIn this example, the binary variable y is used to control whether or not the cost function is applied to x. When y is 0, the cost function is not applied, and the value of x is allowed to be anything less than or equal to 100. When y is 1, the cost function is applied, and the value of x must be greater than or equal to 100.\nYou can find more information about mixed integer linear programming and how to solve MILP problems using a solver like Gurobi or CPLEX in the documentation for those solvers.\n", "It sounds like you are trying to model a conditional cost in a linear programming problem, which can be a challenging task. 
One approach to addressing this issue is to use binary variables to represent the different conditions and to use them to create constraints that ensure that only the appropriate cost is applied in each case.\nFor example, in your case, you could create two binary variables, y1 and y2, to represent the two conditions (x <= 100 and x > 100) and create constraints to ensure that only one of these variables is active at a time. You could then use these variables to create the appropriate cost function for each case, as follows:\ncost = y1 * x + y2 * (MAX(x - 100, 0) * 20)\nThis function would ensure that the appropriate cost is applied in each case, based on the value of the binary variables y1 and y2. You could then add constraints to ensure that y1 and y2 are mutually exclusive and that they sum to 1, as follows:\ny1 + y2 = 1\ny1, y2 binary\nThese constraints would ensure that only one of the two binary variables is active at a time, and they would help to enforce the desired behavior in your linear programming model.\n" ]
[ 1, 0 ]
[]
[]
[ "linear_programming" ]
stackoverflow_0074677849_linear_programming.txt
Q: How do I make this onClick function work for multiple dropdown menus? I've never tried Javascript before and looked around, but the tutorials I've found would take me weeks to figure out (attention/focus issues + I don't even know what words I want to search for) and none of the solutions I've searched for solved it, and I don't know enough to extrapolate it from other answers. Can someone give me an example of this code (from w3School) extended to also toggle more dropdown menus? It has to be usable with keyboard like this one is. Currently it's only handling the menu with an ID of "dropperso" and can open the Personal menu, I need the "openMenu" function to also react to the ID "dropsites" and be able to open the Other Sites menu. A note that the button and affected ID-having div are siblings. No JQuery please. JS: function openMenu() { document.getElementById("dropperso").classList.toggle("dropopen"); } HTML: <div class="dropdown"> <button onclick="openMenu()" class="drophover">Other Sites</button> <div id="dropsites" class="dropdown-content"> A link </div> </div> <div class="dropdown"> <button onclick="openMenu()" class="drophover">Personal</button> <div id="dropperso" class="dropdown-content" style="right: 0;"> A link A link </div> </div> All that the .dropopen css class does is change the display of .dropdown-content from none to block. I tried to search for my specific problem and all I found was either way beyond my ability to understand, "use JQuery" (I'm limited and can't use JQuery), or "use this other code (that doesn't work for mine)". It works if I just copy the whole thing and make one function for each menu, but I get the feeling that's kinda bad spaghetti coding, and I can't compress this on my own without an example that works to learn from. 
I'd be VERY grateful if you could solve that for me so I can use that later, and even MORE grateful if you could either explain how you made it work or link to the specific parts of documentation that explain what you're using. A: function openMenu(id) { document.getElementById(id).classList.toggle("dropopen"); } <button onclick="openMenu('dropsites')" class="drophover">Personal</button> <button onclick="openMenu('dropperso')" class="drophover">Personal</button> A: If you want to toggle them all with one click use querySelectors to get all the dropdown menus and toggle each of them like this : const dropdownMenus = document.querySelectorAll(".dropdown-content") for(const menu of dropdownMenus){ menu.classList.toggle("dropopen") } but if you want to toggle each of them with same function and not writing a function for each menu you can do like this : JS : function openMenu(id) { document.getElementById(id).classList.toggle("dropopen"); } HTML : <div class="dropdown"> <button onclick="openMenu('dropsites')" class="drophover">Other Sites</button> <div id="dropsites" class="dropdown-content"> A link </div> </div> <div class="dropdown"> <button onclick="openMenu('dropperso')" class="drophover">Personal</button> <div id="dropperso" class="dropdown-content" style="right: 0;"> A link A link </div> </div> A: <div class="drop-down flex-row-AI-center" data-dropdown> <button class="drop-btn" data-dropdownBtn>Categories</button> <div class="dropdown-content flex-col" data-dropdown-content> <a href="#action">Action </a> <a href="#adventure">Adventure</a> <a href="#anime">Anime</a> <a href="#comedy">Comedy</a> <a href="#thriller">Thriller</a> <a href="#fantasy">Fantasy</a> </div> </div> </div> You have to give all the dropdown same ids/class/dataset-attributes function toggleDropDown(e) { const isDropdownBtn = e.target.matches('[data-dropdownBtn]'); //as long as user clicking inside of dropdown it won't close if (!isDropdownBtn && e.target.closest('[data-dropdown]') != null) 
return; let currDropdown; if (isDropdownBtn) { currDropdown = e.target.closest('[data-dropdown]'); currDropdown.classList.add('active'); } document.querySelectorAll('[data-dropdown-content].active').forEach(dropdowm => { if(currDropdown === dropdowm) return dropdowm.classList.remove('active') }) }
How do I make this onClick function work for multiple dropdown menus?
I've never tried Javascript before and looked around, but the tutorials I've found would take me weeks to figure out (attention/focus issues + I don't even know what words I want to search for) and none of the solutions I've searched for solved it, and I don't know enough to extrapolate it from other answers. Can someone give me an example of this code (from w3School) extended to also toggle more dropdown menus? It has to be usable with keyboard like this one is. Currently it's only handling the menu with an ID of "dropperso" and can open the Personal menu, I need the "openMenu" function to also react to the ID "dropsites" and be able to open the Other Sites menu. A note that the button and affected ID-having div are siblings. No JQuery please. JS: function openMenu() { document.getElementById("dropperso").classList.toggle("dropopen"); } HTML: <div class="dropdown"> <button onclick="openMenu()" class="drophover">Other Sites</button> <div id="dropsites" class="dropdown-content"> A link </div> </div> <div class="dropdown"> <button onclick="openMenu()" class="drophover">Personal</button> <div id="dropperso" class="dropdown-content" style="right: 0;"> A link A link </div> </div> All that the .dropopen css class does is change the display of .dropdown-content from none to block. I tried to search for my specific problem and all I found was either way beyond my ability to understand, "use JQuery" (I'm limited and can't use JQuery), or "use this other code (that doesn't work for mine)". It works if I just copy the whole thing and make one function for each menu, but I get the feeling that's kinda bad spaghetti coding, and I can't compress this on my own without an example that works to learn from. I'd be VERY grateful if you could solve that for me so I can use that later, and even MORE grateful if you could either explain how you made it work or link to the specific parts of documentation that explain what you're using.
[ "function openMenu(id) {\n document.getElementById(id).classList.toggle(\"dropopen\");\n}\n\n<button onclick=\"openMenu('dropsites')\" class=\"drophover\">Personal</button>\n\n<button onclick=\"openMenu('dropperso')\" class=\"drophover\">Personal</button>\n\n", "If you want to toggle them all with one click use querySelectors to get all the dropdown menus and toggle each of them like this :\nconst dropdownMenus = document.querySelectorAll(\".dropdown-content\")\n\nfor(const menu of dropdownMenus){\n menu.classList.toggle(\"dropopen\")\n}\n\nbut if you want to toggle each of them with same function and not writing a function for each menu you can do like this :\nJS :\nfunction openMenu(id) {\n document.getElementById(id).classList.toggle(\"dropopen\");\n}\n\nHTML :\n <div class=\"dropdown\">\n <button onclick=\"openMenu('dropsites')\" class=\"drophover\">Other Sites</button>\n <div id=\"dropsites\" class=\"dropdown-content\">\n A link\n </div>\n </div>\n <div class=\"dropdown\">\n <button onclick=\"openMenu('dropperso')\" class=\"drophover\">Personal</button>\n <div id=\"dropperso\" class=\"dropdown-content\" style=\"right: 0;\">\n A link\n A link\n </div>\n </div>\n\n", " <div class=\"drop-down flex-row-AI-center\" data-dropdown>\n <button class=\"drop-btn\" data-dropdownBtn>Categories</button>\n <div class=\"dropdown-content flex-col\" data-dropdown-content>\n <a href=\"#action\">Action </a>\n <a href=\"#adventure\">Adventure</a>\n <a href=\"#anime\">Anime</a>\n <a href=\"#comedy\">Comedy</a>\n <a href=\"#thriller\">Thriller</a>\n <a href=\"#fantasy\">Fantasy</a>\n </div>\n </div>\n</div>\n\nYou have to give all the dropdown same ids/class/dataset-attributes\n function toggleDropDown(e) {\nconst isDropdownBtn = e.target.matches('[data-dropdownBtn]');\n\n//as long as user clicking inside of dropdown it won't close\nif (!isDropdownBtn && e.target.closest('[data-dropdown]') != null) return;\n\nlet currDropdown;\n\nif (isDropdownBtn) {\ncurrDropdown = 
e.target.closest('[data-dropdown]');\ncurrDropdown.classList.add('active');\n}\n\ndocument.querySelectorAll('[data-dropdown-content].active').forEach(dropdowm => {\nif(currDropdown === dropdowm) return\ndropdowm.classList.remove('active')\n })\n}\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "drop_down_menu", "html", "javascript" ]
stackoverflow_0074677621_drop_down_menu_html_javascript.txt
Q: Webpack: cannot resolve module 'file-loader' When I try to build SASS file with webpack, I got the following error: Module not found: Error:Cannot resolve module 'file-loader' note that this issue only happen when i try to load background image using relative path. this Work fine: background:url(http://localhost:8080/images/magnifier.png); this cause the issue: background:url(../images/magnifier.png); and this is my project structure images styles webpack.config.js and this is my webpack file: var path = require('path'); module.exports = { entry: { build: [ './scripts/app.jsx', 'webpack-dev-server/client?http://localhost:8080', 'webpack/hot/only-dev-server' ] }, output: { path: path.join(__dirname, 'public'), publicPath: 'http://localhost:8080/', filename: 'public/[name].js' }, module: { loaders: [ {test: /\.jsx?$/, loaders: ['react-hot', 'babel?stage=0'], exclude: /node_modules/}, {test: /\.scss$/, loaders: ['style', 'css', 'sass']}, {test: /\.(png|jpg)$/, loader: 'file-loader'}, {test: /\.(ttf|eot|svg|woff(2)?)(\?[a-z0-9]+)?$/, loader: 'file-loader'} ] }, resolve: { extensions: ['', '.js', '.jsx', '.scss', '.eot', '.ttf', '.svg', '.woff'], modulesDirectories: ['node_modules', 'scripts', 'images', 'fonts'] } }; A: As @silvenon said in his comment: Do you have file-loader installed? yes file-loader was installed but broken, and my issue has been solved by re-installing it. npm install --save-dev file-loader A: You may face a very similar issue if you are using url-loader with the limit configuration defined. As the documentation states, if the resource you are trying to load exceeds this limit, then file-loader will be used as fallback. Therefore, if you do not have file-loader installed, an error will prompt. To fix this error, set this limit to a bigger value or do not define it at all. { test: /\.(jpg|png|svg)$/, use: { loader: 'url-loader', options: { limit: 50000, // make sure this number is big enough to load your resource, or do not define it at all. 
} } } A: I had exactly the same issue and the following fixed it: loader: require.resolve("file-loader") + "?name=../[path][name].[ext]" A: Thanks for this - this was the final piece to get Bootstrap, d3js, Jquery, base64 inline images and my own badly written JS to play with webpack. To answer the question above and the solution to getting around the problematic Module not found: Error: Cannot resolve module 'url' When compiling bootstrap fonts was { test: /glyphicons-halflings-regular\.(woff2|woff|svg|ttf|eot)$/, loader:require.resolve("url-loader") + "?name=../[path][name].[ext]" } Thanks! A: If you are facing this issue while running jest, then add this in moduleNameMapper "ace-builds": "<rootDir>/node_modules/ace-builds" A: Error - ./node_modules/@fortawesome/fontawesome-free/css/all.min.cssdisabled. Error: Module not found: Can't resolve 'url-loader' Fixed by installing url-loader, ex: run 'npm install url-loader --save-dev'
Webpack: cannot resolve module 'file-loader'
When I try to build SASS file with webpack, I got the following error: Module not found: Error:Cannot resolve module 'file-loader' note that this issue only happen when i try to load background image using relative path. this Work fine: background:url(http://localhost:8080/images/magnifier.png); this cause the issue: background:url(../images/magnifier.png); and this is my project structure images styles webpack.config.js and this is my webpack file: var path = require('path'); module.exports = { entry: { build: [ './scripts/app.jsx', 'webpack-dev-server/client?http://localhost:8080', 'webpack/hot/only-dev-server' ] }, output: { path: path.join(__dirname, 'public'), publicPath: 'http://localhost:8080/', filename: 'public/[name].js' }, module: { loaders: [ {test: /\.jsx?$/, loaders: ['react-hot', 'babel?stage=0'], exclude: /node_modules/}, {test: /\.scss$/, loaders: ['style', 'css', 'sass']}, {test: /\.(png|jpg)$/, loader: 'file-loader'}, {test: /\.(ttf|eot|svg|woff(2)?)(\?[a-z0-9]+)?$/, loader: 'file-loader'} ] }, resolve: { extensions: ['', '.js', '.jsx', '.scss', '.eot', '.ttf', '.svg', '.woff'], modulesDirectories: ['node_modules', 'scripts', 'images', 'fonts'] } };
[ "As @silvenon said in his comment: \n\nDo you have file-loader installed?\n\nyes file-loader was installed but broken, and my issue has been solved by re-installing it.\nnpm install --save-dev file-loader\n", "You may face a very similar issue if you are using url-loader with the limit configuration defined. As the documentation states, if the resource you are trying to load exceeds this limit, then file-loader will be used as fallback. Therefore, if you do not have file-loader installed, an error will prompt. To fix this error, set this limit to a bigger value or do not define it at all. \n {\n test: /\\.(jpg|png|svg)$/,\n use: {\n loader: 'url-loader',\n options: {\n limit: 50000, // make sure this number is big enough to load your resource, or do not define it at all.\n }\n }\n }\n\n", "I has the exactly same issue and the following fixed it:\nloader: require.resolve(\"file-loader\") + \"?name=../[path][name].[ext]\"\n\n", "Thanks for this - this was the final piece to get\n Bootstrap, d3js, Jquery, base64 inline images and my own badly written JS to play with webpack.\nTo answer the question above and the solution to getting around the problematic\nModule not found: Error: Cannot resolve module 'url'\nWhen compiling bootstrap fonts was\n{ \ntest: /glyphicons-halflings-regular\\.(woff2|woff|svg|ttf|eot)$/,\nloader:require.resolve(\"url-loader\") + \"?name=../[path][name].[ext]\"\n}\n\nThanks!\n", "If you are facing this issue while running jest, then add this in moduleNameMapper\n\"ace-builds\": \"<rootDir>/node_modules/ace-builds\"\n\n", "Error - ./node_modules/@fortawesome/fontawesome-free/css/all.min.cssdisabled.\n Error: Module not found: Can't resolve 'url-loader' \n\nFixed by installing url-loader, ex:\nrun 'npm install url-loader --save-dev'\n" ]
[ 84, 6, 4, 1, 0, 0 ]
[]
[]
[ "sass", "webpack" ]
stackoverflow_0034352929_sass_webpack.txt
Q: I'm doing some basic conditional exercises, and I don't know what the % numbers mean in this code currentYear = int(input('Enter the year: ')) month = int(input('Enter the month: ')) if ((currentYear % 4) == 0 and (currentYear % 100) != 0 or (currentYear % 400) ==0): print('Leap Year') I have no idea what the % numbers in the brackets with the currentYear mean. I gather it has something to do with leap years, but how does it become %4, %100 or %400? I don't know what this is all about to be honest... A: The % symbol in Python is called the Modulo Operator. It returns the remainder of dividing the left-hand operand by the right-hand operand. It's used to get the remainder of a division problem. So 100 % 5 == 0 or 100 % 3 == 1 ---> Remainder equals 1
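Concretely: `year % 4` is the remainder after dividing `year` by 4, so `year % 4 == 0` reads as "the year is evenly divisible by 4". The full leap-year rule from the snippet, wrapped in a runnable function:

```python
def is_leap_year(year):
    # n % d is the remainder of n divided by d; a remainder of 0
    # means "evenly divisible by d".
    return (year % 4 == 0 and year % 100 != 0) or year % 400 == 0

assert is_leap_year(2024)      # divisible by 4, not by 100
assert not is_leap_year(1900)  # divisible by 100 but not by 400
assert is_leap_year(2000)      # divisible by 400
print(2023 % 4, 2024 % 4)      # remainders: 3 and 0
```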
I'm doing some basic conditional exercises, and I don't know what the % numbers mean in this code
currentYear = int(input('Enter the year: ')) month = int(input('Enter the month: ')) if ((currentYear % 4) == 0 and (currentYear % 100) != 0 or (currentYear % 400) ==0): print('Leap Year') I have no idea what the % numbers in the brackets with the currentYear mean. I gather it has something to do with leap years, but how does it become %4, %100 or %400? I don't know what this is all about to be honest...
[ "The % symbol in Python is called the Modulo Operator. It returns the remainder of dividing the left hand operand by right hand operand. It's used to get the remainder of a division problem.\nSo 100 % 5 == 0\nor\n100 % 3 == 1 ---> Remainder equals 1\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074677867_python.txt
Q: Django 2.0 website running on a Django 4.0 backend I am using an old version of Windows, Windows 7 to be precise, and it seems to only be compatible with Python 3.4, which supports Django 2.0, but Heroku doesn't support it anymore. So I want to know if I can manually edit the requirements to Django 4.0 and the required Python version on GitHub. I haven't yet tried anything as I am new to this. A: Consider using Docker to run your application: https://github.com/docker/awesome-compose/tree/master/official-documentation-samples/django/ This would allow you to set your Python and Django versions.
Django 2.0 website running on a Django 4.0 backend
I am using an old version of Windows, Windows 7 to be precise, and it seems to only be compatible with Python 3.4, which supports Django 2.0, but Heroku doesn't support it anymore. So I want to know if I can manually edit the requirements to Django 4.0 and the required Python version on GitHub. I haven't yet tried anything as I am new to this.
[ "Consider using docker to run your application:\nhttps://github.com/docker/awesome-compose/tree/master/official-documentation-samples/django/\nThis would allow you to set your python and Django version.\n" ]
[ 0 ]
[]
[]
[ "django", "github", "heroku", "python_3.x" ]
stackoverflow_0074677769_django_github_heroku_python_3.x.txt
Q: contextMenuConfigurationForItemsAtIndexPaths for UITableView? We know that iOS 13 adds the ContextMenu function to UITableView, we can use the following method to display the context menu of UITableViewCell: - (nullable UIContextMenuConfiguration *)tableView:(UITableView *)tableView contextMenuConfigurationForRowAtIndexPath:(NSIndexPath *)indexPath point:(CGPoint)point But how can we show a menu for multiple selected rows? Like the "Files" app? Starting with iOS 16, the UICollectionView added this: - (nullable UIContextMenuConfiguration *)collectionView:(UICollectionView *)collectionView contextMenuConfigurationForItemsAtIndexPaths:(NSArray<NSIndexPath *> *)indexPaths point:(CGPoint)point It's strange that UITableView doesn't have this function, did Apple forget to do it? :) Is there a way to achieve the same for UITableView ? Or am I missing something? Is there a way to achieve the same for UITableView? A: In iOS 13 and later, you can show a context menu for multiple selected rows in a UITableView by implementing the tableView(_:contextMenuConfigurationForRowAtIndexPath:point:) method and returning a UIContextMenuConfiguration object with the desired actions. 
Here is an example of how you could do this: - (nullable UIContextMenuConfiguration *)tableView:(UITableView *)tableView contextMenuConfigurationForRowAtIndexPath:(NSIndexPath *)indexPath point:(CGPoint)point { // create an array of selected index paths NSArray *selectedIndexPaths = [tableView indexPathsForSelectedRows]; // create a UIMenu with the desired actions UIMenu *menu = [UIMenu menuWithTitle:@"Actions" children:@[ [UIAction actionWithTitle:@"Action 1" image:nil identifier:nil handler:^(__kindof UIAction * _Nonnull action) { // handle action 1 }], [UIAction actionWithTitle:@"Action 2" image:nil identifier:nil handler:^(__kindof UIAction * _Nonnull action) { // handle action 2 }] ]]; // create a UIContextMenuConfiguration with the menu and the selected index paths UIContextMenuConfiguration *config = [UIContextMenuConfiguration configurationWithIdentifier:nil previewProvider:nil actionProvider:^UIMenu * _Nullable(NSArray<UIMenuElement *> * _Nonnull suggestedActions) { return menu; }]; return config; } In this example, the tableView(_:contextMenuConfigurationForRowAtIndexPath:point:) method creates an array of selected index paths, then creates a UIMenu with the desired actions, and finally returns a UIContextMenuConfiguration object with the UIMenu and the selected index paths. This should allow you to show a context menu for multiple selected rows in a UITableView. In iOS 14 and later, UITableView has gained a new method, tableView(:contextMenuConfigurationForRowAt:point:), which allows you to show a context menu for multiple selected rows in a more straightforward way. This method works similarly to the collectionView(:contextMenuConfigurationForItemsAtIndexPaths:point:) method that was added in iOS 16.
contextMenuConfigurationForItemsAtIndexPaths for UITableView?
We know that iOS 13 adds the ContextMenu function to UITableView, we can use the following method to display the context menu of UITableViewCell: - (nullable UIContextMenuConfiguration *)tableView:(UITableView *)tableView contextMenuConfigurationForRowAtIndexPath:(NSIndexPath *)indexPath point:(CGPoint)point But how can we show a menu for multiple selected rows? Like the "Files" app? Starting with iOS 16, the UICollectionView added this: - (nullable UIContextMenuConfiguration *)collectionView:(UICollectionView *)collectionView contextMenuConfigurationForItemsAtIndexPaths:(NSArray<NSIndexPath *> *)indexPaths point:(CGPoint)point It's strange that UITableView doesn't have this function, did Apple forget to do it? :) Is there a way to achieve the same for UITableView ? Or am I missing something? Is there a way to achieve the same for UITableView?
[ "In iOS 13 and later, you can show a context menu for multiple selected rows in a UITableView by implementing the tableView(_:contextMenuConfigurationForRowAtIndexPath:point:) method and returning a UIContextMenuConfiguration object with the desired actions. Here is an example of how you could do this:\n- (nullable UIContextMenuConfiguration *)tableView:(UITableView *)tableView contextMenuConfigurationForRowAtIndexPath:(NSIndexPath *)indexPath point:(CGPoint)point {\n // create an array of selected index paths\n NSArray *selectedIndexPaths = [tableView indexPathsForSelectedRows];\n\n // create a UIMenu with the desired actions\n UIMenu *menu = [UIMenu menuWithTitle:@\"Actions\"\n children:@[\n [UIAction actionWithTitle:@\"Action 1\" image:nil identifier:nil handler:^(__kindof UIAction * _Nonnull action) {\n // handle action 1\n }],\n [UIAction actionWithTitle:@\"Action 2\" image:nil identifier:nil handler:^(__kindof UIAction * _Nonnull action) {\n // handle action 2\n }]\n ]];\n\n // create a UIContextMenuConfiguration with the menu and the selected index paths\n UIContextMenuConfiguration *config = [UIContextMenuConfiguration configurationWithIdentifier:nil previewProvider:nil actionProvider:^UIMenu * _Nullable(NSArray<UIMenuElement *> * _Nonnull suggestedActions) {\n return menu;\n }];\n\n return config;\n}\n\nIn this example, the tableView(_:contextMenuConfigurationForRowAtIndexPath:point:) method creates an array of selected index paths, then creates a UIMenu with the desired actions, and finally returns a UIContextMenuConfiguration object with the UIMenu and the selected index paths. This should allow you to show a context menu for multiple selected rows in a UITableView.\nIn iOS 14 and later, UITableView has gained a new method, tableView(:contextMenuConfigurationForRowAt:point:), which allows you to show a context menu for multiple selected rows in a more straightforward way. 
This method works similarly to the collectionView(:contextMenuConfigurationForItemsAtIndexPaths:point:) method that was added in iOS 16.\n" ]
[ 0 ]
[]
[]
[ "contextmenu", "ios", "uitableview" ]
stackoverflow_0074677396_contextmenu_ios_uitableview.txt
Q: Why does training time not reduce when training a Keras model after increasing the batch size beyond a certain amount I am currently training an NLP model in Keras with TF 2.8 where I am experimenting by adding GRU and LSTM layers. When I train the model, I used different batch sizes to see the impact they had on the accuracy and overall training time. What I noticed was that after increasing the batch size beyond a certain amount the training time doesn't reduce; past that point it stayed the same. I started with a batch size of 2 then slowly increased up to 4096, trying powers of two, yet after 512 the training time remained the same.
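The plateau described above is consistent with a GPU that saturates: once every core is busy, a larger batch no longer shortens the per-step work, it just adds more "waves" of computation. A deliberately simplified toy timing model (the constants are invented for illustration, not measured):

```python
import math

def step_time(batch_size, cores=512, per_item=1.0, overhead=5.0):
    # Each step pays a fixed overhead (kernel launches, weight update)
    # plus one wave of compute per full set of occupied cores.
    waves = math.ceil(batch_size / cores)
    return overhead + waves * per_item

def epoch_time(n_samples, batch_size, **kwargs):
    steps = math.ceil(n_samples / batch_size)
    return steps * step_time(batch_size, **kwargs)

# Below saturation a bigger batch adds no per-step cost...
assert step_time(2) == step_time(512)
# ...so epoch time falls quickly at first, then the gains collapse.
for b in (2, 32, 512, 1024, 2048, 4096):
    print(f"batch {b:>4}: epoch time {epoch_time(4096, b):8.1f}")
```

In this sketch, going from batch 32 to 512 is a 16x speedup, while 2048 to 4096 saves almost nothing. Real hardware adds memory-bandwidth limits, and RNN layers (GRU/LSTM) still run O(timesteps) sequential work per step, both of which can flatten the curve even earlier than raw core count suggests.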
Why does training time not reduce when training a Keras model after increasing the batch size beyond a certain amount
I am currently training an NLP model in Keras with TF 2.8, where I am experimenting by adding GRU and LSTM layers. When I train the model, I used different batch sizes to see the impact they had on the accuracy and overall training time. What I noticed was that after increasing the batch size beyond a certain amount, the training time doesn't reduce; it stayed the same. I started with a batch size of 2, then slowly increased up to 4096 trying multiples of two, yet after 512 the training time remained the same.
[ "It's often wrongly mentioned that batch learning is as fast or faster than on-line training. In fact, batch-learning is changing the weights once, the complete set of data (the batch) has been presented to the network. Therefore, the weight update frequency is rather slow. This explains why the processing speed in your measurements acts like you observed.\nEven if its matrix operation, each row-colum multiplication might be happening on one gpu-core. So, full matrix multiplication is divided on as many cores as possible. For one matrix mul, each gpu-core takes some time, and when you add more images, that time increases, do more rows. If at batch size of 4, your gpu is already at full performance capacity, i.e. all cores are running, then increasing batch size is not going to give any advantage. Your added data just sits in gpu memory and is processed when an nvidia dice gets free of previous operation.\nTo get a further understanding for the training techniques, have a look at the 2003 paper The general inefficiency of batch training for gradient descent learning. It deals with the comparison of batch and on-line learning.\nAlso generally, RNN kernels can have O(timesteps) complexity, with batch size having a smaller effect than you might anticipate.\n" ]
[ 2 ]
[]
[]
[ "tensorflow2.0", "tf.keras" ]
stackoverflow_0074677891_tensorflow2.0_tf.keras.txt
Q: No compiler is provided in this environment. Perhaps you are running on a JRE rather than a JDK in IntelliJ I tried mvn clean install in IntelliJ, but it raises this error. The mvn -version command works fine in the IntelliJ terminal. A: It worked when I set JAVA_HOME and used the command prompt to execute mvn clean install. The problem with IntelliJ is that we can't add the bin path of the JDK. Is there a way to solve this? Image A: Run IntelliJ as administrator in case of Windows
No compiler is provided in this environment. Perhaps you are running on a JRE rather than a JDK in IntelliJ
I tried mvn clean install in IntelliJ, but it raises this error. The mvn -version command works fine in the IntelliJ terminal.
[ "It worked when I set JAVA_HOME and use command prompt for executing mvn clean install. The problem with the IntelliJ is we can't add bin path of the JDK. Is there a way to solve this? Image\n", "Run IntelliJ as administrator in case of windows\n" ]
[ 4, 0 ]
[]
[]
[ "intellij_idea", "maven", "maven_compiler_plugin" ]
stackoverflow_0048071278_intellij_idea_maven_maven_compiler_plugin.txt
Q: Render markdown files containing mermaid diagrams to a combined PDF file using mkdocs Currently, I'm using mkdocs-material to use mermaid diagrams, configured as follows (in mkdocs.yml): ... markdown_extensions: - pymdownx.superfences: custom_fences: - name: mermaid class: mermaid ... However, I encounter troubles with PDF exporting. I have tried several plugins. Most of them depend on WeasyPrint and have problems with the javascript parts or mermaid diagrams (which didn't render and stayed in code-block style). There is one plugin (mkdocs-pdf-with-js-plugin) that prints pages in an easy and simple way, using the browser to do the job. However, it doesn't have the combine feature (combining all pages into a single PDF file) that I need, as the mkdocs-pdf-export-plugin package does. Are there any other plugins that support exporting PDF with mermaid diagrams and a combine feature? A: My current workaround Run: ENABLE_PDF_EXPORT=1 mkdocs build. Each markdown file will be exported to a PDF file. Then, I will define the order of all PDFs when merging into one unique file by putting the PDF names from top to bottom: In chapters.txt: A.pdf B.pdf C.pdf ... Then run the following script. Remember that this script is just a hint of what I have done; it has not been completed yet and has not run "as is".
# ================================================================================================ # Move all pdfs from "site" (the output dir of pdf exporting) to the scripts/pdf_export/pdfs # ================================================================================================ find site -name "*.pdf" -exec mv {} scripts/pdf_export/pdfs \; cd scripts/pdf_export/pdfs # ================================================================================================ # Merge all pdfs into one single pdf file wrt the file name's order in chapters.txt # ================================================================================================ # REMEMBER to put the chapters.txt into scripts/pdf_export/pdfs. # Install: https://www.pdflabs.com/tools/pdftk-server/ # Install for M1 only: https://stackoverflow.com/a/60889993/6563277 to avoid the "pdftk: Bad CPU type in executable" on Mac pdftk $(cat chapters.txt) cat output book.pdf # ================================================================================================ # Add page numbers # ================================================================================================ # Count pages https://stackoverflow.com/a/27132157/6563277 pageCount=$(pdftk book.pdf dump_data | grep "NumberOfPages" | cut -d":" -f2) # Turn back to scripts/pdf_export cd .. # https://stackoverflow.com/a/30416992/6563277 # Create an overlay pdf file containing only page numbers gs -o pagenumbers.pdf \ -sDEVICE=pdfwrite \ -g5950x8420 \ -c "/Helvetica findfont \ 12 scalefont setfont \ 1 1 ${pageCount} { \ /PageNo exch def \ 450 20 moveto \ (Page ) show \ PageNo 3 string cvs \ show \ ( of ${pageCount}) show \ showpage \ } for" # Blend pagenumbers.pdf with the original pdf file pdftk pdfs/book.pdf \ multistamp pagenumbers.pdf \ output final_book.pdf However, we need other customization like table of contents, book cover, and author section, ... All the above steps are just merging and adding page nums! 
Lots of things to do.
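The chapters.txt ordering step in the script above can also be done in plain Python before handing the file list to pdftk. A small stdlib-only sketch (order_chapters is a hypothetical helper, not part of any mkdocs plugin):

```python
import os

def order_chapters(found_pdfs, chapter_order):
    """Sort discovered PDF paths by their basename's position in the
    chapters.txt list; anything not listed falls to the end, alphabetically."""
    rank = {name: i for i, name in enumerate(chapter_order)}
    return sorted(
        found_pdfs,
        key=lambda p: (rank.get(os.path.basename(p), len(rank)),
                       os.path.basename(p)),
    )

order = ["A.pdf", "B.pdf", "C.pdf"]   # the lines of chapters.txt, in order
found = ["site/x/C.pdf", "site/y/A.pdf", "site/z/B.pdf", "site/extra.pdf"]
print(order_chapters(found, order))
# -> ['site/y/A.pdf', 'site/z/B.pdf', 'site/x/C.pdf', 'site/extra.pdf']
```

Printing this list space-joined gives exactly the argument order that `pdftk $(cat chapters.txt) cat output book.pdf` expects, without having to move the files first.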
Render markdown files containing mermaid diagrams to a combined PDF file using mkdocs
Currently, I'm using mkdocs-material to use mermaid diagrams, configured as follows (in mkdocs.yml): ... markdown_extensions: - pymdownx.superfences: custom_fences: - name: mermaid class: mermaid ... However, I encounter troubles with PDF exporting. I have tried several plugins. Most of them depend on WeasyPrint and have problems with the javascript parts or mermaid diagrams (which didn't render and stayed in code-block style). There is one plugin (mkdocs-pdf-with-js-plugin) that prints pages in an easy and simple way, using the browser to do the job. However, it doesn't have the combine feature (combining all pages into a single PDF file) that I need, as the mkdocs-pdf-export-plugin package does. Are there any other plugins that support exporting PDF with mermaid diagrams and a combine feature?
[ "My current workaround\nRun: ENABLE_PDF_EXPORT=1 mkdocs build. Each markdown file will be exported to a PDF file.\nThen, I will define the order of all PDFs when merging into one unique file by putting the PDF name from top to bottom:\nIn chapters.txt:\nA.pdf\nB.pdf\nC.pdf\n...\n\nThen run the following script. Remember that this script is just a hint of what I have done, it has not been completed yet and has not run \"as is\".\n# ================================================================================================\n# Move all pdfs from \"site\" (the output dir of pdf exporting) to the scripts/pdf_export/pdfs\n# ================================================================================================\nfind site -name \"*.pdf\" -exec mv {} scripts/pdf_export/pdfs \\;\n\ncd scripts/pdf_export/pdfs\n\n# ================================================================================================\n# Merge all pdfs into one single pdf file wrt the file name's order in chapters.txt\n# ================================================================================================\n# REMEMBER to put the chapters.txt into scripts/pdf_export/pdfs.\n# Install: https://www.pdflabs.com/tools/pdftk-server/\n# Install for M1 only: https://stackoverflow.com/a/60889993/6563277 to avoid the \"pdftk: Bad CPU type in executable\" on Mac\npdftk $(cat chapters.txt) cat output book.pdf\n\n# ================================================================================================\n# Add page numbers\n# ================================================================================================\n# Count pages https://stackoverflow.com/a/27132157/6563277\npageCount=$(pdftk book.pdf dump_data | grep \"NumberOfPages\" | cut -d\":\" -f2)\n\n# Turn back to scripts/pdf_export\ncd ..\n\n# https://stackoverflow.com/a/30416992/6563277\n# Create an overlay pdf file containing only page numbers\ngs -o pagenumbers.pdf \\\n -sDEVICE=pdfwrite \\\n -g5950x8420 \\\n -c 
\"/Helvetica findfont \\\n 12 scalefont setfont \\\n 1 1 ${pageCount} { \\\n /PageNo exch def \\\n 450 20 moveto \\\n (Page ) show \\\n PageNo 3 string cvs \\\n show \\\n ( of ${pageCount}) show \\\n showpage \\\n } for\"\n\n# Blend pagenumbers.pdf with the original pdf file\npdftk pdfs/book.pdf \\\n multistamp pagenumbers.pdf \\\n output final_book.pdf\n\nHowever, we need other customization like table of contents, book cover, and author section, ... All the above steps are just merging and adding page nums! Lots of things to do.\n" ]
[ 0 ]
[]
[]
[ "mermaid", "mkdocs", "pdf", "python" ]
stackoverflow_0074602739_mermaid_mkdocs_pdf_python.txt
Q: Error when running watchman When I run react-native start, I am getting the following message Error: A non-recoverable condition has triggered. Watchman needs your help! The triggering condition was at timestamp=1489123194: inotify-add-watch(/var/www/html/eventManager/android/app/src/main/res/mipmap-mdpi) -> The user limit on the total number of inotify watches was reached; increase the fs.inotify.max_user_watches sysctl All requests will continue to fail with this message until you resolve the underlying problem. You will find more information on fixing this at https://facebook.github.io/watchman/docs/troubleshooting.html#poison-inotify-add-watch at ChildProcess.<anonymous> (/var/www/html/bookLister/node_modules/fb-watchman/index.js:207:21) at emitTwo (events.js:106:13) at ChildProcess.emit (events.js:191:7) at maybeClose (internal/child_process.js:852:16) at Socket.<anonymous> (internal/child_process.js:323:11) at emitOne (events.js:96:13) at Socket.emit (events.js:188:7) at Pipe._handle.close [as _onclose] (net.js:492:12) A: echo 256 | sudo tee -a /proc/sys/fs/inotify/max_user_instances echo 32768 | sudo tee -a /proc/sys/fs/inotify/max_queued_events echo 65536 | sudo tee -a /proc/sys/fs/inotify/max_user_watches echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf sudo sysctl -p watchman shutdown-server This one helped A: Just run these commands in terminal: echo 256 | sudo tee -a /proc/sys/fs/inotify/max_user_instances echo 32768 | sudo tee -a /proc/sys/fs/inotify/max_queued_events echo 65536 | sudo tee -a /proc/sys/fs/inotify/max_user_watches watchman shutdown-server Other Way make Script in package.json "scripts": { "start": "node node_modules/react-native/local-cli/cli.js start", "test": "jest", "flow": "flow", "flow-stop": "flow stop", "watch-need-help": "echo 256 | sudo tee -a /proc/sys/fs/inotify/max_user_instances && echo 32768 | sudo tee -a /proc/sys/fs/inotify/max_queued_events && echo 65536 | sudo tee -a 
/proc/sys/fs/inotify/max_user_watches && watchman shutdown-server" }, Run the following command in the terminal in the project directory npm run watch-need-help A: Increase the inotify limit to raise the number of files you can monitor. $ echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf $ sudo sysctl -p Please go through this for more info A: This one is also helpful. echo 999999 | sudo tee -a /proc/sys/fs/inotify/max_user_watches && echo 999999 | sudo tee -a /proc/sys/fs/inotify/max_queued_events && echo 999999 | sudo tee -a /proc/sys/fs/inotify/max_user_instances && watchman shutdown-server A: It works for me watchman watch-del-all watchman shutdown-server Here is the link which I followed. A: Updating watchman to the latest (4.7.0) version helped me solve this problem. A: This one helped; found it on GitHub issues echo 999999 | sudo tee -a /proc/sys/fs/inotify/max_user_watches && echo 999999 | sudo tee -a /proc/sys/fs/inotify/max_queued_events && echo 999999 | sudo tee -a /proc/sys/fs/inotify/max_user_instances && watchman shutdown-server && sudo sysctl -p https://github.com/facebook/watchman/issues/163 by @eladcandroid [1] A: You can solve it by trying one of the below solutions: first, run this line of code in your terminal and test it: echo 999999 | sudo tee -a /proc/sys/fs/inotify/max_user_watches && echo 999999 | sudo tee -a /proc/sys/fs/inotify/max_queued_events && echo 999999 | sudo tee -a /proc/sys/fs/inotify/max_user_instances && watchman shutdown-server or try to use "react-native run-android" or "run-ios" in the root of your project and then (in another terminal) "react-native start" otherwise perhaps the debugger page was left open from previous sessions. Closing the tab completely and then opening the debugger in a new tab solved the issue.
If none of the above solutions works for you, try restarting your PC. A: I faced the same problem even after reinstalling watchman using Homebrew; after deleting the pid, log and sock files, the following steps worked for me: touch pid && touch log && touch sock watchman --foreground --logfile=princeakoensi-state/log Allowed accessibility permissions: sudo chmod 700 /usr/local Closed all terminals and re-ran my commands. That should help.
Error when running watchman
When I run react-native start, I am getting the following message Error: A non-recoverable condition has triggered. Watchman needs your help! The triggering condition was at timestamp=1489123194: inotify-add-watch(/var/www/html/eventManager/android/app/src/main/res/mipmap-mdpi) -> The user limit on the total number of inotify watches was reached; increase the fs.inotify.max_user_watches sysctl All requests will continue to fail with this message until you resolve the underlying problem. You will find more information on fixing this at https://facebook.github.io/watchman/docs/troubleshooting.html#poison-inotify-add-watch at ChildProcess.<anonymous> (/var/www/html/bookLister/node_modules/fb-watchman/index.js:207:21) at emitTwo (events.js:106:13) at ChildProcess.emit (events.js:191:7) at maybeClose (internal/child_process.js:852:16) at Socket.<anonymous> (internal/child_process.js:323:11) at emitOne (events.js:96:13) at Socket.emit (events.js:188:7) at Pipe._handle.close [as _onclose] (net.js:492:12)
[ "echo 256 | sudo tee -a /proc/sys/fs/inotify/max_user_instances\necho 32768 | sudo tee -a /proc/sys/fs/inotify/max_queued_events\necho 65536 | sudo tee -a /proc/sys/fs/inotify/max_user_watches\necho fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf\nsudo sysctl -p\n\n\nwatchman shutdown-server\n\nThis one helped\n", "Just run these commands in terminal:\necho 256 | sudo tee -a /proc/sys/fs/inotify/max_user_instances\necho 32768 | sudo tee -a /proc/sys/fs/inotify/max_queued_events\necho 65536 | sudo tee -a /proc/sys/fs/inotify/max_user_watches\nwatchman shutdown-server \n\nOther Way make Script in package.json\n\"scripts\": {\n \"start\": \"node node_modules/react-native/local-cli/cli.js start\",\n \"test\": \"jest\",\n \"flow\": \"flow\",\n \"flow-stop\": \"flow stop\",\n \"watch-need-help\": \"echo 256 | sudo tee -a /proc/sys/fs/inotify/max_user_instances && echo 32768 | sudo tee -a /proc/sys/fs/inotify/max_queued_events && echo 65536 | sudo tee -a /proc/sys/fs/inotify/max_user_watches && watchman shutdown-server\"\n },\n\nRun following command on Terminal in project directory \nnpm run watch-need-help\n\n", "Increase inotify limit to increase the limit on the number of files you can monitor.\n$ echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf\n$ sudo sysctl -p\n\nPlease go through this for more info\n", "This One is Also Helpfull .\necho 999999 | sudo tee -a /proc/sys/fs/inotify/max_user_watches && echo 999999 | sudo tee -a /proc/sys/fs/inotify/max_queued_events && echo 999999 | sudo tee -a /proc/sys/fs/inotify/max_user_instances && watchman shutdown-server\n\n", "It works for me\nwatchman watch-del-all\nwatchman shutdown-server\n\nHere is link which I follow.\n", "updating \n\nwatchman\n\nto latest(4.7.0) version helped me solve this problem.\n", "this one helped found it on github issues\necho 999999 | sudo tee -a /proc/sys/fs/inotify/max_user_watches && echo 999999 | sudo tee -a /proc/sys/fs/inotify/max_queued_events && 
echo 999999 | sudo tee -a /proc/sys/fs/inotify/max_user_instances && watchman shutdown-server && sudo sysctl -p\n\nhttps://github.com/facebook/watchman/issues/163 by @eladcandroid [1]\n", "You can solve it by trying one of the below solutions:\nfirst pass hit line of code in your terminal and test it :\necho 999999 | sudo tee -a /proc/sys/fs/inotify/max_user_watches && echo 999999 | sudo tee -a /proc/sys/fs/inotify/max_queued_events && echo 999999 | sudo tee -a /proc/sys/fs/inotify/max_user_instances && watchman shutdown-server\n\nor try to use \"react-native run-android\" or \"run-ios\" in the root of your project and then (in other terminal) \"react-native start\"\notherwise perhaps the debugger page was left open from previous sessions. Closing the tab completely and then opening the debugger in a new tab solved the issue.\nif none of the above solution doesn't work with you try to restart your PC\n", "i faced the same problem even after reinstalling watchman using homebrew: after deleting the pid, log and sock files, the following steps worked for me:\n\ntouch pid && touch log && touch sock\nwatchman --foreground --logfile=princeakoensi-state/log\nAllowed accessibility permissions : sudo chmod 700 /usr/local\n-closed all terminals and re-runned my commands.\nThat should help.\n\n" ]
[ 46, 6, 5, 3, 3, 1, 1, 0, 0 ]
[]
[]
[ "react_native", "watchman" ]
stackoverflow_0042711008_react_native_watchman.txt
Q: How to drop rows of a column having float datatype and values less than 1 I am new to pandas and I have just started to learn how to analyze data. In order to explain my problem, consider this table as df.csv Name Age Height A 2 5.7 B 4 5.4 C 8 5.9 D 4 0.6 From this file, I want to drop the row that has Height less than 1, so that when I run this command, it deletes the specified row and shows me this: Name Age Height A 2 5.7 B 4 5.4 C 8 5.9 I wrote this command: dec = df[df['Height']<0.0].index df.drop(dec,inplace=true) df but it is showing me this: Name Age Height A 2 5.7 B 4 5.4 C 8 5.9 D 4 0.6 instead of: Name Age Height A 2 5.7 B 4 5.4 C 8 5.9 Is there a way to achieve this? A: dec = df[df['Height'] < 1.0].index df.drop(dec, inplace=True) True and False are written with capital letters in Python, and the check needs to be against 1, not 0.
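Equivalently, you can skip drop() altogether and keep only the rows you want with a boolean mask; a quick sketch against the sample table from the question:

```python
import pandas as pd

# Rebuild the sample table from the question.
df = pd.DataFrame({
    "Name": ["A", "B", "C", "D"],
    "Age": [2, 4, 8, 4],
    "Height": [5.7, 5.4, 5.9, 0.6],
})

# Boolean indexing keeps rows where the condition holds (Height >= 1),
# the mirror image of dropping rows where Height < 1.
kept = df[df["Height"] >= 1].reset_index(drop=True)
print(kept)
```

This avoids the intermediate .index/.drop round trip and the lowercase `true` pitfall entirely.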
How to drop rows of a column having float datatype and values less than 1
I am new to pandas and I have just started to learn how to analyze data. In order to explain my problem, consider this table as df.csv Name Age Height A 2 5.7 B 4 5.4 C 8 5.9 D 4 0.6 From this file, I want to drop the row that has Height less than 1, so that when I run this command, it deletes the specified row and shows me this: Name Age Height A 2 5.7 B 4 5.4 C 8 5.9 I wrote this command: dec = df[df['Height']<0.0].index df.drop(dec,inplace=true) df but it is showing me this: Name Age Height A 2 5.7 B 4 5.4 C 8 5.9 D 4 0.6 instead of: Name Age Height A 2 5.7 B 4 5.4 C 8 5.9 Is there a way to achieve this?
[ "dec = df[df['Height'] < 1.0].index\ndf.drop(dec, inplace=True)\n\nTrue and False are written in capital letters and the check is needed for 1 and not for 0.\n" ]
[ 0 ]
[]
[]
[ "csv", "dataframe", "pandas", "python" ]
stackoverflow_0074664750_csv_dataframe_pandas_python.txt
Q: Next.js "Text content did not match" I have component in Next.js which displays data pulled from a database, this is all fine. The problem occurs when I attempt to format the dates, I get the following warning Warning: Text content did not match. I roughly understand that it's to do with the client data being out of sync with the server data but I'm not sure the best approach to fix it. I have seen a solution using useEffect but my knowledge of this hook is still a little basic. My current thinking is to format the dates and then add them to the projects object, then they can be mapped out with the rest of the data, does that sound like a valid idea? Here's what I currently have: import { useState, useEffect } from 'react'; export default function ProjectList(props) { const [projects, setProjects] = useState(props.projects.data); // format the date function formatStartDate(startDate) { return Intl.DateTimeFormat('default', { month: 'short', day: '2-digit', hour: '2-digit', minute: '2-digit', }).format(new Date(startDate)); } useEffect(() => { setProjects(props.projects.data); }, [props]); return ( projects.length > 0 && ( <> {projects && projects.map((project) => ( <div key={project.id}> <h2> {project.attributes.project_name}</h2> <p>{formatStartDate(project.attributes.start_date)}</p> </div> ))} </> ) ); } Thanks! A: Yes, you are on the right track. On the useEffect, you need to format the dates and update the projects state. This ensures that the dates are correctly formatted on the server side and the client data is in sync with the server data. 
import { useState, useEffect } from "react"; export default function ProjectList(props) { const [projects, setProjects] = useState(props.projects.data); // format the date function formatStartDate(startDate) { return Intl.DateTimeFormat("default", { month: "short", day: "2-digit", hour: "2-digit", minute: "2-digit", }).format(new Date(startDate)); } useEffect(() => { // format the dates and update the projects state const formattedProjects = props.projects.data.map((project) => { return { ...project, attributes: { ...project.attributes, start_date: formatStartDate(project.attributes.start_date), }, }; }); setProjects(formattedProjects); }, [props]); return ( projects.length > 0 && ( <> {projects && projects.map((project) => ( <div key={project.id}> <h2> {project.attributes.project_name}</h2> <p>{project.attributes.start_date}</p> </div> ))} </> ) ); }
Next.js "Text content did not match"
I have component in Next.js which displays data pulled from a database, this is all fine. The problem occurs when I attempt to format the dates, I get the following warning Warning: Text content did not match. I roughly understand that it's to do with the client data being out of sync with the server data but I'm not sure the best approach to fix it. I have seen a solution using useEffect but my knowledge of this hook is still a little basic. My current thinking is to format the dates and then add them to the projects object, then they can be mapped out with the rest of the data, does that sound like a valid idea? Here's what I currently have: import { useState, useEffect } from 'react'; export default function ProjectList(props) { const [projects, setProjects] = useState(props.projects.data); // format the date function formatStartDate(startDate) { return Intl.DateTimeFormat('default', { month: 'short', day: '2-digit', hour: '2-digit', minute: '2-digit', }).format(new Date(startDate)); } useEffect(() => { setProjects(props.projects.data); }, [props]); return ( projects.length > 0 && ( <> {projects && projects.map((project) => ( <div key={project.id}> <h2> {project.attributes.project_name}</h2> <p>{formatStartDate(project.attributes.start_date)}</p> </div> ))} </> ) ); } Thanks!
[ "Yes, you are on the right track. On the useEffect, you need to format the dates and update the projects state. This ensures that the dates are correctly formatted on the server side and the client data is in sync with the server data.\nimport { useState, useEffect } from \"react\";\n\nexport default function ProjectList(props) {\n const [projects, setProjects] = useState(props.projects.data);\n\n // format the date\n function formatStartDate(startDate) {\n return Intl.DateTimeFormat(\"default\", {\n month: \"short\",\n day: \"2-digit\",\n hour: \"2-digit\",\n minute: \"2-digit\",\n }).format(new Date(startDate));\n }\n\n useEffect(() => {\n // format the dates and update the projects state\n const formattedProjects = props.projects.data.map((project) => {\n return {\n ...project,\n attributes: {\n ...project.attributes,\n start_date: formatStartDate(project.attributes.start_date),\n },\n };\n });\n\n setProjects(formattedProjects);\n }, [props]);\n\n return (\n projects.length > 0 && (\n <>\n {projects &&\n projects.map((project) => (\n <div key={project.id}>\n <h2> {project.attributes.project_name}</h2>\n <p>{project.attributes.start_date}</p>\n </div>\n ))}\n </>\n )\n );\n}\n\n" ]
[ 1 ]
[]
[]
[ "javascript", "next.js", "react_hooks", "reactjs" ]
stackoverflow_0074677448_javascript_next.js_react_hooks_reactjs.txt
Q: Matlab code of Transmission coefficient vs Wavelength I'm trying to plot a graph of Transmission coefficient vs wavelength, but I get errors in such code. my code is: syms n1 n2 n3 i theta k1z u1 u2 k2z u3 k3z lamda lam a M12 = 1/2*[1+(u2./u1)*(k1z./k2z),1-(u2./u1)*(k1z./k2z) ; 1-(u2./u1)*(k1z./k2z),1+(u2./u1)*(k1z./k2z)]; Mp = [exp(1i*k2z*a),0 ; 0,exp(-1i*k2z*a)]; M23 = 1/2.*[1+(u3./u2)*(k2z./k3z),1-(u3./u2)*(k2z./k3z) ; 1-(u3./u2)*(k2z./k3z),1+(u3./u2)*(k2z./k3z)]; Ms = M12*Mp*M23; lamda = 1:2:20; lam = 10.^(-9).*lamda; n1 = 1; n2 = 1.5; n3 = 1; u1 = 1; u2 = 1; u3 = 1; a = 0.01; N = length(lam); for j= 1:N k1z(j)=6.28*n1./lam(j);k2z(j)=6.28*n2./lam(j);k3z(j)=6.28*n3./lam(j); Ms=[exp(-a*k2z(j)*1i)*((k1z(j)*u2)./(2*k2z(j)*u1) - 1./2)*((k2z(j)*u3)./(2*k3z(j)*u2) - 1./2) + exp(a*k2z(j)*1i)*((k1z(j)*u2)./(2*k2z(j)*u1) + 1./2)*((k2z(j)*u3)./(2*k3z(j)*u2) + 1./2),- exp(-a*k2z(j)*1i)*((k1z(j)*u2)./(2*k2z(j)*u1) - 1./2)*((k2z(j)*u3)./(2*k3z(j)*u2) + 1./2) - exp(a*k2z(j)*1i)*((k1z(j)*u2)./(2*k2z(j)*u1) + 1./2)*((k2z(j)*u3)./(2*k3z(j)*u2) - 1./2); - exp(-a*k2z(j)*1i)*((k1z(j)*u2)./(2*k2z*u1) + 1./2)*((k2z(j)*u3)./(2*k3z(j)*u2) - 1./2) - exp(a*k2z(j)*1i)*((k1z(j)*u2)./(2*k2z(j)*u1) - 1./2)*((k2z(j)*u3)./(2*k3z(j)*u2) + 1./2),exp(-a*k2z(j)*1i)*((k1z(j)*u2)./(2*k2z(j)*u1) + 1./2)*((k2z(j)*u3)./(2*k3z(j)*u2) + 1./2) + exp(a*k2z(j)*1i)*((k1z(j)*u2)./(2*k2z(j)*u1) - 1./2)*((k2z(j)*u3)./(2*k3z(j)*u2) - 1./2)]; ts(i)=det(Ms)./Ms(2,2); T=(u1./u3)*(k3z./k1z).*abs(ts).^2; end figure hold on plot(lam,T,'.r') I tried to plot the graph but I get an error as "the size of the indicated variable or array appears to be changing with each loop iteration. consider preallocating for speed". I don't know what I do this. A: To plot a symbolic expression your have to evaluate it first. You can do so by Tev=T(lam). Your plotting command would then look like this: plot(lam,T(lam),'.r')
Matlab code of Transmission coefficient vs Wavelength
I'm trying to plot a graph of Transmission coefficient vs wavelength, but I get errors in such code. my code is: syms n1 n2 n3 i theta k1z u1 u2 k2z u3 k3z lamda lam a M12 = 1/2*[1+(u2./u1)*(k1z./k2z),1-(u2./u1)*(k1z./k2z) ; 1-(u2./u1)*(k1z./k2z),1+(u2./u1)*(k1z./k2z)]; Mp = [exp(1i*k2z*a),0 ; 0,exp(-1i*k2z*a)]; M23 = 1/2.*[1+(u3./u2)*(k2z./k3z),1-(u3./u2)*(k2z./k3z) ; 1-(u3./u2)*(k2z./k3z),1+(u3./u2)*(k2z./k3z)]; Ms = M12*Mp*M23; lamda = 1:2:20; lam = 10.^(-9).*lamda; n1 = 1; n2 = 1.5; n3 = 1; u1 = 1; u2 = 1; u3 = 1; a = 0.01; N = length(lam); for j= 1:N k1z(j)=6.28*n1./lam(j);k2z(j)=6.28*n2./lam(j);k3z(j)=6.28*n3./lam(j); Ms=[exp(-a*k2z(j)*1i)*((k1z(j)*u2)./(2*k2z(j)*u1) - 1./2)*((k2z(j)*u3)./(2*k3z(j)*u2) - 1./2) + exp(a*k2z(j)*1i)*((k1z(j)*u2)./(2*k2z(j)*u1) + 1./2)*((k2z(j)*u3)./(2*k3z(j)*u2) + 1./2),- exp(-a*k2z(j)*1i)*((k1z(j)*u2)./(2*k2z(j)*u1) - 1./2)*((k2z(j)*u3)./(2*k3z(j)*u2) + 1./2) - exp(a*k2z(j)*1i)*((k1z(j)*u2)./(2*k2z(j)*u1) + 1./2)*((k2z(j)*u3)./(2*k3z(j)*u2) - 1./2); - exp(-a*k2z(j)*1i)*((k1z(j)*u2)./(2*k2z*u1) + 1./2)*((k2z(j)*u3)./(2*k3z(j)*u2) - 1./2) - exp(a*k2z(j)*1i)*((k1z(j)*u2)./(2*k2z(j)*u1) - 1./2)*((k2z(j)*u3)./(2*k3z(j)*u2) + 1./2),exp(-a*k2z(j)*1i)*((k1z(j)*u2)./(2*k2z(j)*u1) + 1./2)*((k2z(j)*u3)./(2*k3z(j)*u2) + 1./2) + exp(a*k2z(j)*1i)*((k1z(j)*u2)./(2*k2z(j)*u1) - 1./2)*((k2z(j)*u3)./(2*k3z(j)*u2) - 1./2)]; ts(i)=det(Ms)./Ms(2,2); T=(u1./u3)*(k3z./k1z).*abs(ts).^2; end figure hold on plot(lam,T,'.r') I tried to plot the graph but I get an error as "the size of the indicated variable or array appears to be changing with each loop iteration. consider preallocating for speed". I don't know what I do this.
[ "To plot a symbolic expression your have to evaluate it first. You can do so by\nTev=T(lam).\nYour plotting command would then look like this:\nplot(lam,T(lam),'.r')\n\n" ]
[ 0 ]
[]
[]
[ "matlab" ]
stackoverflow_0074677316_matlab.txt
Q: Django.models custom blank value thanks for taking the time to look at this query. I'm setting an ID field within one of my Django models. This is a CharField and looks like the following: my_id = models.CharField(primary_key=True, max_length=5, validators=[RegexValidator( regex=ID_REGEX, message=ID_ERR_MSG, code=ID_ERR_CODE )]) I would like to add a default/blank or null option that calls a global or class function that will cycle through the existing IDs, find the first one that doesn't exist and assign it as the next user ID. However, when I add the call blank=foo() I get an error code that the function doesn't exist. Best, pb Edit1: I also tried using a separate utils file and importing the function, but (unsurprisingly) I get a circular import error as I need to call the class to get the objects. Edit2 (Reply to Eugene): Tried that, solved the circular import but I'm getting the following error: TypeError: super(type, obj): obj must be an instance or subtype of type Previously my override of the save function worked perfectly: def save(self, *args, **kwargs): self.full_clean() super(Staff, self).save(*args, **kwargs) The custom id function: def get_id_default(): from .models import MyObj for temp_id in range(10_000, 100_000): try: MyObj.objects.get(my_id=str(temp_id)) except ObjectDoesNotExist: break # Id doesn't exist return str(hive_id) Edit 3 (Reply to PersonPr7): Unfortunately, the kwargs doesn't seem to have my id in it. Actually, after adding a print, the kwargs dictionary comes back empty.
Save function: def save(self, *args, **kwargs): print(kwargs) # --> Returns {} if kwargs["my_id"] is None: kwargs["my_id"] = self.get_id_default() self.full_clean() super(Staff, self).save(*args, **kwargs) Where get_id_default is a class function: def get_id_default(self): for temp_id in range(10_000, 100000): try: self.objects.get(my_id=str(temp_id)) except ObjectDoesNotExist: break # Id doesn't exist return str(temp_id) Solution1: For those who may be struggling with this in the future: Create a utils/script .py file (or whatever you wanna call it) and create your custom script inside. from .models import MyModel def my_custom_default(): # your custom code return your_value Inside the main.models.py file. from django.db import models from .my_utils import my_custom_default class MyModel(models.Model): my_field = models.SomeField(..., default=my_custom_default) Solution2: Create a static function within your Model class that will create your default value. @staticmethod def get_my_default(): # your logic return your_value # NOTE: Initially I had the function use self # to retrieve the objects (self.objects.get(...)) # However, this raised an exception: AttributeError: # Manager isn't accessible via Sites instances When setting up your model, give your field some kind of default, i.e. default=None. Additionally, you need to override the model's save function like so: def save(self, *args, **kwargs): if self.your_field is None: self.my_field = self.get_my_default() self.full_clean() super().save(*args, **kwargs)
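The first-free-ID scan used by both solutions can be exercised in isolation, outside Django. In this sketch a plain set stands in for the MyObj queryset, and fetching all existing IDs once replaces the one-query-per-candidate .get() loop (the values_list call in the docstring is only a suggestion for how you might fill it in a real model):

```python
def first_free_id(existing_ids, lo=10_000, hi=100_000):
    """Return (as a string) the first ID in [lo, hi) not already taken.
    `existing_ids` is any iterable of taken IDs -- in Django you might fill
    it once with MyModel.objects.values_list('my_id', flat=True).
    Building a set makes each membership test O(1)."""
    taken = {int(i) for i in existing_ids}
    for candidate in range(lo, hi):
        if candidate not in taken:
            return str(candidate)
    raise RuntimeError("no free IDs left in range")

print(first_free_id(["10000", "10001", "10003"]))  # -> 10002
```

Note this sketch is not race-safe: two concurrent saves could pick the same ID, so in production you would still want a uniqueness constraint (which the primary key already provides) and a retry on collision.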
Django.models custom blank value
Thanks for taking the time to look at this query. I'm setting an ID field within one of my Django models. This is a CharField and looks like the following: my_id = models.CharField(primary_key=True, max_length=5, validators=[RegexValidator( regex=ID_REGEX, message=ID_ERR_MSG, code=ID_ERR_CODE )]) I would like to add a default/blank or null option that calls a global or class function that will cycle through the existing IDs, find the first one that doesn't exist and assign it as the next user ID. However, when I add the call blank=foo() I get an error that the function doesn't exist. Best, pb Edit1: I also tried using a separate utils file and importing the function, but (unsurprisingly) I get a circular import error as I need to call the class to get the objects. Edit2 (Reply to Eugene): Tried that, solved the circular import but I'm getting the following error: TypeError: super(type, obj): obj must be an instance or subtype of type Previously my override of the save function worked perfectly: def save(self, *args, **kwargs): self.full_clean() super(Staff, self).save(*args, **kwargs) The custom id function: def get_id_default(): from .models import MyObj for temp_id in range(10_000, 100_000): try: MyObj.objects.get(my_id=str(temp_id)) except ObjectDoesNotExist: break # Id doesn't exist return str(temp_id) Edit 3 (Reply to PersonPr7): Unfortunately, the kwargs doesn't seem to have my id in it. Actually, after adding a print, the kwargs dictionary comes back empty. 
Save function: def save(self, *args, **kwargs): print(kwargs) # --> Returns {} if kwargs["my_id"] is None: kwargs["my_id"] = self.get_id_default() self.full_clean() super(Staff, self).save(*args, **kwargs) Where get_id_default is a class function: def get_id_default(self): for temp_id in range(10_000, 100_000): try: self.objects.get(my_id=str(temp_id)) except ObjectDoesNotExist: break # Id doesn't exist return str(temp_id) Solution1: For those who may be struggling with this in the future: Create a utils/script .py file (or whatever you want to call it) and put your custom function inside. from .models import MyModel def my_custom_default(): # your custom code return your_value Inside the main models.py file: from django.db import models from .my_utils import my_custom_default class MyModel(models.Model): my_field = models.SomeField(..., default=my_custom_default) Solution2: Create a static function within your Model class that will create your default value. @staticmethod def get_my_default(): # your logic return your_value # NOTE: Initially I had the function use self # to retrieve the objects (self.objects.get(...)) # However, this raised an exception: AttributeError: # Manager isn't accessible via Sites instances When setting up your model, give your field some kind of default, i.e. default=None Additionally, you need to override the model's save function like so: def save(self, *args, **kwargs): if self.my_field is None: self.my_field = self.get_my_default() self.full_clean() super().save(*args, **kwargs)
[ "Try overriding the Model's save method and performing the logic there:\ndef save(self, *args, **kwargs):\n #Custom logic\n super().save(*args, **kwargs)\n\nEdit:\nYou don't need to use **kwargs.\nYou can access your whole model from the save method and loop over objects / ids.\n" ]
[ 1 ]
[]
[]
[ "django", "python_3.10" ]
stackoverflow_0074677577_django_python_3.10.txt
Q: Pandas groupby "aggregate" does not see column I am working on a huge database where I did a pandas apply to categorize the type of client based on the type of the product he consumed: Sample DF: import pandas as pd import numpy as np from datetime import datetime num_variables = 1000 rng = np.random.default_rng() data = pd.DataFrame({ 'id' : np.random.randint(1,999999999,num_variables), 'date' : [np.random.choice(pd.date_range(datetime(2021,1,1),datetime(2022,12,31))) for i in range(num_variables)], 'product' : [np.random.choice(['giftcards', 'afiliates']) for i in range(num_variables)], 'brand' : [np.random.choice(['brand_1', 'brand_2', 'brand_4', 'brand_6']) for i in range(num_variables)], 'gmv' : rng.random(num_variables) * 100, 'revenue' : rng.random(num_variables) * 100,}) data = data.astype({'product':'category', 'brand':'category'}) base = data.groupby(['id', 'product']).aggregate({'product' : 'count'}) base = base.unstack() Now I need to group clients by the "type" column and just count how many there are in each group. First, the categorization function and its application: def setup(row): if row[('product', 'afiliates')] >= 1 and row[('product', 'giftcards')] == 0: return 'afiliates' if row[('product', 'afiliates')] == 0 and row[('product', 'giftcards')] >= 1: return 'gift' if row[('product', 'afiliates')] >= 1 and row[('product', 'giftcards')] >= 1: return 'both' base['type'] = base.apply(setup, axis=1) base.reset_index(inplace=True) So far, so good. If I run a groupby.agg, I get these results: results = base[['type','id']].groupby(['type'], dropna=False).agg('count') but if instead of agg I try aggregate, it does not work. results = base[['type','id']].groupby(['type']).aggregate({'id': 'count'}) Output exceeds the size limit. 
Open the full output data in a text editor --------------------------------------------------------------------------- KeyError Traceback (most recent call last) Cell In[10], line 2 1 #results = base[['type','id']].groupby(['type'], dropna=False).agg('count') ----> 2 results = base[['type','id']].groupby(['type']).aggregate({'id': 'count'}) File c:\Users\fabio\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\core\groupby\generic.py:894, in DataFrameGroupBy.aggregate(self, func, engine, engine_kwargs, *args, **kwargs) 891 func = maybe_mangle_lambdas(func) 893 op = GroupByApply(self, func, args, kwargs) --> 894 result = op.agg() 895 if not is_dict_like(func) and result is not None: 896 return result File c:\Users\fabio\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\core\apply.py:169, in Apply.agg(self) 166 return self.apply_str() 168 if is_dict_like(arg): --> 169 return self.agg_dict_like() 170 elif is_list_like(arg): 171 # we require a list, but not a 'str' 172 return self.agg_list_like() File c:\Users\fabio\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\core\apply.py:478, in Apply.agg_dict_like(self) 475 selected_obj = obj._selected_obj 476 selection = obj._selection --> 478 arg = self.normalize_dictlike_arg("agg", selected_obj, arg) ... 606 # eg. {'A' : ['mean']}, normalize all to 607 # be list-likes 608 # Cannot use func.values() because arg may be a Series KeyError: "Column(s) ['id'] do not exist" What am I missing? A: I´ve made the same question on Pandas Github. They helped me, I will reproduce the answer here. you can see how to access your columns using: print(base.columns.tolist()) [('id', ''), ('product', 'afiliates'), ('product', 'giftcards'), ('type', '')] When you have a MultiIndex for columns, you need to specify each level as a tuple. 
So you can do: base[['type','id']].groupby(['type']).aggregate({('id', ''): 'count'}) Regarding the title of this issue - agg and aggregate are aliases, they do not behave differently. I suppose there is a bit of an oddity here - why can you do base[['id']] but not specify {'id': ...} in agg? The reason is because column selection can return multiple columns (e.g. in the example here, base[['product']] returns a DataFrame with two columns), whereas agg must have one column and one column only. Thus, it is necessary to specify all levels in agg.
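The accepted fix is easy to reproduce on a toy frame. The sketch below builds a small DataFrame with the same two-level column layout by hand (the column and category names are invented for illustration, standing in for the question's product columns):

```python
import pandas as pd

# Build a frame whose columns mimic base's MultiIndex layout from the question
base = pd.DataFrame(
    {("product", "afil"): [1, 0, 1], ("product", "gift"): [1, 1, 0]},
    index=pd.Index([1, 2, 3], name="id"),
)
base["type"] = ["both", "gift", "afil"]  # id 1 bought both, id 2 gift only, id 3 afil only
base.reset_index(inplace=True)
print(base.columns.tolist())
# [('id', ''), ('product', 'afil'), ('product', 'gift'), ('type', '')]

# aggregate({'id': 'count'}) raises KeyError here; naming the full tuple works:
results = base[["type", "id"]].groupby(["type"]).aggregate({("id", ""): "count"})
print(results)
```

An alternative that avoids tuple keys entirely is to flatten the columns first, e.g. `base.columns = ["_".join(filter(None, c)) for c in base.columns]`, after which plain string keys work again in agg.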
Pandas groupby "aggregate" does not see column
I am working on a huge database where I did a pandas apply to categorize the type of client based on the type of the product he consumed: Sample DF: import pandas as pd import numpy as np from datetime import datetime num_variables = 1000 rng = np.random.default_rng() data = pd.DataFrame({ 'id' : np.random.randint(1,999999999,num_variables), 'date' : [np.random.choice(pd.date_range(datetime(2021,1,1),datetime(2022,12,31))) for i in range(num_variables)], 'product' : [np.random.choice(['giftcards', 'afiliates']) for i in range(num_variables)], 'brand' : [np.random.choice(['brand_1', 'brand_2', 'brand_4', 'brand_6']) for i in range(num_variables)], 'gmv' : rng.random(num_variables) * 100, 'revenue' : rng.random(num_variables) * 100,}) data = data.astype({'product':'category', 'brand':'category'}) base = data.groupby(['id', 'product']).aggregate({'product' : 'count'}) base = base.unstack() Now I need to group clients by the "type" column and just count how many there are in each group. First, the categorization function and its application: def setup(row): if row[('product', 'afiliates')] >= 1 and row[('product', 'giftcards')] == 0: return 'afiliates' if row[('product', 'afiliates')] == 0 and row[('product', 'giftcards')] >= 1: return 'gift' if row[('product', 'afiliates')] >= 1 and row[('product', 'giftcards')] >= 1: return 'both' base['type'] = base.apply(setup, axis=1) base.reset_index(inplace=True) So far, so good. If I run a groupby.agg, I get these results: results = base[['type','id']].groupby(['type'], dropna=False).agg('count') but if instead of agg I try aggregate, it does not work. results = base[['type','id']].groupby(['type']).aggregate({'id': 'count'}) Output exceeds the size limit. 
Open the full output data in a text editor --------------------------------------------------------------------------- KeyError Traceback (most recent call last) Cell In[10], line 2 1 #results = base[['type','id']].groupby(['type'], dropna=False).agg('count') ----> 2 results = base[['type','id']].groupby(['type']).aggregate({'id': 'count'}) File c:\Users\fabio\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\core\groupby\generic.py:894, in DataFrameGroupBy.aggregate(self, func, engine, engine_kwargs, *args, **kwargs) 891 func = maybe_mangle_lambdas(func) 893 op = GroupByApply(self, func, args, kwargs) --> 894 result = op.agg() 895 if not is_dict_like(func) and result is not None: 896 return result File c:\Users\fabio\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\core\apply.py:169, in Apply.agg(self) 166 return self.apply_str() 168 if is_dict_like(arg): --> 169 return self.agg_dict_like() 170 elif is_list_like(arg): 171 # we require a list, but not a 'str' 172 return self.agg_list_like() File c:\Users\fabio\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\core\apply.py:478, in Apply.agg_dict_like(self) 475 selected_obj = obj._selected_obj 476 selection = obj._selection --> 478 arg = self.normalize_dictlike_arg("agg", selected_obj, arg) ... 606 # eg. {'A' : ['mean']}, normalize all to 607 # be list-likes 608 # Cannot use func.values() because arg may be a Series KeyError: "Column(s) ['id'] do not exist" What am I missing?
[ "I've asked the same question on the pandas GitHub.\nThey helped me, I will reproduce the answer here.\nYou can see how to access your columns using:\nprint(base.columns.tolist())\n[('id', ''), ('product', 'afiliates'), ('product', 'giftcards'), ('type', '')]\n\nWhen you have a MultiIndex for columns, you need to specify each level as a tuple. So you can do:\nbase[['type','id']].groupby(['type']).aggregate({('id', ''): 'count'})\n\nRegarding the title of this issue - agg and aggregate are aliases, they do not behave differently.\nI suppose there is a bit of an oddity here - why can you do base[['id']] but not specify {'id': ...} in agg? The reason is because column selection can return multiple columns (e.g. in the example here, base[['product']] returns a DataFrame with two columns), whereas agg must have one column and one column only. Thus, it is necessary to specify all levels in agg.\n" ]
[ 0 ]
[]
[]
[ "pandas" ]
stackoverflow_0074417232_pandas.txt
Q: Avro, Hive or HBASE - What to use for 10 mio. records daily? I have the following requirements: I need to process around 20,000 elements per day (let's call them baskets), each of which generates between 100 and 1,000 records (let's call them products in basket). A single record has about 10 columns; each row is about 500B - 1KB in size (in total). That means I produce around 5 to max. 20 million records per day. From an analytical perspective I need to do some summing up and filtering, and especially show trends over multiple days, etc. The solution is Python based and I am able to use anything: Hadoop, Microsoft SQL Server, Google BigQuery, etc. I am reading through lots of articles about Avro, Parquet, Hive, HBase, etc. I first tested something small with SQL Server and two tables (one for the main elements and the other one for the produced items over all days). But with this, the database gets quite large very fast + it is not that fast when trying to access, filter, etc. So I thought about using Avro and creating a single Avro file per day with the corresponding items. And when I want to analyse them, read them with Python, or read multiple of them when I need to analyse multiple days. When I think about this, this could be way too large (30 daily files with 10 million records each) ... There must be something else. Then I came across Hive and HBase. But now I am totally confused. Anyone out there who can sort things out in the right manner? What is the easiest or most general way to handle this kind of data? A: If you want to analyze data based on columns and aggregates, ORC or Parquet are better. If you don't plan on managing Hadoop infrastructure, then Hive or HBase wouldn't be acceptable. I agree a SQL Server might struggle with large queries... Out of the options listed, that narrows it down to BigQuery. If you want to explore alternative solutions in the same space, Apache Pinot or Druid support analytical use cases. 
Otherwise, throw files (as parquet or ORC) into GCS and use pyspark
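For a rough sense of scale, the volumes from the question can be bounded with quick arithmetic (pure Python; the per-row byte sizes come straight from the question, and compressed columnar formats like Parquet or ORC would typically store considerably less than these raw figures):

```python
def daily_estimate(baskets=20_000, rows_per_basket=(100, 1_000), row_bytes=(500, 1_024)):
    """Lower/upper bounds on records and raw bytes produced per day."""
    lo_rows = baskets * rows_per_basket[0]
    hi_rows = baskets * rows_per_basket[1]
    lo_bytes = lo_rows * row_bytes[0]
    hi_bytes = hi_rows * row_bytes[1]
    return (lo_rows, hi_rows), (lo_bytes, hi_bytes)

(lo_r, hi_r), (lo_b, hi_b) = daily_estimate()
print(f"{lo_r:,} - {hi_r:,} records/day")                 # 2,000,000 - 20,000,000
print(f"{lo_b / 1e9:.1f} - {hi_b / 1e9:.1f} GB/day raw")  # 1.0 - 20.5 GB/day raw
```

So a month of data is on the order of tens to a few hundred GB raw before compression, which is well within reach of BigQuery or of partitioned Parquet files queried with pyspark.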
Avro, Hive or HBASE - What to use for 10 mio. records daily?
I have the following requirements: I need to process around 20,000 elements per day (let's call them baskets), each of which generates between 100 and 1,000 records (let's call them products in basket). A single record has about 10 columns; each row is about 500B - 1KB in size (in total). That means I produce around 5 to max. 20 million records per day. From an analytical perspective I need to do some summing up and filtering, and especially show trends over multiple days, etc. The solution is Python based and I am able to use anything: Hadoop, Microsoft SQL Server, Google BigQuery, etc. I am reading through lots of articles about Avro, Parquet, Hive, HBase, etc. I first tested something small with SQL Server and two tables (one for the main elements and the other one for the produced items over all days). But with this, the database gets quite large very fast + it is not that fast when trying to access, filter, etc. So I thought about using Avro and creating a single Avro file per day with the corresponding items. And when I want to analyse them, read them with Python, or read multiple of them when I need to analyse multiple days. When I think about this, this could be way too large (30 daily files with 10 million records each) ... There must be something else. Then I came across Hive and HBase. But now I am totally confused. Anyone out there who can sort things out in the right manner? What is the easiest or most general way to handle this kind of data?
[ "If you want to analyze data based on columns and aggregates, ORC or Parquet are better. If you don't plan on managing Hadoop infrastructure, then Hive or HBase wouldn't be acceptable. I agree a SQL Server might struggle with large queries... Out of the options listed, that narrows it down to BigQuery.\nIf you want to explore alternative solutions in the same space, Apache Pinot or Druid support analytical use cases.\nOtherwise, throw files (as parquet or ORC) into GCS and use pyspark\n" ]
[ 0 ]
[]
[]
[ "avro", "hbase", "hive", "parquet", "python" ]
stackoverflow_0074655522_avro_hbase_hive_parquet_python.txt
Q: Irrelevant topic comes out using BERTopic I'm currently trying to learn how to use BERTopic. I'm new to Python and was following a guide. The guide was originally for some sort of Twitter data, but I figured I would try another set of data. When I use article abstract data, the result looks like the attached screenshot, and I figured one of the perks of using BERTopic was not having to preprocess data, and there was no guide on preprocessing data. Can someone please share ideas on why the result is like this? I thought it would show a certain topic. A: Due to the TF-IDF like approach (c-TF-IDF) it typically removes stop words automatically if many topics were created. From your image, my guess would be that only two topics were created and that there are quite some stopwords in there. Removing those is quite straightforward and is not really a preprocessing approach as it removes the stop words after creating the clusters of semantically similar documents. We only have to do one of the following: from sklearn.feature_extraction.text import CountVectorizer vectorizer_model = CountVectorizer(stop_words="english") topic_model = BERTopic(vectorizer_model=vectorizer_model) or from bertopic import BERTopic from bertopic.vectorizers import ClassTfidfTransformer ctfidf_model = ClassTfidfTransformer(reduce_frequent_words=True) topic_model = BERTopic(ctfidf_model=ctfidf_model)
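The effect of the stop-word handling described in the answer can be illustrated with a toy filter. This is only a sketch of the idea: in BERTopic the real work is done by the CountVectorizer / ClassTfidfTransformer configuration shown above, which uses a much larger built-in English stop-word list than this tiny hand-picked set:

```python
STOP_WORDS = {"the", "a", "an", "and", "of", "is", "in", "to", "from", "that"}

def informative_terms(doc):
    """Drop stop words so only terms that can characterize a topic remain."""
    return [w for w in doc.lower().split() if w not in STOP_WORDS]

print(informative_terms("The model learns topics from the data"))
# ['model', 'learns', 'topics', 'data']
```

With few topics, c-TF-IDF cannot down-weight words that appear in every cluster, which is why stop words dominate the keyword lists until they are filtered out explicitly like this.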
Irrelevant topic comes out using BERTopic
I'm currently trying to learn how to use BERTopic. I'm new to Python and was following a guide. The guide was originally for some sort of Twitter data, but I figured I would try another set of data. When I use article abstract data, the result looks like the attached screenshot, and I figured one of the perks of using BERTopic was not having to preprocess data, and there was no guide on preprocessing data. Can someone please share ideas on why the result is like this? I thought it would show a certain topic.
[ "Due to the TF-IDF like approach (c-TF-IDF) it typically removes stop words automatically if many topics were created. From your image, my guess would be that only two topics were created and that there are quite some stopwords in there. Removing those is quite straightforward and is not really a preprocessing approach as it removes the stop words after creating the clusters of semantically similar documents. We only have to do one of the following:\nfrom sklearn.feature_extraction.text import CountVectorizer\nvectorizer_model = CountVectorizer(stop_words=\"english\")\ntopic_model = BERTopic(vectorizer_model=vectorizer_model)\n\nor\nfrom bertopic import BERTopic\nfrom bertopic.vectorizers import ClassTfidfTransformer\n\nctfidf_model = ClassTfidfTransformer(reduce_frequent_words=True)\ntopic_model = BERTopic(ctfidf_model=ctfidf_model )\n\n\n" ]
[ 0 ]
[]
[]
[ "bert_language_model" ]
stackoverflow_0074665638_bert_language_model.txt
Q: Sheets script - making changes to sheets other than the initial active sheet I have a script running that reformats borders on a variable range on the active sheet. I would like to have it cycle through other sheets in the same workbook, without those changes being visible to the user (ie - the GUI of the current screen stays as the only thing visible while the script runs). Currently each sheet is displayed as it loops and performs the reformat. How can I keep the initial sheet visible, and have the script run in the background? Obviously it is the use of ".setActiveSheet" and ".getActiveSheet" causing this. Still pretty new at all this, so any suggestions to otherwise cleanup/condense/speed up greatly appreciated. function allBorders(){ var spreadsheet = SpreadsheetApp.getActiveSpreadsheet(); var ss = SpreadsheetApp.getActiveSheet(); var sheetname = ss.getSheetName(); var range; var headrows = 3; var lr = ss.getLastRow() //last row with data var lc = ss.getLastColumn() var mr = ss.getMaxRows() //max possible rows var ns = SpreadsheetApp.getActiveSpreadsheet().getNumSheets() Logger.log("Number of sheets: "+ns) for (var i = 0; i < ns; i++){ Logger.log("i value: "+i) spreadsheet.setActiveSheet(spreadsheet.getSheets()[i]); sheetname = spreadsheet.getSheetName(); Logger.log("Sheetname: "+sheetname) switch(sheetname){ case "Openings": ss = SpreadsheetApp.getActiveSheet(); headrows = 6; lr = ss.getLastRow() //last row with data lc = ss.getLastColumn() mr = ss.getMaxRows() //max possible rows break; case "My Trips": ss = SpreadsheetApp.getActiveSheet(); headrows = 6; lr = ss.getLastRow() //last row with data lc = ss.getLastColumn() mr = ss.getMaxRows() //max possible rows break; case "All_Trips": ss = SpreadsheetApp.getActiveSheet(); headrows = 6; lr = ss.getLastRow() //last row with data lc = ss.getLastColumn() mr = ss.getMaxRows() //max possible rows break; default: break; } range = ss.getRange((1+headrows),1,mr,lc) //clear all rows below header 
range.setBorder(false,false,false,false,false,false); range = ss.getRange((1+headrows),1,(lr-headrows),lc) //border active rows range.setBorder(true, true, true, true, true, true); } } A: All sheets except the first function lfunko() { const ss = SpreadsheetApp.getActive(); const allshtsexceptthefirst = ss.getSheets().filter((_,i) => i > 0).map(sh => sh.getName()) Logger.log(allshtsexceptthefirst) } A: Use Array.forEach(), like this: function surplusBorders() { const sheetsToFormatRegex = /./i; const ss = SpreadsheetApp.getActive(); ss.toast('Formatting borders...'); let numFormattedSheets = 0; ss.getSheets().forEach(sheet => { if (sheet.getName().match(sheetsToFormatRegex)) { sheet.getRange('A1:Z').setBorder(false, false, false, false, false, false); sheet.getRange(`A${sheet.getFrozenRows() + 1}:Z${sheet.getLastRow()}`).setBorder(true, true, true, true, true, true); numFormattedSheets += 1; } }); ss.toast(`Done. Formatted borders in ${numFormattedSheets} sheets.`); } To only format some subset of the sheets, adjust the sheetsToFormatRegex regular expression. A: Found one way around the main problem. Apparently G doesn't have the ability to perform a "when selected" trigger for sheets. But the workaround posted here seems to accomplish that. And I can simplify/speed up the redraw portion a lot by eliminating the loops. Trigger on sheet change
Sheets script - making changes to sheets other than the initial active sheet
I have a script running that reformats borders on a variable range on the active sheet. I would like to have it cycle through other sheets in the same workbook, without those changes being visible to the user (ie - the GUI of the current screen stays as the only thing visible while the script runs). Currently each sheet is displayed as it loops and performs the reformat. How can I keep the initial sheet visible, and have the script run in the background? Obviously it is the use of ".setActiveSheet" and ".getActiveSheet" causing this. Still pretty new at all this, so any suggestions to otherwise cleanup/condense/speed up greatly appreciated. function allBorders(){ var spreadsheet = SpreadsheetApp.getActiveSpreadsheet(); var ss = SpreadsheetApp.getActiveSheet(); var sheetname = ss.getSheetName(); var range; var headrows = 3; var lr = ss.getLastRow() //last row with data var lc = ss.getLastColumn() var mr = ss.getMaxRows() //max possible rows var ns = SpreadsheetApp.getActiveSpreadsheet().getNumSheets() Logger.log("Number of sheets: "+ns) for (var i = 0; i < ns; i++){ Logger.log("i value: "+i) spreadsheet.setActiveSheet(spreadsheet.getSheets()[i]); sheetname = spreadsheet.getSheetName(); Logger.log("Sheetname: "+sheetname) switch(sheetname){ case "Openings": ss = SpreadsheetApp.getActiveSheet(); headrows = 6; lr = ss.getLastRow() //last row with data lc = ss.getLastColumn() mr = ss.getMaxRows() //max possible rows break; case "My Trips": ss = SpreadsheetApp.getActiveSheet(); headrows = 6; lr = ss.getLastRow() //last row with data lc = ss.getLastColumn() mr = ss.getMaxRows() //max possible rows break; case "All_Trips": ss = SpreadsheetApp.getActiveSheet(); headrows = 6; lr = ss.getLastRow() //last row with data lc = ss.getLastColumn() mr = ss.getMaxRows() //max possible rows break; default: break; } range = ss.getRange((1+headrows),1,mr,lc) //clear all rows below header range.setBorder(false,false,false,false,false,false); range = 
ss.getRange((1+headrows),1,(lr-headrows),lc) //border active rows range.setBorder(true, true, true, true, true, true); } }
[ "All sheets except the first\nfunction lfunko() {\n const ss = SpreadsheetApp.getActive();\n const allshtsexceptthefirst = ss.getSheets().filter((_,i) => i > 0).map(sh => sh.getName())\n Logger.log(allshtsexceptthefirst)\n}\n\n", "Use Array.forEach(), like this:\nfunction surplusBorders() {\n const sheetsToFormatRegex = /./i;\n const ss = SpreadsheetApp.getActive();\n ss.toast('Formatting borders...');\n let numFormattedSheets = 0;\n ss.getSheets().forEach(sheet => {\n if (sheet.getName().match(sheetsToFormatRegex)) {\n sheet.getRange('A1:Z').setBorder(false, false, false, false, false, false);\n sheet.getRange(`A${sheet.getFrozenRows() + 1}:Z${sheet.getLastRow()}`).setBorder(true, true, true, true, true, true);\n numFormattedSheets += 1;\n }\n });\n ss.toast(`Done. Formatted borders in ${numFormattedSheets} sheets.`);\n}\n\nTo only format some subset of the sheets, adjust the sheetsToFormatRegex regular expression.\n", "Found one way around the main problem. Apparently G doesn't have the ability to perform a \"when selected\" trigger for sheets. But the workaround posted here seems to accomplish that. And I can simplify/speed up the redraw portion a lot by eliminating the loops.\nTrigger on sheet change\n" ]
[ 0, 0, 0 ]
[]
[]
[ "google_apps_script", "google_sheets" ]
stackoverflow_0074668679_google_apps_script_google_sheets.txt
Q: Pointcut on @Query annotation of JPA repository I'm trying to add a custom annotation for JPA repository methods to have advice on the @Query value. Below is the piece of code I tried MyFilterAspect class @Aspect @Component public class MyFilterAspect { @Pointcut("execution(* *(..)) && @within(org.springframework.data.jpa.repository.Query)") private void createQuery(){} @Around("createQuery()") public void applyFilter(JointPoint jp) { } } The Repository code @MyFilter @Query(Select * ...) MyObject findByNameAndClass(...) So I keep getting the error that createQuery() is never called. In MyFilterAspect I'm trying to update the Query value using the advice. What am I doing wrong?
Pointcut on @Query annotation of JPA repository
I'm trying to add a custom annotation for JPA repository methods to have advice on the @Query value. Below is the piece of code I tried MyFilterAspect class @Aspect @Component public class MyFilterAspect { @Pointcut("execution(* *(..)) && @within(org.springframework.data.jpa.repository.Query)") private void createQuery(){} @Around("createQuery()") public void applyFilter(JointPoint jp) { } } The Repository code @MyFilter @Query(Select * ...) MyObject findByNameAndClass(...) So I keep getting the error that createQuery() is never called. In MyFilterAspect I'm trying to update the Query value using the advice. What am I doing wrong?
[ "To capture the annotation I often use this pattern:\n \"execution(@AnnotationToCapture * *(..)) && @annotation(annotationParam)\"\n\nThen in the proceeding method, you can have the annotation as parameter:\n(..., AnnotationToCapture annotationParam, ...)\n\n", "You should not use the @within annotation for this purpose. Instead, you should use @annotation, as follows:\n@Pointcut(\"execution(* *(..)) && @annotation(org.springframework.data.jpa.repository.Query)\")\nprivate void createQuery(){}\n\nAlso, you should use JoinPoint to access the method signature and then you can extract the annotation from the signature.\n", "your class should look as follows\n @Aspect\n @Component\n public class MyFilterAspect {\n @Pointcut(\"execution(* *(..)) && @annotation(org.springframework.data.jpa.repository.Query)\")\n private void createQuery(){}\n\n @Around(\"createQuery()\")\n public void applyFilter(ProceedingJoinPoint jp) {\n }\n}\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "aspect", "jpa", "spring_aop", "spring_data_jpa", "sql" ]
stackoverflow_0074677485_aspect_jpa_spring_aop_spring_data_jpa_sql.txt
Q: How to add click event on a cell in AgGrid I'm using AgGrid v28 with Angular 14. I've a very simple requirement. On click of details values I have to call a method. Screenshot: My code: columnDefs = [ {...}, { headerName: 'Details', field: 'partDescription', filter: false, minWidth: 50, cellRendererParams: { onClick: this.showPartsHierarchcy.bind(this), label: 'partDescription', icon: '', }, }, ] showPartsHierarchcy() { ... } The details column is populated but on click my method showPartsHierarchcy is not called. Please correct my mistake. I tried: This answer. cellRenderer: function(params: any) { return '<a (click)="showPartsHierarchcy">'+ params.value+'</a>' } No luck. Please help me. A: I looked more deeply into the documentation of Grid Events of AgGrid. This is what I changed in my code: { headerName: 'Details', field: 'partDescription', filter: false, minWidth: 50, cellStyle: function (params: any) { return { color: '#416AD3' }; }, onCellClicked: (event: CellClickedEvent) => console.log('Cell was clicked'), }, The callback for the cellClicked event is gridOptions.onCellClicked. const gridOptions = { // Add event handlers onCellClicked: (event: CellClickedEvent) => console.log('Cell was clicked'), } No cellRenderer required. Replace it with onCellClicked.
How to add click event on a cell in AgGrid
I'm using AgGrid v28 with Angular 14. I've a very simple requirement. On click of details values I have to call a method. Screenshot: My code: columnDefs = [ {...}, { headerName: 'Details', field: 'partDescription', filter: false, minWidth: 50, cellRendererParams: { onClick: this.showPartsHierarchcy.bind(this), label: 'partDescription', icon: '', }, }, ] showPartsHierarchcy() { ... } The details column is populated but on click my method showPartsHierarchcy is not called. Please correct my mistake. I tried: This answer. cellRenderer: function(params: any) { return '<a (click)="showPartsHierarchcy">'+ params.value+'</a>' } No luck. Please help me.
[ "I looked more deeply into the documentation of Grid Events of AgGrid. This is what I changed in my code:\n {\n headerName: 'Details',\n field: 'partDescription',\n filter: false,\n minWidth: 50,\n cellStyle: function (params: any) {\n return { color: '#416AD3' };\n },\n onCellClicked: (event: CellClickedEvent) =>\n console.log('Cell was clicked'),\n },\n\nThe callback for the cellClicked event is gridOptions.onCellClicked.\nconst gridOptions = {\n // Add event handlers\n onCellClicked: (event: CellClickedEvent) => console.log('Cell was clicked'),\n}\n\nNo cellRenderer required. Replace it with onCellClicked.\n" ]
[ 0 ]
[]
[]
[ "ag_grid", "angular" ]
stackoverflow_0074677788_ag_grid_angular.txt
Q: Why do I get an exception when Spring Security tries to access MongoDB? I use Spring Security and MongoDB in my Java project. On this row: Authentication authenticate = authenticationManager.authenticate(authentication); In the method attemptAuthentication, which is a member of the JwtEmailAndPasswordAuthenticationFilter class, I get this error: org.springframework.security.authentication.BadCredentialsException: Bad credentials Here is the connection string to the MongoDB: spring: data: mongodb: uri: mongodb://localhost:27017/?readPreference=primary&appname=MongoDB%20Compass&directConnection=true&ssl=false I did not set any credentials on MongoDB; MongoDB can be accessed without creds. Any idea why I get the exception above if no credentials are set on the database?
Why do I get an exception when Spring Security tries to access MongoDB?
I use Spring Security and MongoDB in my Java project. On this row: Authentication authenticate = authenticationManager.authenticate(authentication); in the method attemptAuthentication, which is a member of the JwtEmailAndPasswordAuthenticationFilter class, I get this error: org.springframework.security.authentication.BadCredentialsException: Bad credentials Here is the connection string to MongoDB: spring: data: mongodb: uri: mongodb://localhost:27017/?readPreference=primary&appname=MongoDB%20Compass&directConnection=true&ssl=false I did not set any credentials on MongoDB; it can be accessed without credentials. Any idea why I get the exception above if no credentials are set on the database?
[ "This error is not coming due to MongoDB access error.\nThis is due to the username or password mismatches inside the authenticate method\nThe BadCredentialsException is thrown by the authenticationManager when it is unable to authenticate the provided credentials. This can happen for a number of reasons, such as when the username or password is incorrect, or when the account is locked or disabled.\nIn the context of your code, it looks like you are trying to authenticate a user by calling the authenticate method on the authenticationManager. This method takes an Authentication object as a parameter, which represents the user's credentials. If the authenticationManager is unable to authenticate the user based on the provided credentials, it will throw a BadCredentialsException.\nTo fix this error, you will need to make sure that the Authentication object that you pass to the authenticate method contains the correct username and password for the user that you are trying to authenticate. You may also need to check that the user's account is not locked or disabled, and that it has the appropriate permissions to access the resources that you are trying to protect.\n" ]
[ 0 ]
[ "There could be several reasons why you are getting the BadCredentialsException exception when trying to access MongoDB with Spring security.\nFirst, it is possible that you are using incorrect credentials when trying to authenticate. If you are not setting any credentials on MongoDB, you should not provide any credentials when authenticating with Spring security. If you are providing incorrect credentials, it is possible that the authentication process is failing and throwing the BadCredentialsException exception.\nSecond, it is possible that the MongoDB server is not running or is not accessible from your Java application. If the server is not running or is not accessible, the authentication process will fail and throw the BadCredentialsException exception.\nFinally, it is possible that the connection string in your Spring configuration is incorrect or incomplete. If the connection string is incorrect or incomplete, the authentication process will fail and throw the BadCredentialsException exception.\nTo troubleshoot this issue, you can try the following steps:\nCheck if the MongoDB server is running and is accessible from your Java application. You can do this by trying to connect to the server using the mongodb command line tool or by using a MongoDB GUI tool such as Compass.\nCheck if you are providing the correct credentials when authenticating with Spring security. If you are not setting any credentials on MongoDB, you should not provide any credentials when authenticating with Spring security.\nCheck if the connection string in your Spring configuration is correct and complete. You can check the connection string by logging it out or by comparing it to a working connection string.\nOnce you have identified and resolved the issue, the BadCredentialsException exception should no longer occur when trying to access MongoDB with Spring security.\n" ]
[ -1 ]
[ "java", "mongodb", "spring", "spring_security" ]
stackoverflow_0074454657_java_mongodb_spring_spring_security.txt
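The key point in the accepted answer above is that BadCredentialsException reflects a username/password mismatch, not a MongoDB access failure. A minimal, hypothetical Python sketch of what an authentication manager's credential check boils down to (the `hash_password` and `authenticate` helpers are illustrative only; SHA-256 stands in for the salted, slow PasswordEncoder, such as BCrypt, that Spring Security actually delegates to):

```python
import hashlib

def hash_password(password: str) -> str:
    # Illustration only: real systems use a salted, slow hash (e.g. BCrypt).
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

def authenticate(stored_hash: str, supplied_password: str) -> bool:
    """Roughly what an authentication manager does with the supplied credentials."""
    return stored_hash == hash_password(supplied_password)

stored = hash_password("secret")
print(authenticate(stored, "secret"))  # True: credentials match
print(authenticate(stored, "wrong"))   # False: this is the BadCredentials case
```

If this check fails, no amount of loosening MongoDB's own access control will help; the stored user record and the supplied Authentication object have to agree.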
Q: Circularity Calculation with Perimeter & Area of a Simple Circle Circularity signifies the comparability of the shape to a circle. A measure of circularity is the ratio of the shape area to the area of a circle having an identical perimeter (we denote it as Circle Area), as represented in the equation below. Sample Circularity = Sample Area / Circle Area Let the perimeter of the shape be P, so P = 2 * pi * r, then P^2 = 4 * pi^2 * r^2 = 4 * pi * (pi * r^2) = 4 * pi * Circle Area. Thus Circle Area = Sample Perimeter^2 / (4 * pi), which implies Sample Circularity = (4 * pi * Sample Area) / (Sample Perimeter^2) So with the help of math, there is no need to find an algorithm to calculate a fitted circle or draw it correctly over the shape, etc. This statistic equals 1 for a circular object and less than 1 for an object that departs from circularity, except that it is relatively insensitive to irregular boundaries. OK, that's fine, but... In Python I try to calculate the circularity of a simple circle, but I always get 1.11. My Python approach is: import cv2 import math Gray_image = cv2.imread(Input_Path, cv2.IMREAD_GRAYSCALE) cnt , her = cv2.findContours(Gray_image, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE) Perimeter = cv2.arcLength(cnt[0], True) Area = cv2.contourArea(cnt[0]) Circularity = math.pow(Perimeter, 2) / (4 * math.pi * Area) print(round(Circularity , 2)) If I use Perimeter = len(cnt[0]), the answer is 0.81, which is incorrect again. Thank you for taking the time to answer. To draw a circle, use the following commands: import cv2 import numpy as np Fill_Circle = np.zeros((1000, 1000, 3)) cv2.circle(Fill_Circle, (500, 500), 450, (255, 255, 255), -1) cv2.imwrite(Path_to_Save, Fill_Circle) A: As I mentioned in this recent answer to a related question, OpenCV's perimeter estimate is not good enough to compute the circularity feature. OpenCV computes the perimeter by adding up all the distances between vertices of the polygon built from the edge pixels of the image. This length is typically larger than the actual perimeter of the object imaged. This blog post of mine describes the problem well, and provides a better way to estimate the perimeter of an object from a binary image. This better method is implemented (among other places) in DIPlib, in the function dip.MeasurementTool.Measure(), as the feature "Perimeter". [Disclosure: I'm an author of DIPlib]. The feature "Roundness" implements what you refer to as circularity here (these feature names are used interchangeably in the literature). There is a different feature referred to as "Circularity" in DIPlib, which does not depend on the perimeter and typically is more precise if the shape is close to a circle. This is how you would use that function: import diplib as dip import cv2 import numpy as np Fill_Circle = np.zeros((1000, 1000, 3)) cv2.circle(Fill_Circle, (500, 500), 450, (255, 255, 255), -1) labels = dip.Label(Fill_Circle[:, :, 0] > 0) msr = dip.MeasurementTool.Measure(labels, features=["Perimeter", "Size", "Roundness", "Circularity"]) print(msr) Circularity = msr[1]["Roundness"][0] For your circle, I see: area = 636121.0 perimeter = 2829.27 roundness = 0.9986187 (this is what you refer to as circularity) circularity = 0.0005368701 (closer to 0 means more like a circle)
Circularity Calculation with Perimeter & Area of a Simple Circle
Circularity signifies the comparability of the shape to a circle. A measure of circularity is the ratio of the shape area to the area of a circle having an identical perimeter (we denote it as Circle Area), as represented in the equation below. Sample Circularity = Sample Area / Circle Area Let the perimeter of the shape be P, so P = 2 * pi * r, then P^2 = 4 * pi^2 * r^2 = 4 * pi * (pi * r^2) = 4 * pi * Circle Area. Thus Circle Area = Sample Perimeter^2 / (4 * pi), which implies Sample Circularity = (4 * pi * Sample Area) / (Sample Perimeter^2) So with the help of math, there is no need to find an algorithm to calculate a fitted circle or draw it correctly over the shape, etc. This statistic equals 1 for a circular object and less than 1 for an object that departs from circularity, except that it is relatively insensitive to irregular boundaries. OK, that's fine, but... In Python I try to calculate the circularity of a simple circle, but I always get 1.11. My Python approach is: import cv2 import math Gray_image = cv2.imread(Input_Path, cv2.IMREAD_GRAYSCALE) cnt , her = cv2.findContours(Gray_image, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE) Perimeter = cv2.arcLength(cnt[0], True) Area = cv2.contourArea(cnt[0]) Circularity = math.pow(Perimeter, 2) / (4 * math.pi * Area) print(round(Circularity , 2)) If I use Perimeter = len(cnt[0]), the answer is 0.81, which is incorrect again. Thank you for taking the time to answer. To draw a circle, use the following commands: import cv2 import numpy as np Fill_Circle = np.zeros((1000, 1000, 3)) cv2.circle(Fill_Circle, (500, 500), 450, (255, 255, 255), -1) cv2.imwrite(Path_to_Save, Fill_Circle)
[ "As I mentioned in this recent answer to a related question, OpenCV's perimeter estimate is not good enough to compute the circularity feature. OpenCV computes the perimeter by adding up all the distances between vertices of the polygon built from the edge pixels of the image. This length is typically larger than the actual perimeter of the actual object imaged. This blog post of mine describes the problem well, and provides a better way to estimate the perimeter of an object from a binary image.\nThis better method is implemented (among other places) in DIPlib, in the function dip.MeasurementTool.Measure(), as the feature \"Perimeter\". [Disclosure: I'm an author of DIPlib].\nThe feature \"Roundness\" implements what you refer to a circularity here (these feature names are used interchangeably in the literature). There is a different feature referred to as \"Circularity\" in DIPlib, which does not depend on the perimeter and typically is more precise if the shape is close to a circle.\nThis is how you would use that function:\nimport diplib as dip\nimport cv2\nimport numpy as np\n\nFill_Circle = np.zeros((1000, 1000, 3))\ncv2.circle(Fill_Circle, (500, 500), 450, (255, 255, 255), -1)\n\nlabels = dip.Label(Fill_Circle[:, :, 0] > 0)\nmsr = dip.MeasurementTool.Measure(labels, features=[\"Perimeter\", \"Size\", \"Roundness\", \"Circularity\"])\nprint(msr)\n\nCircularity = msr[1][\"Roundness\"][0]\n\nFor your circle, I see:\n\narea = 636121.0\nperimeter = 2829.27\nroundness = 0.9986187 (this is what you refer to as circularity)\ncircularity = 0.0005368701 (closer to 0 means more like a circle)\n\n" ]
[ 1 ]
[]
[]
[ "image_processing", "opencv", "python" ]
stackoverflow_0074580811_image_processing_opencv_python.txt
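The derivation in the question above can be sanity-checked numerically: for an ideal circle, 4·pi·Area / Perimeter^2 is exactly 1, so any deviation comes from how area and perimeter are estimated from pixels. A quick Python check, independent of OpenCV (the `circularity` helper is ours, not part of either library):

```python
import math

def circularity(area: float, perimeter: float) -> float:
    # 4*pi*A / P^2: equals 1 for a perfect circle, less than 1 otherwise
    return 4 * math.pi * area / perimeter ** 2

r = 450  # same radius as the cv2.circle example above
ideal = circularity(math.pi * r ** 2, 2 * math.pi * r)
print(round(ideal, 6))  # 1.0

# The asker's code computed the inverse ratio, P^2 / (4*pi*A); with
# cv2.arcLength overestimating the pixel perimeter, that inverse comes
# out above 1, consistent with the 1.11 observed.
```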
Q: How to submit a form in React Native and submit to a POST API I am completely new to React Native, so I don't know whether this is the correct method to submit a form. Here I am trying to save the values from all input fields in getdata(), but I am getting an undefined value. export default function Signupfor(props) { // const phoneInput = useRef < PhoneInput > null; const [text, setTextname] = useState(); function getdata() { console.log('dsd'); console.log(text); } const {userInfo, log} = props?.route?.params; console.log(log.name); return ( <View style={styles.prheight}> <View style={styles.form}> <Text style={styles.r}>One Last Step</Text> <TextInput style={styles.forminput} label="Name" value={userInfo.user.name} onChangeText={text => setTextname(text)} /> <TextInput style={styles.forminput} label="Email" value={userInfo.user.email} onChangeText={text => setTextemail(text)} /> <TextInput style={styles.forminput} label="Whatsapp Number" keyboardType="numeric" value={userInfo.user.number} onChangeText={text => setTextnumber(text)} // value={this.state.myNumber} maxLength={10} //setting limit of input /> </View> <View style={styles.buttonw}> <Button color="#7743DB" title="Lets Go" onPress={() => getdata()} /> </View> </View> ); } Here, name and email should not be editable; I want to pass the exact value to getdata(). A: You can use a library like https://react-hook-form.com to check an example with React Native on video. Or you can write it yourself; in the example below, any time you need access to the input values you can read them from text and number const UselessTextInput = () => { const [text, onChangeText] = useState("Useless Text"); const [number, onChangeNumber] = useState(null); return ( <SafeAreaView> <TextInput style={styles.input} onChangeText={onChangeText} value={text} /> <TextInput style={styles.input} onChangeText={onChangeNumber} value={number} placeholder="useless placeholder" keyboardType="numeric" /> </SafeAreaView> ); };
How to submit a form in React Native and submit to a POST API
I am completely new to React Native, so I don't know whether this is the correct method to submit a form. Here I am trying to save the values from all input fields in getdata(), but I am getting an undefined value. export default function Signupfor(props) { // const phoneInput = useRef < PhoneInput > null; const [text, setTextname] = useState(); function getdata() { console.log('dsd'); console.log(text); } const {userInfo, log} = props?.route?.params; console.log(log.name); return ( <View style={styles.prheight}> <View style={styles.form}> <Text style={styles.r}>One Last Step</Text> <TextInput style={styles.forminput} label="Name" value={userInfo.user.name} onChangeText={text => setTextname(text)} /> <TextInput style={styles.forminput} label="Email" value={userInfo.user.email} onChangeText={text => setTextemail(text)} /> <TextInput style={styles.forminput} label="Whatsapp Number" keyboardType="numeric" value={userInfo.user.number} onChangeText={text => setTextnumber(text)} // value={this.state.myNumber} maxLength={10} //setting limit of input /> </View> <View style={styles.buttonw}> <Button color="#7743DB" title="Lets Go" onPress={() => getdata()} /> </View> </View> ); } Here, name and email should not be editable; I want to pass the exact value to getdata().
[ "You can use a library like https://react-hook-form.com to check an example with react native on video.\nOr you can right it yourself, in the example below any time you need access to input values you can read it from text and number\nconst UselessTextInput = () => {\n const [text, onChangeText] = useState(\"Useless Text\");\n const [number, onChangeNumber] = useState(null);\n\n return (\n <SafeAreaView>\n <TextInput\n style={styles.input}\n onChangeText={onChangeText}\n value={text}\n />\n <TextInput\n style={styles.input}\n onChangeText={onChangeNumber}\n value={number}\n placeholder=\"useless placeholder\"\n keyboardType=\"numeric\"\n />\n </SafeAreaView>\n );\n};\n\n" ]
[ 1 ]
[]
[]
[ "react_native" ]
stackoverflow_0074664536_react_native.txt
Q: How to use SoX to mix multiple files? I know SoX command-line mixing uses -m. Now I want to call libsox through C/C++ code. How do I mix multiple (more than 3) mono files into one mono file through the SoX header file? I don't understand the source code very well. If you can give me some guidance, or an explanation of the principle behind its mixing implementation in C/C++ (not the command line), I would be very grateful. A: #include <sox.h> #include <string> #include <vector> // define the mixing mode static const sox_combine_mode_t mixing_mode = SOX_COMBINE_MIX; // define the list of input files std::vector<std::string> input_files = { "file1.wav", "file2.wav", "file3.wav" }; // define the output file std::string output_file = "output.wav"; int main() { // initialize the libsox library sox_init(); // create a sox_format_t struct for each input file std::vector<sox_format_t *> input_formats; for (const std::string &file : input_files) { input_formats.push_back(sox_open_read(file.c_str(), NULL, NULL, NULL)); } // create a sox_format_t struct for the output file sox_format_t *output_format = sox_open_write(output_file.c_str(), NULL, NULL, NULL, NULL, NULL); // mix the input files into the output file sox_combine_inputs(output_format, input_formats.data(), input_formats.size(), mixing_mode); // close the input and output files for (sox_format_t *format : input_formats) { sox_close(format); } sox_close(output_format); // quit the libsox library sox_quit(); return 0; } This uses the sox_combine_inputs function to mix multiple mono files into a single mono file. The sox_combine_inputs function supports various mixing modes, including SOX_COMBINE_MIX, SOX_COMBINE_CONCATENATE, and SOX_COMBINE_FIRST, among others. You can choose the mixing mode that best fits your requirements. Here, SOX_COMBINE_MIX was used.
How to use SoX to mix multiple files?
I know SoX command-line mixing uses -m. Now I want to call libsox through C/C++ code. How do I mix multiple (more than 3) mono files into one mono file through the SoX header file? I don't understand the source code very well. If you can give me some guidance, or an explanation of the principle behind its mixing implementation in C/C++ (not the command line), I would be very grateful.
[ "#include <sox.h>\n#include <string>\n#include <vector>\n\n// define the mixing mode\nstatic const sox_combine_mode_t mixing_mode = SOX_COMBINE_MIX;\n\n// define the list of input files\nstd::vector<std::string> input_files = { \"file1.wav\", \"file2.wav\", \"file3.wav\" };\n\n// define the output file\nstd::string output_file = \"output.wav\";\n\nint main() {\n // initialize the libsox library\n sox_init();\n\n // create a sox_format_t struct for each input file\n std::vector<sox_format_t *> input_formats;\n for (const std::string &file : input_files) {\n input_formats.push_back(sox_open_read(file.c_str(), NULL, NULL, NULL));\n }\n\n // create a sox_format_t struct for the output file\n sox_format_t *output_format = sox_open_write(output_file.c_str(), NULL, NULL, NULL, NULL, NULL);\n\n // mix the input files into the output file\n sox_combine_inputs(output_format, input_formats.data(), input_formats.size(), mixing_mode);\n\n // close the input and output files\n for (sox_format_t *format : input_formats) {\n sox_close(format);\n }\n sox_close(output_format);\n\n // quit the libsox library\n sox_quit();\n\n return 0;\n}\n\nsox_combine_inputs function to mix multiple mono files into a single mono file\nThe sox_combine_inputs function supports various mixing modes, including SOX_COMBINE_MIX, SOX_COMBINE_CONCATENATE, and SOX_COMBINE_FIRST, among others. You can choose the mixing mode that best fits your requirements. Here - SOX_COMBINE_MIX was used.\n" ]
[ 0 ]
[]
[]
[ "audio", "mixing", "sox" ]
stackoverflow_0073639092_audio_mixing_sox.txt
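Regarding the "principle explanation" the asker also requested: mixing N mono streams is, at its core, a sample-wise sum, usually scaled by the number of inputs (or clipped) so the result stays in range. That is the essence of a MIX combine mode. A language-neutral sketch in Python (the `mix` helper is hypothetical; lists of floats stand in for libsox's sample buffers):

```python
def mix(tracks, scale=True):
    """Mix equal-length mono tracks sample by sample.

    tracks: list of lists of samples in the range [-1.0, 1.0]
    scale:  divide by the track count so the mix can never clip
    """
    n = len(tracks)
    length = min(len(t) for t in tracks)
    out = []
    for i in range(length):
        s = sum(t[i] for t in tracks)
        if scale:
            s /= n
        # hard-clip as a safety net, as a mixer must when not scaling
        out.append(max(-1.0, min(1.0, s)))
    return out

mixed = mix([[0.5, -0.5], [0.5, 0.5], [0.5, 1.0]])
print([round(s, 4) for s in mixed])  # [0.5, 0.3333]
```

Whether to scale by 1/N (quieter but safe) or to sum and clip (louder but can distort) is the main design choice any mixer, libsox included, has to make.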
Q: `no cached resource available for offline mode` when publishing to Maven Central I've published 5 versions of my repository so far without any issues. With version 1.0.5 I'm getting an error: Execution failed for task ':publishMavenJavaPublicationToOSSRHRepository'. > Failed to publish publication 'mavenJava' to repository 'OSSRH' > No cached resource 'https://s01.oss.sonatype.org/service/local/staging/deploy/maven2/io/github/jspinak/brobot/1.0.5-SNAPSHOT/maven-metadata.xml' available for offline mode. The only help I've found online is to toggle the offline mode in Gradle (No Cached Version Gradle Plugin Available for offline mode), which then produces the following error when publishing: Could not GET 'https://s01.oss.sonatype.org/service/local/staging/deploy/maven2/io/github/jspinak/brobot/1.0.5-SNAPSHOT/maven-metadata.xml'. Received status code 400 from server: Bad Request Disable Gradle 'offline mode' and sync project I'm not sure if this is something I've done wrong, a Sonatype issue, a Gradle issue, an Intellij issue, or something else. I've also posted on the Sonatype message boards just in case. In the Gradle Toolbar, there is an option generateMetadataFileForMavenJavaPublication. Running this doesn't seem to change anything. This is an open source repository and you can see the build.gradle file at https://github.com/jspinak/brobot/blob/main/library/build.gradle. A: This error occurs when trying to publish a snapshot version to the staging repository, which is for release versions. A snapshot differs from a release version in that it can be changed. Release versions cannot be changed to give developers who are using it the security that the functionality of their code will not change due to changes in the dependencies. Snapshots can be used for testing or to provide easy access to the latest code while still in development. Any version name with "-SNAPSHOT" at the end is considered a snapshot. All other names are considered release versions. 
To publish a snapshot I would need the following lines of code in my build.gradle file: version = '1.0.5-SNAPSHOT' url = "https://s01.oss.sonatype.org/content/repositories/snapshots/" To publish a release version, I would need the following code: version = '1.0.5' url = "https://s01.oss.sonatype.org/service/local/staging/deploy/maven2/" A: It looks like you are encountering a common issue with publishing to Sonatype OSSRH. It seems that the error message you are seeing is due to a problem with the metadata file for your project. One potential solution is to manually generate the metadata file for your project. You can do this by running the generateMetadataFileForMavenJavaPublication Gradle task. This will create a maven-metadata.xml file in the build/publications/mavenJava/ directory. You can then try publishing your project again and see if that resolves the issue. You can also try disabling the Gradle offline mode and syncing your project. This may resolve the issue by allowing Gradle to fetch the necessary metadata from the remote repository. If neither of these solutions work, you may want to try cleaning your local repository and rebuilding your project. This can be done by running the clean and build tasks in Gradle.
`no cached resource available for offline mode` when publishing to Maven Central
I've published 5 versions of my repository so far without any issues. With version 1.0.5 I'm getting an error: Execution failed for task ':publishMavenJavaPublicationToOSSRHRepository'. > Failed to publish publication 'mavenJava' to repository 'OSSRH' > No cached resource 'https://s01.oss.sonatype.org/service/local/staging/deploy/maven2/io/github/jspinak/brobot/1.0.5-SNAPSHOT/maven-metadata.xml' available for offline mode. The only help I've found online is to toggle the offline mode in Gradle (No Cached Version Gradle Plugin Available for offline mode), which then produces the following error when publishing: Could not GET 'https://s01.oss.sonatype.org/service/local/staging/deploy/maven2/io/github/jspinak/brobot/1.0.5-SNAPSHOT/maven-metadata.xml'. Received status code 400 from server: Bad Request Disable Gradle 'offline mode' and sync project I'm not sure if this is something I've done wrong, a Sonatype issue, a Gradle issue, an Intellij issue, or something else. I've also posted on the Sonatype message boards just in case. In the Gradle Toolbar, there is an option generateMetadataFileForMavenJavaPublication. Running this doesn't seem to change anything. This is an open source repository and you can see the build.gradle file at https://github.com/jspinak/brobot/blob/main/library/build.gradle.
[ "This error occurs when trying to publish a snapshot version to the staging repository, which is for release versions. A snapshot differs from a release version in that it can be changed. Release versions cannot be changed to give developers who are using it the security that the functionality of their code will not change due to changes in the dependencies. Snapshots can be used for testing or to provide easy access to the latest code while still in development. Any version name with \"-SNAPSHOT\" at the end is considered a snapshot. All other names are considered release versions.\nTo publish a snapshot I would need the following lines of code in my build.gradle file:\nversion = '1.0.5-SNAPSHOT'\nurl = \"https://s01.oss.sonatype.org/content/repositories/snapshots/\"\nTo publish a release version, I would need the following code:\nversion = '1.0.5'\nurl = \"https://s01.oss.sonatype.org/service/local/staging/deploy/maven2/\"\n", "It looks like you are encountering a common issue with publishing to Sonatype OSSRH. It seems that the error message you are seeing is due to a problem with the metadata file for your project.\nOne potential solution is to manually generate the metadata file for your project. You can do this by running the generateMetadataFileForMavenJavaPublication Gradle task. This will create a maven-metadata.xml file in the build/publications/mavenJava/ directory. You can then try publishing your project again and see if that resolves the issue.\nYou can also try disabling the Gradle offline mode and syncing your project. This may resolve the issue by allowing Gradle to fetch the necessary metadata from the remote repository.\nIf neither of these solutions work, you may want to try cleaning your local repository and rebuilding your project. This can be done by running the clean and build tasks in Gradle.\n" ]
[ 0, 0 ]
[]
[]
[ "gradle", "intellij_idea", "maven_central", "sonatype" ]
stackoverflow_0074641137_gradle_intellij_idea_maven_central_sonatype.txt
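The rule in the answer above, that the "-SNAPSHOT" suffix alone determines which repository a version may be deployed to, is mechanical enough to sketch in a few lines (Python; the two URLs are the ones quoted in the answer, and `repo_for` is an illustrative helper, not part of any build tool):

```python
SNAPSHOT_REPO = "https://s01.oss.sonatype.org/content/repositories/snapshots/"
RELEASE_REPO = "https://s01.oss.sonatype.org/service/local/staging/deploy/maven2/"

def repo_for(version: str) -> str:
    # Maven convention: a version ending in -SNAPSHOT is mutable and goes to
    # the snapshot repository; anything else is an immutable release.
    return SNAPSHOT_REPO if version.endswith("-SNAPSHOT") else RELEASE_REPO

print(repo_for("1.0.5-SNAPSHOT") == SNAPSHOT_REPO)  # True
print(repo_for("1.0.5") == RELEASE_REPO)            # True
```

Publishing a -SNAPSHOT version to the staging (release) URL is exactly the mismatch that produced the 400 Bad Request in the question.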
Q: selenium-webdriver script isn't finding local file I am unable to get a nodejs script loading Chrome to load a local file. It works on 18.04 but not 22.04. Has there been some significant change that would affect local file loading syntax, or is there something wrong in my code? const { Builder } = require('selenium-webdriver') async function start() { const chrome = require('selenium-webdriver/chrome') const options = new chrome.Options() options.addArguments('--disable-dev-shm-usage') options.addArguments('--no-sandbox') const driver = new Builder() .forBrowser('chrome') .setChromeOptions(options) .build() await driver.get('file://' + __dirname + '/myfile.html') await driver.sleep(10000) const text = await driver.executeScript('return document.documentElement.innerText') console.log(text) driver.quit() } start() The result is: Your file couldn’t be accessed It may have been moved, edited or deleted. ERR_FILE_NOT_FOUND I can confirm that myfile.html is definitely present. Using console.log to show the file argument value shows it is the same for both older and newer Ubuntu. Changing the driver.get argument to a website, e.g. https://www.google.com/ correctly shows the webpage content in the output. The local file code fails as above using: Ubuntu 22.04.1 LTS Node v16.15.0 chromium-browser Chromium 108.0.5359.71 snap It works fine on: Ubuntu 18.04.6 LTS Node v16.15.0 Chromium 107.0.5304.87 Built on Ubuntu , running on Ubuntu 18.04 A: This seems to be because the default Ubuntu 22.04 Chrome package is a snap package that limits local file access to /home/ . Switching to the .deb distribution from Google solves the issue. The Chromedriver also needs to match. 
# Chrome browser - .deb package, not the standard OS Snap which limits access to only /home/ # See: https://askubuntu.com/questions/1184357/why-cant-chromium-suddenly-access-any-partition-except-for-home # See: https://www.ubuntuupdates.org/ppa/google_chrome?dist=stable wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list' apt-get update apt-get install google-chrome-stable # Chromedriver, matching version number # See: https://skolo.online/documents/webscrapping/#pre-requisites # See step 3 at: https://tecadmin.net/setup-selenium-chromedriver-on-ubuntu/ # See: https://sites.google.com/chromium.org/driver/ #!# NB This will become out of sync if apt-get updates google-chrome-stable google-chrome --version wget https://chromedriver.storage.googleapis.com/108.0.5359.71/chromedriver_linux64.zip unzip chromedriver_linux64.zip sudo mv chromedriver /usr/bin/chromedriver sudo chown root:root /usr/bin/chromedriver sudo chmod +x /usr/bin/chromedriver rm chromedriver_linux64.zip
selenium-webdriver script isn't finding local file
I am unable to get a nodejs script loading Chrome to load a local file. It works on 18.04 but not 22.04. Has there been some significant change that would affect local file loading syntax, or is there something wrong in my code? const { Builder } = require('selenium-webdriver') async function start() { const chrome = require('selenium-webdriver/chrome') const options = new chrome.Options() options.addArguments('--disable-dev-shm-usage') options.addArguments('--no-sandbox') const driver = new Builder() .forBrowser('chrome') .setChromeOptions(options) .build() await driver.get('file://' + __dirname + '/myfile.html') await driver.sleep(10000) const text = await driver.executeScript('return document.documentElement.innerText') console.log(text) driver.quit() } start() The result is: Your file couldn’t be accessed It may have been moved, edited or deleted. ERR_FILE_NOT_FOUND I can confirm that myfile.html is definitely present. Using console.log to show the file argument value shows it is the same for both older and newer Ubuntu. Changing the driver.get argument to a website, e.g. https://www.google.com/ correctly shows the webpage content in the output. The local file code fails as above using: Ubuntu 22.04.1 LTS Node v16.15.0 chromium-browser Chromium 108.0.5359.71 snap It works fine on: Ubuntu 18.04.6 LTS Node v16.15.0 Chromium 107.0.5304.87 Built on Ubuntu , running on Ubuntu 18.04
[ "This seems to be because the default Ubuntu 22.04 Chrome package is a snap package that limits local file access to /home/ .\nSwitching to the .deb distribution from Google solves the issue. The Chromedriver also needs to match.\n# Chrome browser - .deb package, not the standard OS Snap which limits access to only /home/\n# See: https://askubuntu.com/questions/1184357/why-cant-chromium-suddenly-access-any-partition-except-for-home\n# See: https://www.ubuntuupdates.org/ppa/google_chrome?dist=stable\nwget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -\nsh -c 'echo \"deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main\" >> /etc/apt/sources.list.d/google.list'\napt-get update\napt-get install google-chrome-stable\n\n# Chromedriver, matching version number\n# See: https://skolo.online/documents/webscrapping/#pre-requisites\n# See step 3 at: https://tecadmin.net/setup-selenium-chromedriver-on-ubuntu/\n# See: https://sites.google.com/chromium.org/driver/\n#!# NB This will become out of sync if apt-get updates google-chrome-stable\ngoogle-chrome --version\nwget https://chromedriver.storage.googleapis.com/108.0.5359.71/chromedriver_linux64.zip\nunzip chromedriver_linux64.zip\nsudo mv chromedriver /usr/bin/chromedriver\nsudo chown root:root /usr/bin/chromedriver\nsudo chmod +x /usr/bin/chromedriver\nrm chromedriver_linux64.zip\n\n" ]
[ 0 ]
[]
[]
[ "selenium", "selenium_webdriver" ]
stackoverflow_0074661065_selenium_selenium_webdriver.txt
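As the comment in the answer's script warns, the pinned ChromeDriver drifts out of sync if apt-get later upgrades google-chrome-stable. ChromeDriver is only guaranteed to work with a Chrome of the same major version, so a quick check like this hypothetical Python sketch can catch the drift (the version strings are examples):

```python
def major(version: str) -> int:
    # "108.0.5359.71" -> 108
    return int(version.split(".")[0])

def versions_match(chrome_version: str, driver_version: str) -> bool:
    # ChromeDriver is only guaranteed compatible with the same Chrome major version.
    return major(chrome_version) == major(driver_version)

print(versions_match("108.0.5359.71", "108.0.5359.71"))  # True
print(versions_match("109.0.5414.74", "108.0.5359.71"))  # False: reinstall the driver
```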
Q: Nx workspace, Tailwind is not working in the libraries I have a problem: I configured Tailwind for my Angular project in an Nx workspace. When I use Tailwind in libraries now, there is strange behavior. I think I know the problem already, but I have no solution right now. This is the tutorial I used to set up Tailwind. Initially all worked fine, but now the SCSS/Tailwind from the libraries is not applied. Normally it should work; in the tutorial they wrote this: In an Nx workspace, a regular library (non-buildable and non-publishable) is just a slice of an application that is only built as part of the build process of an application that consumes it. Because of that, as long as the application that consumes it has Tailwind CSS configured, the library code will be processed as expected even though the library itself doesn’t have a Tailwind CSS configuration. In fact, adding a tailwind.config.js file to the library won’t have any effect whatsoever (it’ll be ignored) because the library is never built on its own. So I don't know why it is not working. At the moment I have a bad workaround: I copy the HTML code from the library into the project, and after it has compiled, I delete it, and the Tailwind configuration gets applied to the library. So how can I make sure that Tailwind is applied to the libraries without copying the code? style.scss has the 3 imports: @tailwind base; @tailwind components; @tailwind utilities; tailwind.config.js: const { join } = require('path'); module.exports = { content: [ join(__dirname, '/src/**/!(*.stories|*.spec).{ts,html}'), ...createGlobPatternsForDependencies(__dirname), ], theme: { extend: {}, }, plugins: [], }; A: createGlobPatternsForDependencies can identify your app's dependencies and return the glob patterns for them. Maybe you should build again.
Nx workspace, Tailwind is not working in the libraries
I got a problem: I configured Tailwind for my Angular project in an Nx workspace. When I use Tailwind in libraries now, the behavior is strange. I kind of know the problem already, but I have no solution right now. This is the Tutorial I used to set up Tailwind. Initially all worked fine, but now the scss / Tailwind from the libraries is not applied. Normally it should work; in the tutorial they wrote this: In an Nx workspace, a regular library (non-buildable and non-publishable) is just a slice of an application that is only built as part of the build process of an application that consumes it. Because of that, as long as the application that consumes it has Tailwind CSS configured, the library code will be processed as expected even though the library itself doesn’t have a Tailwind CSS configuration. In fact, adding a tailwind.config.js file to the library won’t have any effect whatsoever (it’ll be ignored) because the library is never built on its own. So I don't know why it is not working. At the moment I have a bad workaround: I copy the html code from the library into the project and, after it has compiled, I delete it again and the Tailwind configuration gets applied to the library. So how can I make sure that Tailwind is applied to the libraries without copying the code? style.scss has the 3 imports: @tailwind base; @tailwind components; @tailwind utilities; tailwind.config.js: const { join } = require('path'); module.exports = { content: [ join(__dirname, '/src/**/!(*.stories|*.spec).{ts,html}'), ...createGlobPatternsForDependencies(__dirname), ], theme: { extend: {}, }, plugins: [], };
[ "createGlobPatternsForDependencies identifies your app's dependencies and returns the glob patterns for them. Maybe you need to build again.\n" ]
[ 0 ]
[]
[]
[ "angular", "nomachine_nx", "tailwind_css" ]
stackoverflow_0072360817_angular_nomachine_nx_tailwind_css.txt
Q: two-way UART communication between STM32 and ESP32 works :) But I would like to understand more what happens Right now, I am in the process of learning STM32 and have some struggles with C (I am beginner level - knowing some basics). I was trying for about 2 days to figure out how to communicate over UART, and somehow, with thousands of forum questions and answers, I made it. The problem is I do not understand some parts of the code: specifically what "(uint8_t*)&x" and "&x" mean. I have some idea, but I am 0% sure. Below is the STM32 UART part of the code (without the UART init): while (1) { /* USER CODE END WHILE */ if (__HAL_UART_GET_FLAG(&huart1, UART_FLAG_RXNE) == SET) { HAL_UART_Receive (&huart1,&x,1, HAL_MAX_DELAY); x++; for (i=0; i<x; i++){ HAL_GPIO_WritePin(GPIOA, GPIO_PIN_1, GPIO_PIN_SET); HAL_Delay(100); HAL_GPIO_WritePin(GPIOA, GPIO_PIN_1, GPIO_PIN_RESET); HAL_Delay(100); } HAL_Delay(1000); HAL_UART_Transmit(&huart1,(uint8_t*)&x,1,HAL_MAX_DELAY); } /* USER CODE BEGIN 3 */ } /* USER CODE END 3 */ I saw somewhere someone mention pointers. As I said, I am not experienced enough with C, so it would be nice if someone could recommend where I could look to understand it more. Thank you! Since I saw a lot of people are searching for this kind of two-way communication, I will put both the STM32 and ESP32 code in the "What did you try and what were you expecting?" section so somebody can use it in their projects. STM32 code: /* USER CODE BEGIN Header */ /** ****************************************************************************** * @file : main.c * @brief : Main program body ****************************************************************************** * @attention * * Copyright (c) 2022 STMicroelectronics. * All rights reserved. * * This software is licensed under terms that can be found in the LICENSE file * in the root directory of this software component. * If no LICENSE file comes with this software, it is provided AS-IS. 
* ****************************************************************************** */ /* USER CODE END Header */ /* Includes ------------------------------------------------------------------*/ #include "main.h" /* Private includes ----------------------------------------------------------*/ /* USER CODE BEGIN Includes */ /* USER CODE END Includes */ /* Private typedef -----------------------------------------------------------*/ /* USER CODE BEGIN PTD */ /* USER CODE END PTD */ /* Private define ------------------------------------------------------------*/ /* USER CODE BEGIN PD */ /* USER CODE END PD */ /* Private macro -------------------------------------------------------------*/ /* USER CODE BEGIN PM */ /* USER CODE END PM */ /* Private variables ---------------------------------------------------------*/ UART_HandleTypeDef huart1; /* USER CODE BEGIN PV */ /* USER CODE END PV */ /* Private function prototypes -----------------------------------------------*/ void SystemClock_Config(void); static void MX_GPIO_Init(void); static void MX_USART1_UART_Init(void); /* USER CODE BEGIN PFP */ /* USER CODE END PFP */ /* Private user code ---------------------------------------------------------*/ /* USER CODE BEGIN 0 */ /* USER CODE END 0 */ /** * @brief The application entry point. * @retval int */ int main(void) { /* USER CODE BEGIN 1 */ /* USER CODE END 1 */ /* MCU Configuration--------------------------------------------------------*/ /* Reset of all peripherals, Initializes the Flash interface and the Systick. 
*/ HAL_Init(); /* USER CODE BEGIN Init */ int i; char x = 0; //char y = 5; /* USER CODE END Init */ /* Configure the system clock */ SystemClock_Config(); /* USER CODE BEGIN SysInit */ /* USER CODE END SysInit */ /* Initialize all configured peripherals */ MX_GPIO_Init(); MX_USART1_UART_Init(); /* USER CODE BEGIN 2 */ for(i=0;i<5;i++){ HAL_GPIO_WritePin(GPIOA, GPIO_PIN_1, GPIO_PIN_SET); HAL_Delay(300); HAL_GPIO_WritePin(GPIOA, GPIO_PIN_1, GPIO_PIN_RESET); HAL_Delay(300); } HAL_Delay(3000); /* USER CODE END 2 */ /* Infinite loop */ /* USER CODE BEGIN WHILE */ while (1) { /* USER CODE END WHILE */ if (__HAL_UART_GET_FLAG(&huart1, UART_FLAG_RXNE) == SET) { HAL_UART_Receive (&huart1,&x,1, HAL_MAX_DELAY); x++; for (i=0; i<x; i++){ HAL_GPIO_WritePin(GPIOA, GPIO_PIN_1, GPIO_PIN_SET); HAL_Delay(100); HAL_GPIO_WritePin(GPIOA, GPIO_PIN_1, GPIO_PIN_RESET); HAL_Delay(100); } HAL_Delay(1000); HAL_UART_Transmit(&huart1,(uint8_t*)&x,1,HAL_MAX_DELAY); } /* USER CODE BEGIN 3 */ } /* USER CODE END 3 */ } /** * @brief System Clock Configuration * @retval None */ void SystemClock_Config(void) { RCC_OscInitTypeDef RCC_OscInitStruct = {0}; RCC_ClkInitTypeDef RCC_ClkInitStruct = {0}; /** Initializes the RCC Oscillators according to the specified parameters * in the RCC_OscInitTypeDef structure. 
*/ RCC_OscInitStruct.OscillatorType = RCC_OSCILLATORTYPE_HSI; RCC_OscInitStruct.HSIState = RCC_HSI_ON; RCC_OscInitStruct.HSICalibrationValue = RCC_HSICALIBRATION_DEFAULT; RCC_OscInitStruct.PLL.PLLState = RCC_PLL_ON; RCC_OscInitStruct.PLL.PLLSource = RCC_PLLSOURCE_HSI_DIV2; RCC_OscInitStruct.PLL.PLLMUL = RCC_PLL_MUL4; if (HAL_RCC_OscConfig(&RCC_OscInitStruct) != HAL_OK) { Error_Handler(); } /** Initializes the CPU, AHB and APB buses clocks */ RCC_ClkInitStruct.ClockType = RCC_CLOCKTYPE_HCLK|RCC_CLOCKTYPE_SYSCLK |RCC_CLOCKTYPE_PCLK1|RCC_CLOCKTYPE_PCLK2; RCC_ClkInitStruct.SYSCLKSource = RCC_SYSCLKSOURCE_PLLCLK; RCC_ClkInitStruct.AHBCLKDivider = RCC_SYSCLK_DIV1; RCC_ClkInitStruct.APB1CLKDivider = RCC_HCLK_DIV1; RCC_ClkInitStruct.APB2CLKDivider = RCC_HCLK_DIV1; if (HAL_RCC_ClockConfig(&RCC_ClkInitStruct, FLASH_LATENCY_0) != HAL_OK) { Error_Handler(); } } /** * @brief USART1 Initialization Function * @param None * @retval None */ static void MX_USART1_UART_Init(void) { /* USER CODE BEGIN USART1_Init 0 */ /* USER CODE END USART1_Init 0 */ /* USER CODE BEGIN USART1_Init 1 */ /* USER CODE END USART1_Init 1 */ huart1.Instance = USART1; huart1.Init.BaudRate = 9600; huart1.Init.WordLength = UART_WORDLENGTH_8B; huart1.Init.StopBits = UART_STOPBITS_1; huart1.Init.Parity = UART_PARITY_NONE; huart1.Init.Mode = UART_MODE_TX_RX; huart1.Init.HwFlowCtl = UART_HWCONTROL_NONE; huart1.Init.OverSampling = UART_OVERSAMPLING_16; if (HAL_UART_Init(&huart1) != HAL_OK) { Error_Handler(); } /* USER CODE BEGIN USART1_Init 2 */ /* USER CODE END USART1_Init 2 */ } /** * @brief GPIO Initialization Function * @param None * @retval None */ static void MX_GPIO_Init(void) { GPIO_InitTypeDef GPIO_InitStruct = {0}; /* GPIO Ports Clock Enable */ __HAL_RCC_GPIOA_CLK_ENABLE(); /*Configure GPIO pin Output Level */ HAL_GPIO_WritePin(GPIOA, GPIO_PIN_1, GPIO_PIN_RESET); /*Configure GPIO pin : PA1 */ GPIO_InitStruct.Pin = GPIO_PIN_1; GPIO_InitStruct.Mode = GPIO_MODE_OUTPUT_PP; GPIO_InitStruct.Pull = GPIO_NOPULL; 
GPIO_InitStruct.Speed = GPIO_SPEED_FREQ_LOW; HAL_GPIO_Init(GPIOA, &GPIO_InitStruct); } /* USER CODE BEGIN 4 */ /* USER CODE END 4 */ /** * @brief This function is executed in case of error occurrence. * @retval None */ void Error_Handler(void) { /* USER CODE BEGIN Error_Handler_Debug */ /* User can add his own implementation to report the HAL error return state */ __disable_irq(); while (1) { } /* USER CODE END Error_Handler_Debug */ } #ifdef USE_FULL_ASSERT /** * @brief Reports the name of the source file and the source line number * where the assert_param error has occurred. * @param file: pointer to the source file name * @param line: assert_param error line source number * @retval None */ void assert_failed(uint8_t *file, uint32_t line) { /* USER CODE BEGIN 6 */ /* User can add his own implementation to report the file name and line number, ex: printf("Wrong parameters value: file %s on line %d\r\n", file, line) */ /* USER CODE END 6 */ } #endif /* USE_FULL_ASSERT */ ESP32 code (MicroPython): from machine import UART from time import sleep x = b'\x01' uart1 = UART(1, baudrate=9600, tx=19, rx=18) while 1: uart1.write(x) while uart1.any()==0: pass x = uart1.read(1) print(x) print(int.from_bytes(x, "big")) sleep(0.5) A: “&x” is taking the address of “x”. It is the equivalent of making a pointer to “x”. Once you have that pointer to “x” it is a pointer to x’s type. In this code that type is char. Presumably your function HAL_UART_Transmit is looking for a pointer to a variable of type uint8_t, so your code uses “(uint8_t*)&x” to cast the pointer to type char to a pointer of type uint8_t.
two-way UART communication between STM32 and ESP32 works :) But I would like to understand more what happens
Right now, I am in the process of learning STM32 and have some struggles with C (I am beginner level - knowing some basics). I was trying for about 2 days to figure out how to communicate over UART, and somehow, with thousands of forum questions and answers, I made it. The problem is I do not understand some parts of the code: specifically what "(uint8_t*)&x" and "&x" mean. I have some idea, but I am 0% sure. Below is the STM32 UART part of the code (without the UART init): while (1) { /* USER CODE END WHILE */ if (__HAL_UART_GET_FLAG(&huart1, UART_FLAG_RXNE) == SET) { HAL_UART_Receive (&huart1,&x,1, HAL_MAX_DELAY); x++; for (i=0; i<x; i++){ HAL_GPIO_WritePin(GPIOA, GPIO_PIN_1, GPIO_PIN_SET); HAL_Delay(100); HAL_GPIO_WritePin(GPIOA, GPIO_PIN_1, GPIO_PIN_RESET); HAL_Delay(100); } HAL_Delay(1000); HAL_UART_Transmit(&huart1,(uint8_t*)&x,1,HAL_MAX_DELAY); } /* USER CODE BEGIN 3 */ } /* USER CODE END 3 */ I saw somewhere someone mention pointers. As I said, I am not experienced enough with C, so it would be nice if someone could recommend where I could look to understand it more. Thank you! Since I saw a lot of people are searching for this kind of two-way communication, I will put both the STM32 and ESP32 code in the "What did you try and what were you expecting?" section so somebody can use it in their projects. STM32 code: /* USER CODE BEGIN Header */ /** ****************************************************************************** * @file : main.c * @brief : Main program body ****************************************************************************** * @attention * * Copyright (c) 2022 STMicroelectronics. * All rights reserved. * * This software is licensed under terms that can be found in the LICENSE file * in the root directory of this software component. * If no LICENSE file comes with this software, it is provided AS-IS. 
* ****************************************************************************** */ /* USER CODE END Header */ /* Includes ------------------------------------------------------------------*/ #include "main.h" /* Private includes ----------------------------------------------------------*/ /* USER CODE BEGIN Includes */ /* USER CODE END Includes */ /* Private typedef -----------------------------------------------------------*/ /* USER CODE BEGIN PTD */ /* USER CODE END PTD */ /* Private define ------------------------------------------------------------*/ /* USER CODE BEGIN PD */ /* USER CODE END PD */ /* Private macro -------------------------------------------------------------*/ /* USER CODE BEGIN PM */ /* USER CODE END PM */ /* Private variables ---------------------------------------------------------*/ UART_HandleTypeDef huart1; /* USER CODE BEGIN PV */ /* USER CODE END PV */ /* Private function prototypes -----------------------------------------------*/ void SystemClock_Config(void); static void MX_GPIO_Init(void); static void MX_USART1_UART_Init(void); /* USER CODE BEGIN PFP */ /* USER CODE END PFP */ /* Private user code ---------------------------------------------------------*/ /* USER CODE BEGIN 0 */ /* USER CODE END 0 */ /** * @brief The application entry point. * @retval int */ int main(void) { /* USER CODE BEGIN 1 */ /* USER CODE END 1 */ /* MCU Configuration--------------------------------------------------------*/ /* Reset of all peripherals, Initializes the Flash interface and the Systick. 
*/ HAL_Init(); /* USER CODE BEGIN Init */ int i; char x = 0; //char y = 5; /* USER CODE END Init */ /* Configure the system clock */ SystemClock_Config(); /* USER CODE BEGIN SysInit */ /* USER CODE END SysInit */ /* Initialize all configured peripherals */ MX_GPIO_Init(); MX_USART1_UART_Init(); /* USER CODE BEGIN 2 */ for(i=0;i<5;i++){ HAL_GPIO_WritePin(GPIOA, GPIO_PIN_1, GPIO_PIN_SET); HAL_Delay(300); HAL_GPIO_WritePin(GPIOA, GPIO_PIN_1, GPIO_PIN_RESET); HAL_Delay(300); } HAL_Delay(3000); /* USER CODE END 2 */ /* Infinite loop */ /* USER CODE BEGIN WHILE */ while (1) { /* USER CODE END WHILE */ if (__HAL_UART_GET_FLAG(&huart1, UART_FLAG_RXNE) == SET) { HAL_UART_Receive (&huart1,&x,1, HAL_MAX_DELAY); x++; for (i=0; i<x; i++){ HAL_GPIO_WritePin(GPIOA, GPIO_PIN_1, GPIO_PIN_SET); HAL_Delay(100); HAL_GPIO_WritePin(GPIOA, GPIO_PIN_1, GPIO_PIN_RESET); HAL_Delay(100); } HAL_Delay(1000); HAL_UART_Transmit(&huart1,(uint8_t*)&x,1,HAL_MAX_DELAY); } /* USER CODE BEGIN 3 */ } /* USER CODE END 3 */ } /** * @brief System Clock Configuration * @retval None */ void SystemClock_Config(void) { RCC_OscInitTypeDef RCC_OscInitStruct = {0}; RCC_ClkInitTypeDef RCC_ClkInitStruct = {0}; /** Initializes the RCC Oscillators according to the specified parameters * in the RCC_OscInitTypeDef structure. 
*/ RCC_OscInitStruct.OscillatorType = RCC_OSCILLATORTYPE_HSI; RCC_OscInitStruct.HSIState = RCC_HSI_ON; RCC_OscInitStruct.HSICalibrationValue = RCC_HSICALIBRATION_DEFAULT; RCC_OscInitStruct.PLL.PLLState = RCC_PLL_ON; RCC_OscInitStruct.PLL.PLLSource = RCC_PLLSOURCE_HSI_DIV2; RCC_OscInitStruct.PLL.PLLMUL = RCC_PLL_MUL4; if (HAL_RCC_OscConfig(&RCC_OscInitStruct) != HAL_OK) { Error_Handler(); } /** Initializes the CPU, AHB and APB buses clocks */ RCC_ClkInitStruct.ClockType = RCC_CLOCKTYPE_HCLK|RCC_CLOCKTYPE_SYSCLK |RCC_CLOCKTYPE_PCLK1|RCC_CLOCKTYPE_PCLK2; RCC_ClkInitStruct.SYSCLKSource = RCC_SYSCLKSOURCE_PLLCLK; RCC_ClkInitStruct.AHBCLKDivider = RCC_SYSCLK_DIV1; RCC_ClkInitStruct.APB1CLKDivider = RCC_HCLK_DIV1; RCC_ClkInitStruct.APB2CLKDivider = RCC_HCLK_DIV1; if (HAL_RCC_ClockConfig(&RCC_ClkInitStruct, FLASH_LATENCY_0) != HAL_OK) { Error_Handler(); } } /** * @brief USART1 Initialization Function * @param None * @retval None */ static void MX_USART1_UART_Init(void) { /* USER CODE BEGIN USART1_Init 0 */ /* USER CODE END USART1_Init 0 */ /* USER CODE BEGIN USART1_Init 1 */ /* USER CODE END USART1_Init 1 */ huart1.Instance = USART1; huart1.Init.BaudRate = 9600; huart1.Init.WordLength = UART_WORDLENGTH_8B; huart1.Init.StopBits = UART_STOPBITS_1; huart1.Init.Parity = UART_PARITY_NONE; huart1.Init.Mode = UART_MODE_TX_RX; huart1.Init.HwFlowCtl = UART_HWCONTROL_NONE; huart1.Init.OverSampling = UART_OVERSAMPLING_16; if (HAL_UART_Init(&huart1) != HAL_OK) { Error_Handler(); } /* USER CODE BEGIN USART1_Init 2 */ /* USER CODE END USART1_Init 2 */ } /** * @brief GPIO Initialization Function * @param None * @retval None */ static void MX_GPIO_Init(void) { GPIO_InitTypeDef GPIO_InitStruct = {0}; /* GPIO Ports Clock Enable */ __HAL_RCC_GPIOA_CLK_ENABLE(); /*Configure GPIO pin Output Level */ HAL_GPIO_WritePin(GPIOA, GPIO_PIN_1, GPIO_PIN_RESET); /*Configure GPIO pin : PA1 */ GPIO_InitStruct.Pin = GPIO_PIN_1; GPIO_InitStruct.Mode = GPIO_MODE_OUTPUT_PP; GPIO_InitStruct.Pull = GPIO_NOPULL; 
GPIO_InitStruct.Speed = GPIO_SPEED_FREQ_LOW; HAL_GPIO_Init(GPIOA, &GPIO_InitStruct); } /* USER CODE BEGIN 4 */ /* USER CODE END 4 */ /** * @brief This function is executed in case of error occurrence. * @retval None */ void Error_Handler(void) { /* USER CODE BEGIN Error_Handler_Debug */ /* User can add his own implementation to report the HAL error return state */ __disable_irq(); while (1) { } /* USER CODE END Error_Handler_Debug */ } #ifdef USE_FULL_ASSERT /** * @brief Reports the name of the source file and the source line number * where the assert_param error has occurred. * @param file: pointer to the source file name * @param line: assert_param error line source number * @retval None */ void assert_failed(uint8_t *file, uint32_t line) { /* USER CODE BEGIN 6 */ /* User can add his own implementation to report the file name and line number, ex: printf("Wrong parameters value: file %s on line %d\r\n", file, line) */ /* USER CODE END 6 */ } #endif /* USE_FULL_ASSERT */ ESP32 code (MicroPython): from machine import UART from time import sleep x = b'\x01' uart1 = UART(1, baudrate=9600, tx=19, rx=18) while 1: uart1.write(x) while uart1.any()==0: pass x = uart1.read(1) print(x) print(int.from_bytes(x, "big")) sleep(0.5)
[ "“&x” is taking the address of “x”. It is the equivalent of making a pointer to “x”.\nOnce you have that pointer to “x” it is a pointer to x’s type. In this code that type is char. Presumably your function HAL_UART_Transmit is looking for a pointer to a variable of type uint8_t, so your code uses “(uint8_t*)&x” to cast the pointer to type char to a pointer of type uint8_t.\n" ]
[ 1 ]
[]
[]
[ "c", "stm", "stm32" ]
stackoverflow_0074677676_c_stm_stm32.txt
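The pointer answer above can be demonstrated outside the microcontroller entirely. The sketch below uses Python's ctypes module to mirror what `&x` and the `(uint8_t*)&x` cast do in the C code: both expressions describe the same single byte of memory, just viewed through different pointer types. The variable names here are illustrative only — nothing in this block comes from the STM32 HAL.

```python
import ctypes

# `char x` in the C code: a single byte of storage
x = ctypes.c_char(b'\x05')

# `&x` in C: take the address of x (a char*)
p_char = ctypes.pointer(x)

# `(uint8_t*)&x` in C: reinterpret the same address as a pointer to uint8_t
p_u8 = ctypes.cast(p_char, ctypes.POINTER(ctypes.c_uint8))

# Both pointers refer to the very same byte in memory
assert ctypes.addressof(p_char.contents) == ctypes.addressof(p_u8.contents)

print(p_u8[0])   # -> 5: the byte, read through the uint8_t* view
p_u8[0] += 1     # writing through the pointer changes x itself
print(x.value)   # -> b'\x06'
```

Reading or writing through `p_u8` affects `x` directly, which is why `HAL_UART_Transmit(&huart1, (uint8_t*)&x, 1, ...)` sends the current value of `x`: the function receives only the address of the byte, not a copy of it.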
Q: WP_Query Multiple Taxonomies Filter I have several taxonomies registered to a custom post type. I am trying to give users the option to filter posts within this custom post type via a combination of terms within these taxonomies. Problem I'm able to get the results I want if the user selects at least one term for each taxonomy, but if only one term is selected OVERALL (i.e. nothing is selected for other taxonomies), then the result is blank. Concrete Example of Problem My custom post type is "product". I have created several different taxonomies for this post type and different terms within each taxonomy, like the following: Product Types (Taxonomy 1) Postcards Art Prints Greeting Cards Themes (Taxonomy 2) Inspirational Sweet Funny Etc. If the user chooses "Postcards" and "Inspirational," then the query is able to return products that have both "Postcards" and "Inspirational" selected. However, if the user checks "Postcards" only, then the result is blank. What I want is for the query to return all posts with "Postcards" checked. 
Here is what I have tried: First I create the filters by outputting them as checkboxes on the page <form method="GET"> <h3>Product Types</h3> <?php $terms = get_terms([ 'taxonomy' => 'product_types', 'hide_empty' => false ]); foreach ($terms as $term) : ?> <label onchange="this.form.submit()"> <input type="checkbox" name="collections[]" value="<?php echo $term->slug; ?>" <?php checked( (isset($_GET['collections']) && in_array($term->slug, $_GET['collections'])) ) ?> /> <?php echo $term->name; ?> </label> <?php endforeach; ?> <h3>Themes</h3> <?php $terms = get_terms([ 'taxonomy' => 'themes', 'hide_empty' => false, ]); foreach ($terms as $term) : ?> <label onchange="this.form.submit()"> <input type="checkbox" name="collections[]" value="<?php echo $term->slug; ?>" <?php checked( (isset($_GET['collections']) && in_array($term->slug, $_GET['collections'])) ) ?> /> <?php echo $term->name; ?> </label> <?php endforeach; ?> </form> Then, I use tax_query to look for products that match the user's selections. <?php $collections_val = $_GET['collections']; $args = [ 'post_type' => 'product', 'posts_per_page' => -1, ]; // Append our tax-query if we have terms. Make sure it is a valid string or array $terms = $collections_val; if ( $terms ) { $args['tax_query'] = [ 'relation' => 'AND', [ 'taxonomy' => 'product_types', 'field' => 'slug', 'terms' => $terms, ], [ 'taxonomy' => 'themes', 'field' => 'slug', 'terms' => $terms, ], ]; } $product_query = new WP_Query( $args ); ?> <?php if ( $product_query->have_posts() ) : while ( $product_query->have_posts() ): $product_query->the_post(); echo the_title(); endwhile; else : echo 'No results'; endif; ?> I know this is what my code is telling WP to do b/c I'm using 'relation' => 'AND'. But for my purposes I need WP to return results even if only one term under one taxonomy is selected. If anyone can point me in the right direction, I would really appreciate it. Did a lot of research already but couldn't find my way. Thanks! 
A: Figured it out myself. The key was that I needed to use 'OR' for the relation if the user only ticks checkbox(es) within one taxonomy and 'AND' for all the other scenarios. Got the idea from this answer. If you have more than 2 taxonomies, though, things will get a little more complicated as you have to account for ALL combinations of relationships. You'll also need to query only the taxonomies with ticked checkboxes. For example, you need to assign a different relation value depending on what's been selected: Tax 1 + Tax 2 relation should be ('AND') only query tax 1 & tax 2 Tax 1 + Tax 3 relation should be ('AND') only query tax 1 & tax 3 Tax 2 + Tax 3 relation should be ('AND') only query tax 2 & tax 3 Tax 1 + Tax 2 + Tax 3 relation should be ('AND') query all 3 taxonomies Tax 1 only or Tax 2 only or Tax 3 only relation should be ('OR') query the taxonomy with checked checkboxes only (or just query all 3 since it's an 'OR' relationship. It won't affect the front-end result. It might affect performance if you have a lot of posts, obviously.) I can post my final code if anyone wants it.
WP_Query Multiple Taxonomies Filter
I have several taxonomies registered to a custom post type. I am trying to give users the option to filter posts within this custom post type via a combination of terms within these taxonomies. Problem I'm able to get the results I want if the user selects at least one term for each taxonomy, but if only one term is selected OVERALL (i.e. nothing is selected for other taxonomies), then the result is blank. Concrete Example of Problem My custom post type is "product". I have created several different taxonomies for this post type and different terms within each taxonomy, like the following: Product Types (Taxonomy 1) Postcards Art Prints Greeting Cards Themes (Taxonomy 2) Inspirational Sweet Funny Etc. If the user chooses "Postcards" and "Inspirational," then the query is able to return products that have both "Postcards" and "Inspirational" selected. However, if the user checks "Postcards" only, then the result is blank. What I want is for the query to return all posts with "Postcards" checked. 
Here is what I have tried: First I create the filters by outputting them as checkboxes on the page <form method="GET"> <h3>Product Types</h3> <?php $terms = get_terms([ 'taxonomy' => 'product_types', 'hide_empty' => false ]); foreach ($terms as $term) : ?> <label onchange="this.form.submit()"> <input type="checkbox" name="collections[]" value="<?php echo $term->slug; ?>" <?php checked( (isset($_GET['collections']) && in_array($term->slug, $_GET['collections'])) ) ?> /> <?php echo $term->name; ?> </label> <?php endforeach; ?> <h3>Themes</h3> <?php $terms = get_terms([ 'taxonomy' => 'themes', 'hide_empty' => false, ]); foreach ($terms as $term) : ?> <label onchange="this.form.submit()"> <input type="checkbox" name="collections[]" value="<?php echo $term->slug; ?>" <?php checked( (isset($_GET['collections']) && in_array($term->slug, $_GET['collections'])) ) ?> /> <?php echo $term->name; ?> </label> <?php endforeach; ?> </form> Then, I use tax_query to look for products that match the user's selections. <?php $collections_val = $_GET['collections']; $args = [ 'post_type' => 'product', 'posts_per_page' => -1, ]; // Append our tax-query if we have terms. Make sure it is a valid string or array $terms = $collections_val; if ( $terms ) { $args['tax_query'] = [ 'relation' => 'AND', [ 'taxonomy' => 'product_types', 'field' => 'slug', 'terms' => $terms, ], [ 'taxonomy' => 'themes', 'field' => 'slug', 'terms' => $terms, ], ]; } $product_query = new WP_Query( $args ); ?> <?php if ( $product_query->have_posts() ) : while ( $product_query->have_posts() ): $product_query->the_post(); echo the_title(); endwhile; else : echo 'No results'; endif; ?> I know this is what my code is telling WP to do b/c I'm using 'relation' => 'AND'. But for my purposes I need WP to return results even if only one term under one taxonomy is selected. If anyone can point me in the right direction, I would really appreciate it. Did a lot of research already but couldn't find my way. Thanks!
[ "Figured it out myself. The key was that I needed to use 'OR' for the relation if the user only ticks checkbox(es) within one taxonomy and 'AND' for all the other scenarios.\nGot the idea from this answer.\nIf you have more than 2 taxonomies, though, things will get a little more complicated as you have to account for ALL combinations of relationships. You'll also need to query only the taxonomies with ticked checkboxes.\nFor example, you need to assign a different relation value depending on what's been selected:\nTax 1 + Tax 2\nrelation should be ('AND')\nonly query tax 1 & tax 2\nTax 1 + Tax 3\nrelation should be ('AND')\nonly query tax 1 & tax 3\nTax 2 + Tax 3\nrelation should be ('AND')\nonly query tax 2 & tax 3\nTax 1 + Tax 2 + Tax 3\nrelation should be ('AND')\nquery all 3 taxonomies\nTax 1 only or Tax 2 only or Tax 3 only\nrelation should be ('OR')\nquery the taxonomy with checked checkboxes only (or just query all 3 since it's an 'OR' relationship. It won't affect the front-end result. It might affect performance if you have a lot of posts, obviously.)\nI can post my final code if anyone wants it.\n" ]
[ 0 ]
[]
[]
[ "filtering", "wordpress" ]
stackoverflow_0074671613_filtering_wordpress.txt
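The decision table in the answer above reduces to a simple rule: group the submitted slugs by taxonomy, drop taxonomies with no ticked boxes, and use 'AND' whenever more than one taxonomy remains, 'OR' otherwise. Here is a short Python sketch of that selection logic; the taxonomy names and slugs are the question's own examples, and a real implementation would emit the equivalent PHP tax_query array rather than a dict.

```python
def build_tax_query(selected, taxonomy_terms):
    """Group ticked slugs by taxonomy and pick the AND/OR relation.

    selected       -- slugs the user ticked (the $_GET['collections'] values)
    taxonomy_terms -- mapping of taxonomy name -> all slugs registered under it
    """
    # Keep only taxonomies that actually have at least one ticked slug
    groups = {
        tax: [s for s in selected if s in slugs]
        for tax, slugs in taxonomy_terms.items()
    }
    groups = {tax: terms for tax, terms in groups.items() if terms}

    # One taxonomy involved -> OR; two or more -> AND (the answer's rule)
    relation = "AND" if len(groups) > 1 else "OR"

    clauses = [
        {"taxonomy": tax, "field": "slug", "terms": terms}
        for tax, terms in groups.items()
    ]
    return {"relation": relation, "clauses": clauses}


taxonomies = {
    "product_types": ["postcards", "art-prints", "greeting-cards"],
    "themes": ["inspirational", "sweet", "funny"],
}

# Only "postcards" ticked -> single taxonomy, OR relation
print(build_tax_query(["postcards"], taxonomies)["relation"])  # -> OR
# "postcards" + "inspirational" -> two taxonomies, AND relation
print(build_tax_query(["postcards", "inspirational"], taxonomies)["relation"])  # -> AND
```

With this rule, ticking only "Postcards" produces a single-clause 'OR' query that returns every postcard, while ticking terms from two taxonomies intersects them with 'AND' — matching the behavior the answer describes.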
Q: Some .so-files missing in merged_native_libs directory when compiling Android app When I compile my Android project in Android Studio, some (but not all) .so-files are missing in the merged_native_libs directory. Only .so-files that are a dependency of at least one .so-file are affected. Can somebody help? A: You can try to check your app's build.gradle file and make sure that the missing .so files are listed in the defaultConfig section under ndk.abiFilters. This will ensure that the .so files are included in the build process. You can also try cleaning and rebuilding the project to see if that resolves the issue. To clean the project, go to Build > Clean Project in the menu bar. To rebuild the project, go to Build > Rebuild Project in the menu bar. If the issue persists, it is possible that there is a problem with the dependencies of the .so files. You can try to check the dependencies of the .so files and make sure that they are all correctly included in the project. If you are still unable to resolve the issue, it may be helpful to provide more information about your project and the specific .so files that are missing. This will allow others to provide more specific suggestions. Sorry, but can't say more without more specific description
Some .so-files missing in merged_native_libs directory when compiling Android app
When I compile my Android project in Android Studio, some (but not all) .so-files are missing in the merged_native_libs directory. Only .so-files that are a dependency of at least one .so-file are affected. Can somebody help?
[ "You can try to check your app's build.gradle file and make sure that the missing .so files are listed in the defaultConfig section under ndk.abiFilters. This will ensure that the .so files are included in the build process. You can also try cleaning and rebuilding the project to see if that resolves the issue. To clean the project, go to Build > Clean Project in the menu bar. To rebuild the project, go to Build > Rebuild Project in the menu bar.\nIf the issue persists, it is possible that there is a problem with the dependencies of the .so files. You can try to check the dependencies of the .so files and make sure that they are all correctly included in the project.\nIf you are still unable to resolve the issue, it may be helpful to provide more information about your project and the specific .so files that are missing. This will allow others to provide more specific suggestions.\nSorry, but I can't say more without a more specific description.\n" ]
[ 0 ]
[]
[]
[ "android", "android_gradle_plugin", "cmake", "gradle" ]
stackoverflow_0074677482_android_android_gradle_plugin_cmake_gradle.txt
Q: Change background with sliding transition depending on button hovered? I've been trying to redesign my website recently, and I thought the idea to change the main header to change into different backgrounds depending on the button you hover would be cool However, I know nothing about javascript besides from the absolute basic, so some help would be nice Here's what I'm trying to to achieve Here's the current HTML for the header <body> <div class="logocontainer"> <a href="index.html"> <img src="images/badasslogo.png" class="logo"></a> </div> <div id="buttoncontainer" class="buttoncontainer"> </div> <script src="js/menu.js"></script> Here's the CSS .logocontainer { text-align: center; } .logo { display: inline-block; margin-bottom: 0.30%; align: center; } .buttoncontainer { text-align: center; } .button { display: inline-block; } .button:hover { box-shadow: 0 0 5px white; filter: brightness(125%); } .button:active { box-shadow: 0 0 10px white; filter: brightness(155%); } And the .js file which I use for the buttons, since if I didn't use it, I would have to update every single page manually if I ever wanted to add more buttons to it let headerContent = ` <a href="index.html"> <img src="images/buttons/homebutton.png" class="button"></a> <a href="blog/blogmain.html"> <img src="images/buttons/blogbutton.png" class="button"></a> <a href="art/artmain.html"> <img src="images/buttons/artbutton.png" class="button"></a> <a href="fanart/fanartmain.html"> <img src="images/buttons/fanartbutton.png" class="button"></a> <a href="partners/partnersmain.html"> <img src="images/buttons/partnersbutton.png" class="button"></a> <a href="guestbook/guestbook.html"> <img src="images/buttons/guestbookbutton.png" class="button"></a> <a href="https://junessaidotnet.proboards.com/"> <img src="images/buttons/forumsbutton.png" class="button"></a> <a href="downloads/downloadsmain.html"> <img src="images/buttons/downloadsbutton.png" class="button"></a> <a href="extras/extrasmain.html"> <img 
src="images/buttons/extrasbutton.png" class="button"></a> <a href="donate/donatemain.html"> <img src="images/buttons/donatebutton.png" class="button"></a> <a href="about/about.html"> <img src="images/buttons/aboutbutton.png" class="button"></a> `; document.querySelector('#buttoncontainer').insertAdjacentHTML('beforeend', headerContent); Also, if possible, is there any way to insert the logo into the .js file as well? A: A JS solution would probably involve having event listeners for mouseenter and mouseleave and toggling a class on some element based on that, but since you said you're not that great at JS, here is a CSS-only approach you can take instead. .logocontainer { width: 100%; height: 200px; background-color: black; } :has(button:first-child:hover) .logocontainer { background-color: red; } :has(button:nth-child(2):hover) .logocontainer { background-color: gold; } <div class="logocontainer"></div> <div id="buttoncontainer" class="buttoncontainer"> <button>button 1</button> <button>button2</button> </div> A: Please take a look at the JavaScript comments. I included a simple transition. The approach is to position:absolute; the banner images. Then, on button hover, create a new image element and insert it behind the current image. After a successful promise, remove the previous image and trigger the CSS animation:
Be sure to have as many as the menu buttons const banners = [ 'https://images.unsplash.com/photo-1451187580459-43490279c0fa?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=1172&q=80', 'https://images.unsplash.com/photo-1488590528505-98d2b5aba04b?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=1170&q=80', 'https://images.unsplash.com/photo-1451187580459-43490279c0fa?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=1172&q=80', 'https://images.unsplash.com/photo-1550745165-9bc0b252726f?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=1170&q=80', 'https://images.unsplash.com/photo-1550751827-4bd374c3f58b?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=1170&q=80', 'https://images.unsplash.com/photo-1597733336794-12d05021d510?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=1074&q=80', 'https://images.unsplash.com/photo-1550751827-4bd374c3f58b?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=1170&q=80', 'https://images.unsplash.com/photo-1488590528505-98d2b5aba04b?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=1170&q=80', 'https://images.unsplash.com/photo-1550745165-9bc0b252726f?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=1170&q=80', 'https://images.unsplash.com/photo-1597733336794-12d05021d510?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=1074&q=80', 'https://images.unsplash.com/photo-1488590528505-98d2b5aba04b?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=1170&q=80', 'https://images.unsplash.com/photo-1451187580459-43490279c0fa?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=1172&q=80', ]; 
let headerContent = ` <a href="index.html"> <img src="https://via.placeholder.com/50x50" class="button"></a> <a href="blog/blogmain.html"> <img src="https://via.placeholder.com/50x50" class="button"></a> <a href="art/artmain.html"> <img src="https://via.placeholder.com/50x50" class="button"></a> <a href="fanart/fanartmain.html"> <img src="https://via.placeholder.com/50x50" class="button"></a> <a href="partners/partnersmain.html"> <img src="https://via.placeholder.com/50x50" class="button"></a> <a href="guestbook/guestbook.html"> <img src="https://via.placeholder.com/50x50" class="button"></a> <a href="https://junessaidotnet.proboards.com/"> <img src="https://via.placeholder.com/50x50" class="button"></a> <a href="downloads/downloadsmain.html"> <img src="https://via.placeholder.com/50x50" class="button"></a> <a href="extras/extrasmain.html"> <img src="https://via.placeholder.com/50x50" class="button"></a> <a href="donate/donatemain.html"> <img src="https://via.placeholder.com/50x50" class="button"></a> <a href="about/about.html"> <img src="https://via.placeholder.com/50x50" class="button"></a> `; document.querySelector('#buttoncontainer').insertAdjacentHTML('beforeend', headerContent); // Grab menu buttons and overlay span const menuButtons = document.querySelectorAll('#buttoncontainer a'); const overlay = document.querySelector('.overlay'); // For each menu button, attach an EventListener menuButtons.forEach((btn, i) => { btn.addEventListener('mouseenter', () => { // Remove animate class from overlay making it go back to left position overlay.classList.remove('animate'); // Get current Image const currentImage = document.querySelector('.logocontainer a img'); // Load Image function loadImage(currentImage, i) }); }); // Async Function to load new image const loadImage = async(currentImage, i) => { // Create a new img element, set the img src from array, and the class of 'logo' const img = document.createElement("img"); img.src = await banners[i]; 
img.classList.add('logo'); // Wait for it to load await new Promise((res) => { img.onload = res; }); // Insert new Image before current Image document.querySelector('.logocontainer a').insertBefore(img, currentImage); // Add Swipe transition and remove current image after half of transition's duration overlay.classList.add('animate'); setTimeout(() => currentImage.remove(), 500); } .logocontainer { text-align: center; } .logocontainer a { max-width: 550px; height: 100px; margin: 0 auto; display: flex; position: relative; overflow: hidden; } .logo { position: absolute; left: 0; top: 0; margin-bottom: 0.30%; width: 100%; object-fit: cover; height: auto; } .buttoncontainer { text-align: center; } .button { display: inline-block; } .button:hover { box-shadow: 0 0 5px white; filter: brightness(125%); } .button:active { box-shadow: 0 0 10px white; } .overlay { position: absolute; left: -110%; top: 0; height: 100%; width: 110%; background: #151515; z-index: 2; } .animate { animation: 1s swipe forwards linear; } @keyframes swipe { 0% { left: -110%; } 100% { left: 110%; } } <div class="logocontainer"> <a href="index.html"> <span class="overlay"></span> <img src="https://via.placeholder.com/550x100" class="logo" /> </a> </div> <div id="buttoncontainer" class="buttoncontainer"> </div>
Change background with sliding transition depending on button hovered?
I've been trying to redesign my website recently, and I thought the idea of having the main header change into different backgrounds depending on the button you hover would be cool. However, I know nothing about JavaScript beyond the absolute basics, so some help would be nice. Here's what I'm trying to achieve. Here's the current HTML for the header <body> <div class="logocontainer"> <a href="index.html"> <img src="images/badasslogo.png" class="logo"></a> </div> <div id="buttoncontainer" class="buttoncontainer"> </div> <script src="js/menu.js"></script> Here's the CSS .logocontainer { text-align: center; } .logo { display: inline-block; margin-bottom: 0.30%; align: center; } .buttoncontainer { text-align: center; } .button { display: inline-block; } .button:hover { box-shadow: 0 0 5px white; filter: brightness(125%); } .button:active { box-shadow: 0 0 10px white; filter: brightness(155%); } And the .js file which I use for the buttons, since if I didn't use it, I would have to update every single page manually if I ever wanted to add more buttons to it let headerContent = ` <a href="index.html"> <img src="images/buttons/homebutton.png" class="button"></a> <a href="blog/blogmain.html"> <img src="images/buttons/blogbutton.png" class="button"></a> <a href="art/artmain.html"> <img src="images/buttons/artbutton.png" class="button"></a> <a href="fanart/fanartmain.html"> <img src="images/buttons/fanartbutton.png" class="button"></a> <a href="partners/partnersmain.html"> <img src="images/buttons/partnersbutton.png" class="button"></a> <a href="guestbook/guestbook.html"> <img src="images/buttons/guestbookbutton.png" class="button"></a> <a href="https://junessaidotnet.proboards.com/"> <img src="images/buttons/forumsbutton.png" class="button"></a> <a href="downloads/downloadsmain.html"> <img src="images/buttons/downloadsbutton.png" class="button"></a> <a href="extras/extrasmain.html"> <img src="images/buttons/extrasbutton.png" class="button"></a> <a
href="donate/donatemain.html"> <img src="images/buttons/donatebutton.png" class="button"></a> <a href="about/about.html"> <img src="images/buttons/aboutbutton.png" class="button"></a> `; document.querySelector('#buttoncontainer').insertAdjacentHTML('beforeend', headerContent); Also, if possible, is there any way to insert the logo into the .js file as well?
[ "A JS solution would probably involve having event listeners for mouseenter and mouseleave and toggling a class on some element based on that, but since you said you're not that great at JS, here is a CSS-only approach you can take instead.\n\n\n.logocontainer {\n width: 100%;\n height: 200px;\n background-color: black;\n}\n\n:has(button:first-child:hover) .logocontainer {\n background-color: red;\n}\n\n:has(button:nth-child(2):hover) .logocontainer {\n background-color: gold;\n}\n<div class=\"logocontainer\"></div>\n\n<div id=\"buttoncontainer\" class=\"buttoncontainer\">\n <button>button 1</button>\n <button>button2</button>\n</div>\n\n\n\n", "Please take a look at the JavaScript comments. I included a simple transition.\nThe approach is to position:absolute; the banner images. Then, on button hover, create a new image element and insert it behind current image. After a successful promise, remove previous image and trigger CSS animation:\n\n\n// Here you set the img src of banners. Be sure to have as many as the menu buttons\nconst banners = [\n 'https://images.unsplash.com/photo-1451187580459-43490279c0fa?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=1172&q=80',\n 'https://images.unsplash.com/photo-1488590528505-98d2b5aba04b?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=1170&q=80',\n 'https://images.unsplash.com/photo-1451187580459-43490279c0fa?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=1172&q=80',\n 'https://images.unsplash.com/photo-1550745165-9bc0b252726f?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=1170&q=80',\n 'https://images.unsplash.com/photo-1550751827-4bd374c3f58b?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=1170&q=80',\n 
'https://images.unsplash.com/photo-1597733336794-12d05021d510?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=1074&q=80',\n 'https://images.unsplash.com/photo-1550751827-4bd374c3f58b?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=1170&q=80',\n 'https://images.unsplash.com/photo-1488590528505-98d2b5aba04b?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=1170&q=80',\n 'https://images.unsplash.com/photo-1550745165-9bc0b252726f?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=1170&q=80',\n 'https://images.unsplash.com/photo-1597733336794-12d05021d510?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=1074&q=80',\n 'https://images.unsplash.com/photo-1488590528505-98d2b5aba04b?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=1170&q=80',\n 'https://images.unsplash.com/photo-1451187580459-43490279c0fa?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=1172&q=80',\n];\n\nlet headerContent = `\n <a href=\"index.html\">\n <img src=\"https://via.placeholder.com/50x50\" class=\"button\"></a>\n <a href=\"blog/blogmain.html\">\n <img src=\"https://via.placeholder.com/50x50\" class=\"button\"></a>\n <a href=\"art/artmain.html\">\n <img src=\"https://via.placeholder.com/50x50\" class=\"button\"></a>\n <a href=\"fanart/fanartmain.html\">\n <img src=\"https://via.placeholder.com/50x50\" class=\"button\"></a>\n <a href=\"partners/partnersmain.html\">\n <img src=\"https://via.placeholder.com/50x50\" class=\"button\"></a>\n <a href=\"guestbook/guestbook.html\">\n <img src=\"https://via.placeholder.com/50x50\" class=\"button\"></a>\n <a href=\"https://junessaidotnet.proboards.com/\">\n <img src=\"https://via.placeholder.com/50x50\" class=\"button\"></a>\n <a href=\"downloads/downloadsmain.html\">\n <img 
src=\"https://via.placeholder.com/50x50\" class=\"button\"></a>\n <a href=\"extras/extrasmain.html\">\n <img src=\"https://via.placeholder.com/50x50\" class=\"button\"></a>\n <a href=\"donate/donatemain.html\">\n <img src=\"https://via.placeholder.com/50x50\" class=\"button\"></a>\n <a href=\"about/about.html\">\n <img src=\"https://via.placeholder.com/50x50\" class=\"button\"></a>\n`;\ndocument.querySelector('#buttoncontainer').insertAdjacentHTML('beforeend', headerContent);\n\n// Grab menu buttons and overlay span\nconst menuButtons = document.querySelectorAll('#buttoncontainer a');\nconst overlay = document.querySelector('.overlay');\n\n// For each menu button, attach an EventListener\nmenuButtons.forEach((btn, i) => {\n btn.addEventListener('mouseenter', () => {\n // Remove animate class from overlay making it go back to left position\n overlay.classList.remove('animate');\n\n // Get current Image\n const currentImage = document.querySelector('.logocontainer a img');\n // Load Image function\n loadImage(currentImage, i)\n\n });\n});\n\n// Async Function to load new image\nconst loadImage = async(currentImage, i) => {\n // Create a new img element, set the img src from array, and the class of 'logo'\n const img = document.createElement(\"img\");\n img.src = await banners[i];\n img.classList.add('logo');\n\n // Wait for it to load\n await new Promise((res) => {\n img.onload = res;\n });\n\n // Insert new Image before current Image\n document.querySelector('.logocontainer a').insertBefore(img, currentImage);\n\n // Add Swipe transition and remove current image after half of transition's duration\n overlay.classList.add('animate');\n setTimeout(() => currentImage.remove(), 500);\n}\n.logocontainer {\n text-align: center;\n}\n\n.logocontainer a {\n max-width: 550px;\n height: 100px;\n margin: 0 auto;\n display: flex;\n position: relative;\n overflow: hidden;\n}\n\n.logo {\n position: absolute;\n left: 0;\n top: 0;\n margin-bottom: 0.30%;\n width: 100%;\n object-fit: 
cover;\n height: auto;\n}\n\n.buttoncontainer {\n text-align: center;\n}\n\n.button {\n display: inline-block;\n}\n\n.button:hover {\n box-shadow: 0 0 5px white;\n filter: brightness(125%);\n}\n\n.button:active {\n box-shadow: 0 0 10px white;\n}\n\n.overlay {\n position: absolute;\n left: -110%;\n top: 0;\n height: 100%;\n width: 110%;\n background: #151515;\n z-index: 2;\n}\n\n.animate {\n animation: 1s swipe forwards linear;\n}\n\n@keyframes swipe {\n 0% {\n left: -110%;\n }\n 100% {\n left: 110%;\n }\n}\n<div class=\"logocontainer\">\n <a href=\"index.html\">\n <span class=\"overlay\"></span>\n <img src=\"https://via.placeholder.com/550x100\" class=\"logo\" />\n </a>\n</div>\n\n<div id=\"buttoncontainer\" class=\"buttoncontainer\">\n</div>\n\n\n\n" ]
[ 0, 0 ]
[]
[]
[ "css", "html", "javascript" ]
stackoverflow_0074677200_css_html_javascript.txt
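For completeness, the JS route the first answer mentions (mouseenter/mouseleave listeners toggling a class) can be sketched without a DOM by operating on a stand-in element object; the names and class strings here are illustrative only, not part of the original site:

```javascript
// Sketch of the "toggle a class on hover" idea from the first answer.
// `header` stands in for the .logocontainer element; on a real page you
// would use document.querySelector and addEventListener instead.
function makeHoverSwapper(header, classByIndex) {
  return {
    enter(i) { header.className = classByIndex[i] || ""; }, // mouseenter
    leave() { header.className = ""; },                     // mouseleave
  };
}

const header = { className: "" };
const swapper = makeHoverSwapper(header, ["bg-red", "bg-gold"]);
swapper.enter(1);
console.log(header.className); // "bg-gold"
swapper.leave();
console.log(header.className); // "" (back to the default background)
```

Wiring each `enter(i)` call to the i-th menu link's mouseenter event reproduces the per-button background switch.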
Q: How to convert object to specific class using type information I have an interface. That interface's name is IQueue. I also have concrete classes. Their names are MyMessage1 and MyMessage2. public interface IQueue { } public class MyMessage1 : IQueue { public string Message { get; set; } public DateTime PublishedDate { get; set; } } public class MyMessage2 : IQueue { public string Name { get; set; } } I am getting all the concrete classes implementing IQueue with reflection and creating an instance. var types = AppDomain.CurrentDomain.GetAssemblies() .SelectMany(s => s.GetTypes()) .Where(p => typeof(IQueue).IsAssignableFrom(p) && p.IsClass) .ToList(); foreach(var type in types) { var instance = Activator.CreateInstance(type); } Instance is an object. How can I cast to a specific class without using the code below? Is it possible? (MyMessage1)Activator.CreateInstance(type) (MyMessage2)Activator.CreateInstance(type) I want to create a specific class instance using type information A: Cast to the type IQueue since all the types you want to instantiate implement that interface. IQueue instance = (IQueue)Activator.CreateInstance(type); Add methods and properties you want to use to that interface. The concrete classes will then have to implement these methods and properties and you can access them via the interface. A: Copy the code snippet below and paste it inside your class internal static void Register(Type interfaceType) { var interfaceTypes = AppDomain.CurrentDomain.GetAssemblies() .SelectMany(s => s.GetTypes()) .Where(t => interfaceType.IsAssignableFrom(t) && t.IsClass && !t.IsAbstract) .Select(t => new { parentInterface = t.GetInterfaces().FirstOrDefault(), Implementation = t }) .Where(t => t.parentInterface is not null && interfaceType.IsAssignableFrom(t.parentInterface)); foreach (var type in interfaceTypes) { var instance = Activator.CreateInstance(type.Implementation); } } Use it like Register(typeof(IQueue));
How to convert object to specific class using type information
I have an interface. That interface's name is IQueue. I also have concrete classes. Their names are MyMessage1 and MyMessage2. public interface IQueue { } public class MyMessage1 : IQueue { public string Message { get; set; } public DateTime PublishedDate { get; set; } } public class MyMessage2 : IQueue { public string Name { get; set; } } I am getting all the concrete classes implementing IQueue with reflection and creating an instance. var types = AppDomain.CurrentDomain.GetAssemblies() .SelectMany(s => s.GetTypes()) .Where(p => typeof(IQueue).IsAssignableFrom(p) && p.IsClass) .ToList(); foreach(var type in types) { var instance = Activator.CreateInstance(type); } Instance is an object. How can I cast to a specific class without using the code below? Is it possible? (MyMessage1)Activator.CreateInstance(type) (MyMessage2)Activator.CreateInstance(type) I want to create a specific class instance using type information
[ "Cast to the type IQueue since all the types you want to instantiate implement that interface.\nIQueue instance = (IQueue)Activator.CreateInstance(type);\n\nAdd methods and properties you want to use to that interface. The concrete classes will then have to implement these methods and properties and you can access them via the interface.\n", "Copy below code snippet and paste inside your class\ninternal static void Register(Type interfaceType)\n {\n var interfaceTypes =\n AppDomain.CurrentDomain.GetAssemblies()\n .SelectMany(s => s.GetTypes())\n .Where(t => interfaceType.IsAssignableFrom(t)\n && t.IsClass && !t.IsAbstract)\n .Select(t => new\n {\n parentInterface = t.GetInterfaces().FirstOrDefault(),\n Implementation = t\n })\n .Where(t => t.parentInterface is not null\n && interfaceType.IsAssignableFrom(t.parentInterface));\n\n foreach (var type in interfaceTypes)\n {\n var instance = Activator.CreateInstance(type.Implementation);\n\n }\n }\n\nUse like AddServices(typeof(IQueue));\n" ]
[ 0, 0 ]
[]
[]
[ ".net", "c#" ]
stackoverflow_0074677861_.net_c#.txt
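The discipline the first answer recommends, instantiate by reflection but use the objects only through the shared interface, has a close analogue in Python, where `__subclasses__()` stands in for the assembly scan. A minimal sketch (the names mirror the question's types but are otherwise illustrative):

```python
# Mirror of the C# pattern: discover concrete implementations of a base
# "interface", instantiate each one, and call them via the shared contract.
class IQueue:
    def handle(self) -> str:
        raise NotImplementedError

class MyMessage1(IQueue):
    def handle(self) -> str:
        return "message1"

class MyMessage2(IQueue):
    def handle(self) -> str:
        return "message2"

# Analogue of Activator.CreateInstance over the discovered types: the
# caller never names a concrete class, only the base.
instances = [cls() for cls in IQueue.__subclasses__()]
results = sorted(obj.handle() for obj in instances)
print(results)  # ['message1', 'message2']
```

The point in both languages is the same: once the behavior lives on the interface, no per-type cast is needed.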
Q: Flask-admin not recognising current_user in model View I created my ModelView in flask-admin and want to give role choices to users such that only an admin can create a user with the role manager, admin, or user. And a user shouldn't have the option to grant admin privileges or the like. I am trying this code but it's giving me: AttributeError: 'NoneType' object has no attribute 'is_authenticated' class UserView(ModelView): column_exclude_list = ['logs', 'password_hash',] form_excluded_columns = ['logs'] can_edit = True if login.current_user or current_user.is_authenticated: if login.current_user.role == 'a': form_choices = { 'role': [ ('a', 'Admin'), ('m', 'Manager'), ('u', 'User') ] } if login.current_user.role == 'm': form_choices = { 'role': [ ('m', 'Manager'), ('u', 'User') ] } Any help would be highly appreciated. A: The class body of UserView runs once at import time, outside any request, so evaluating current_user there yields None rather than a logged-in user. Defer the role check into a function so it runs later, when a user is actually available. Do something like the following: def get_user_roles(): _roles = [] if current_user and current_user.is_authenticated: if current_user.role == 'a': _roles = [('a', 'Admin'), ('m', 'Manager'), ('u', 'User')] elif current_user.role == 'm': _roles = [('m', 'Manager'), ('u', 'User')] return _roles class UserView(ModelView): form_choices = { 'role' : get_user_roles }
Flask-admin not recognising current_user in model View
I created my ModelView in flask-admin and want to give role choices to users such that only an admin can create a user with the role manager, admin, or user. And a user shouldn't have the option to grant admin privileges or the like. I am trying this code but it's giving me: AttributeError: 'NoneType' object has no attribute 'is_authenticated' class UserView(ModelView): column_exclude_list = ['logs', 'password_hash',] form_excluded_columns = ['logs'] can_edit = True if login.current_user or current_user.is_authenticated: if login.current_user.role == 'a': form_choices = { 'role': [ ('a', 'Admin'), ('m', 'Manager'), ('u', 'User') ] } if login.current_user.role == 'm': form_choices = { 'role': [ ('m', 'Manager'), ('u', 'User') ] } Any help would be highly appreciated.
[ "Evaluating current_user always returns a NoneType unless it is within a Flask application context. Do something like the following:\ndef get_user_roles():\n\n _roles = []\n \n if current_user and current_user.is_authenticated:\n if current_user.role == 'a':\n _roles = [('a', 'Admin'), ('m', 'Manager'), ('u', 'User')]\n elif login.current_user.role == 'm':\n _roles = [('m', 'Manager'), ('u', 'User')]\n\n return _roles\n\nclass UserView(ModelView):\n\n form_choices = {\n 'role' : get_user_roles\n }\n\n" ]
[ 0 ]
[]
[]
[ "flask", "flask_admin", "flask_login", "flask_sqlalchemy", "flask_wtforms" ]
stackoverflow_0074654326_flask_flask_admin_flask_login_flask_sqlalchemy_flask_wtforms.txt
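The answer's point is about when code runs: a class body executes once at import time, while a stored function runs on each call. A tiny, framework-free sketch of that timing difference (all names here are illustrative, not Flask APIs):

```python
# Class-body code runs once at import time; a stored callable runs later.
ROLE = None  # stands in for current_user, which is unset at import time

def role_choices():
    # Evaluated at call time, when ROLE is finally known.
    return [("a", "Admin")] if ROLE == "a" else [("u", "User")]

class UserView:
    eager = role_choices()   # evaluated NOW, while ROLE is still None
    lazy = role_choices      # stored as a callable, evaluated on demand

ROLE = "a"  # simulates a logged-in admin during a request
print(UserView.eager)   # [('u', 'User')]  -- stale, computed too early
print(UserView.lazy())  # [('a', 'Admin')] -- correct, computed on demand
```

This is why the fix moves the current_user check behind a function instead of evaluating it in the class body.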
Q: Returning Value of property in Javascript Below is a simple object I created in JavaScript. var obb = {name : "Charlie", Age : 28 , Location : "London" , Job : "Detective"} ; var x = "name"; console.log(obb.name); console.log(obb.x) ; //Dot notation :- returns value undefined console.log(obb[x]); // Square bracket :- returns correct answer I know that there are two methods to fetch values of objects, i.e. dot notation and square brackets. Now, if I am storing the property name in a variable and using dot notation with the variable to fetch the value, why is it not working? A: In JavaScript, object properties can be accessed using either dot notation or bracket notation. Dot notation is frequently used because it is simpler to read and understand. So what is the significance of bracket notation, and why should we use it? The square bracket syntax [] evaluates the expression inside the brackets and converts it to a string, allowing us to access object properties via variables. That is why obb.x returns undefined: dot notation looks for a property literally named x, which the object does not have, while obb[x] first evaluates x to the string "name" and then looks that property up.
Returning Value of property in Javascript
Below is a simple object I created in JavaScript. var obb = {name : "Charlie", Age : 28 , Location : "London" , Job : "Detective"} ; var x = "name"; console.log(obb.name); console.log(obb.x) ; //Dot notation :- returns value undefined console.log(obb[x]); // Square bracket :- returns correct answer I know that there are two methods to fetch values of objects, i.e. dot notation and square brackets. Now, if I am storing the property name in a variable and using dot notation with the variable to fetch the value, why is it not working?
[ "In JavaScript, the object attributes can be accessed using either the dot notation or the bracket notation. Dot notation is frequently used because it is simpler to read and understand. What is the significance of bracket notation and why should we use it? The square bracket syntax [] turns the expression within to a string, allowing us to access object properties via variables.\n" ]
[ 1 ]
[]
[]
[ "javascript", "object", "variables" ]
stackoverflow_0074677938_javascript_object_variables.txt
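To make the distinction in the answer concrete, here is a minimal sketch using the question's own object:

```javascript
const obb = { name: "Charlie", Age: 28, Location: "London", Job: "Detective" };
const x = "name";

console.log(obb.x);       // undefined  -- looks for a property literally named "x"
console.log(obb[x]);      // "Charlie"  -- x evaluates to "name" first
console.log(obb["name"]); // "Charlie"  -- equivalent to the line above
```

So `obb.x` and `obb["x"]` are the same lookup; only `obb[x]` (no quotes) goes through the variable.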
Q: "%*c%d" for ignoring letters also ignores first number in .txt file I am loading numbers from a text file into an array numbers[i]. while (fscanf(file, "%*c%d", &numbers[i]) != EOF) if (numbers[i] > 0) { i++; numbers[i] = '\0'; } I am using %*c to ignore letters. Everything works fine till I put %*c%d instead of %d; now it's still working, but the first number of that file is being ignored... I only put one line of code because the whole code is too big and I am sure %*c is the problem. I just want that first number not to be ignored while using "%*c%d" :) A: Your approach is not reliable because %*c reads and discards any character, not just letters, hence will consume the first digit of the number if no other characters precede it. Furthermore, comparing the return value to EOF only detects end of file before the first conversion. To detect an invalid conversion for the %d specifier, you should compare the return value to 1. Here is a modified version of your parsing loop: int numbers[100]; size_t array_length = sizeof(numbers) / sizeof(*numbers); size_t i = 0; while (i < array_length) { // skip any characters not starting a number if (fscanf(file, "%*[^-+0-9]") == EOF) { // stop at end of file break; } // try and convert an integer if (fscanf(file, "%d", &numbers[i]) == 1) { if (numbers[i] > 0) { i++; } } else { // cannot convert an integer: probably just a lone + or - sign // just skip the character. if (getc(file) == EOF) break; } } Note however that this loop will just skip anything that is not an integer. You should probably use a stricter approach, specifying an input syntax and reporting conversion errors.
For a more robust operation, you could read the input one line at a time, convert the number, and report any conversion errors: char input[100]; int numbers[100]; size_t array_length = sizeof(numbers) / sizeof(*numbers); size_t i = 0; while (i < array_length && fgets(input, sizeof input, file)) { // try and convert an integer if (sscanf(input, "%d", &numbers[i]) == 1) { if (numbers[i] > 0) { i++; } } else { printf("invalid input line: %s", input); } }
"%*c%d" for ignoring letters also ignores first number in .txt file
I am loading numbers from a text file into an array numbers[i]. while (fscanf(file, "%*c%d", &numbers[i]) != EOF) if (numbers[i] > 0) { i++; numbers[i] = '\0'; } I am using %*c to ignore letters. Everything works fine till I put %*c%d instead of %d; now it's still working, but the first number of that file is being ignored... I only put one line of code because the whole code is too big and I am sure %*c is the problem. I just want that first number not to be ignored while using "%*c%d" :)
[ "Your approach is not reliable because %*c reads and discards any character, not just letters, hence will consume the first digit of the number if no other characters precede it.\nFurthermore, comparing the return value of EOF only detects end of file before the first conversion. To detect an invalid conversion for the %d specifier, you should compare the return value to 1.\nHere is a modified version of your parsing loop:\n int numbers[100];\n size_t array_length = sizeof(numbers) / sizeof(*numbers);\n size_t i = 0;\n\n while (i < array_length) {\n // skip any characters not starting a number\n if (fscanf(file, \"%*[^-+0-9]\") == EOF) {\n // stop at end of file\n break;\n }\n // try and convert an integer\n if (fscanf(file, \"%d\", &numbers[i]) == 1) {\n if (numbers[i] > 0) {\n i++;\n }\n } else {\n // cannot convert an integer: probably just a lone + or - sign\n // just skip the character.\n if (getc(file) == EOF)\n break;\n }\n }\n\nNote however that this loop will just skip anything that is not an integer. You should probably use a stricter approach, specifying an input syntax and reporting conversion errors.\nFor a more operation, you could read the input one line at a time, convert the number and report any conversion errors:\n int input[100];\n int numbers[100];\n size_t array_length = sizeof(numbers) / sizeof(*numbers);\n size_t i = 0;\n\n while (i < array_length && fgets(input, sizeof input, file)) {\n // try and convert an integer\n if (sscanf(input, \"%d\", &numbers[i]) == 1) {\n if (numbers[i] > 0) {\n i++;\n }\n } else {\n printf(\"invalid input line: %s\", input);\n }\n }\n\n" ]
[ 0 ]
[]
[]
[ "arrays", "c", "scanf" ]
stackoverflow_0074677507_arrays_c_scanf.txt
Q: Ionic Android gradle error: General error during conversion: Unsupported class file major version 63 I am creating my app using Ionic 6 and Capacitor 4. While building it for Android, I got the following error. I am building it for SDK version 23 and it's installed in my Android Studio as well. Please let me know if you need more details. [capacitor] × Running Gradle build - failed! [capacitor] [error] Starting a Gradle Daemon, 1 incompatible and 5 stopped Daemons could not be reused, use --status for details [capacitor] [capacitor] FAILURE: Build failed with an exception. [capacitor] [capacitor] * Where: [capacitor] Build file 'myapp\android\build.gradle' [capacitor] [capacitor] * What went wrong: [capacitor] Could not compile build file 'myapp\green-menu\android\build.gradle'. [capacitor] > startup failed: [capacitor] General error during conversion: Unsupported class file major version 63 [capacitor] [capacitor] java.lang.IllegalArgumentException: Unsupported class file major version 63 [capacitor] at groovyjarjarasm.asm.ClassReader.<init>(ClassReader.java:199) [capacitor] at groovyjarjarasm.asm.ClassReader.<init>(ClassReader.java:180) ...
org.gradle.launcher.daemon.server.exec.ExecuteBuild.doBuild(ExecuteBuild.java:65) [capacitor] at org.gradle.launcher.daemon.server.exec.BuildCommandOnly.execute(BuildCommandOnly.java:37) [capacitor] at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:104) [capacitor] at org.gradle.launcher.daemon.server.exec.WatchForDisconnection.execute(WatchForDisconnection.java:39) [capacitor] at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:104) [capacitor] at org.gradle.launcher.daemon.server.exec.ResetDeprecationLogger.execute(ResetDeprecationLogger.java:29) [capacitor] at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:104) [capacitor] at org.gradle.launcher.daemon.server.exec.RequestStopIfSingleUsedDaemon.execute(RequestStopIfSingleUsedDaemon.java:35) [capacitor] at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:104) [capacitor] at org.gradle.launcher.daemon.server.exec.ForwardClientInput$2.create(ForwardClientInput.java:78) [capacitor] at org.gradle.launcher.daemon.server.exec.ForwardClientInput$2.create(ForwardClientInput.java:75) ... java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) [capacitor] at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) [capacitor] at java.base/java.lang.Thread.run(Thread.java:1589) [capacitor] [capacitor] 1 error [capacitor] [capacitor] [capacitor] * Try: [capacitor] > Run with --stacktrace option to get the stack trace. [capacitor] > Run with --info or --debug option to get more log output. [capacitor] > Run with --scan to get full insights. [capacitor] [capacitor] * Get more help at https://help.gradle.org [capacitor] [capacitor] BUILD FAILED in 9s [capacitor] [ERROR] An error occurred while running subprocess capacitor. 
capacitor.cmd run android --target 154136c40806 exited with exit code 1. Re-running this command with the --verbose flag may provide more information. A: This error indicates a mismatch between your Java Development Kit (JDK) and your project's Gradle version. Class file major version 63 corresponds to Java 19, which the Gradle release bundled with your project does not support. To fix this, you will need to run the build with a JDK version your Gradle release supports. You can check which version of the JDK you are currently using by running the following command in your terminal: java -version If the output shows Java 19 (or anything newer than your Gradle supports), you will need to switch to a supported version. To do this, download and install a supported JDK, typically JDK 11 or 17 for a Capacitor 4 project, and then set your JAVA_HOME environment variable to point to the directory where you installed it. Once you have done this, try building your Android project again and see if the error persists. If you continue to have problems, you may need to clean and rebuild your project in order to ensure that all of the necessary files are recompiled with the correct JDK version.
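The check-and-switch described above comes down to a couple of shell commands. The install path below is only an example; point JAVA_HOME at wherever your chosen JDK actually lives:

```shell
# Show which JDK is currently first on PATH, if any (version goes to stderr).
if command -v java >/dev/null; then java -version; fi

# Point the build at a Gradle-supported JDK (example path; adjust to your system).
export JAVA_HOME=/usr/lib/jvm/java-17-openjdk
export PATH="$JAVA_HOME/bin:$PATH"
```

On Windows, set JAVA_HOME through System Properties or `setx` instead of `export`.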
Ionic Android gradle error: General error during conversion: Unsupported class file major version 63
I am creating my app using Ionic 6 and Capacitor 4. While building it got Android, i got following error. I am building it for sdk version 23 and its installed in my android studio as well. Please let me know you need more details. [capacitor] × Running Gradle build - failed! [capacitor] [error] Starting a Gradle Daemon, 1 incompatible and 5 stopped Daemons could not be reused, use --status for details [capacitor] [capacitor] FAILURE: Build failed with an exception. [capacitor] [capacitor] * Where: [capacitor] Build file 'myapp\android\build.gradle' [capacitor] [capacitor] * What went wrong: [capacitor] Could not compile build file 'myapp\green-menu\android\build.gradle'. [capacitor] > startup failed: [capacitor] General error during conversion: Unsupported class file major version 63 [capacitor] [capacitor] java.lang.IllegalArgumentException: Unsupported class file major version 63 [capacitor] at groovyjarjarasm.asm.ClassReader.<init>(ClassReader.java:199) [capacitor] at groovyjarjarasm.asm.ClassReader.<init>(ClassReader.java:180) ... 
org.gradle.launcher.daemon.server.exec.ExecuteBuild.doBuild(ExecuteBuild.java:65) [capacitor] at org.gradle.launcher.daemon.server.exec.BuildCommandOnly.execute(BuildCommandOnly.java:37) [capacitor] at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:104) [capacitor] at org.gradle.launcher.daemon.server.exec.WatchForDisconnection.execute(WatchForDisconnection.java:39) [capacitor] at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:104) [capacitor] at org.gradle.launcher.daemon.server.exec.ResetDeprecationLogger.execute(ResetDeprecationLogger.java:29) [capacitor] at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:104) [capacitor] at org.gradle.launcher.daemon.server.exec.RequestStopIfSingleUsedDaemon.execute(RequestStopIfSingleUsedDaemon.java:35) [capacitor] at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:104) [capacitor] at org.gradle.launcher.daemon.server.exec.ForwardClientInput$2.create(ForwardClientInput.java:78) [capacitor] at org.gradle.launcher.daemon.server.exec.ForwardClientInput$2.create(ForwardClientInput.java:75) ... java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) [capacitor] at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) [capacitor] at java.base/java.lang.Thread.run(Thread.java:1589) [capacitor] [capacitor] 1 error [capacitor] [capacitor] [capacitor] * Try: [capacitor] > Run with --stacktrace option to get the stack trace. [capacitor] > Run with --info or --debug option to get more log output. [capacitor] > Run with --scan to get full insights. [capacitor] [capacitor] * Get more help at https://help.gradle.org [capacitor] [capacitor] BUILD FAILED in 9s [capacitor] [ERROR] An error occurred while running subprocess capacitor. 
capacitor.cmd run android --target 154136c40806 exited with exit code 1. Re-running this command with the --verbose flag may provide more information.
[ "This error message indicates that there is a problem with the version of your Java Development Kit (JDK). The error message specifically mentions that it is having difficulty with a class file with a major version 63, which indicates that you are using a JDK version 11 or later.\nTo fix this issue, you will need to ensure that you are using a supported JDK version. You can check which version of the JDK you are currently using by running the following command in your terminal:\njava -version\n\nIf the output indicates that you are using JDK version 11 or later, you will need to switch to a supported version. To do this, you will need to download and install a supported JDK version, such as JDK 8, and then set your JAVA_HOME environment variable to point to the directory where you installed the JDK.\nOnce you have done this, try building your Android project again and see if the error persists. If you continue to have problems, you may need to clean and rebuild your project in order to ensure that all of the necessary files are recompiled using the correct JDK version.\n" ]
[ 0 ]
[]
[]
[ "android", "gradle", "ionic_framework" ]
stackoverflow_0074677271_android_gradle_ionic_framework.txt
Q: updating options in optionmenu Tkinter I am currently writing on a small hobby projekt and i have a problem concerning my list "dice" while using the dropdown menu it only ever shows the first itteration of the list (the single 0) but it is sopposed to be updated in the dropdown menue after each press of the "roll the dice" button. How do i do that? from random import randint from tkinter import * root = Tk() root.title('Hobbyprojekt') count = -1 global dice dice = [0] prpp= IntVar() diceshow=Label() #defining funtions for buttons def roll(): global count global diceshow global dice count +=1 print(count) if count >= 1: dice=[] for x in range (0,7) : dice.append(randint(1,10)) #calculating the viable dice options for x in range (0,2) : dice.remove(min(dice)) if count >= 1: diceshow.destroy() print("destroyed") diceshow=Label(root, text=dice) diceshow.grid(row=0,column=1) print(dice) print(dice[1]) print(dice[2]) print(dice[3]) #setting up the test gui button1 = Button(root, text='Roll the dice', command=roll) label1= Label(text='choice1') label2= Label(text='choice2') label3= Label(text='choice3') label4= Label(text='choice4') label5= Label(text='choice5') label6= Label(text='choice6') dd1= OptionMenu(root,prpp,*dice) dd2= OptionMenu(root,prpp,*dice) dd3= OptionMenu(root,prpp,*dice) dd4= OptionMenu(root,prpp,*dice) dd5= OptionMenu(root,prpp,*dice) dd6= OptionMenu(root,prpp,*dice) #setting layout button1.grid(row=0,column=0) label1.grid(row=1,column=0) label2.grid(row=2,column=0) label3.grid(row=3,column=0) label4.grid(row=4,column=0) label5.grid(row=5,column=0) label6.grid(row=6,column=0) dd1.grid(row=1, column=1) dd2.grid(row=2,column=1) dd3.grid(row=3,column=1) dd4.grid(row=4,column=1) dd5.grid(row=5,column=1) dd6.grid(row=6,column=1) root.mainloop() So i'm acctually lost for ideas on what to do since i am fairly new to python. 
Only thing i could think of is creating the dropdown menus within the "diceroll" button definition but that is not reall what would want to do. Thanks in advance. edit: fixed spelling. A: Is this is what you want? I moved optionmenu into roll() function from random import randint from tkinter import * root = Tk() root.title('Hobbyprojekt') count = -1 #global dice dice = [0] prpp= IntVar() diceshow=Label() #defining funtions for buttons def roll(): global count global diceshow global dice count +=1 print(count) if count >= 1: dice=[] for x in range (0,7) : dice.append(randint(1,10)) #calculating the viable dice options for x in range (0,2) : dice.remove(min(dice)) if count >= 1: diceshow.destroy() print("destroyed") diceshow=Label(root, text=dice) diceshow.grid(row=0,column=1) print(dice) print(dice[1]) print(dice[2]) print(dice[3]) dd1= OptionMenu(root,prpp,dice[0]) dd2= OptionMenu(root,prpp,dice[1]) dd3= OptionMenu(root,prpp,dice[2]) dd4= OptionMenu(root,prpp,dice[3]) dd5= OptionMenu(root,prpp,dice[4]) dd6= OptionMenu(root,prpp,dice[5]) dd1.grid(row=1, column=1) dd2.grid(row=2,column=1) dd3.grid(row=3,column=1) dd4.grid(row=4,column=1) dd5.grid(row=5,column=1) dd6.grid(row=6,column=1) #setting up the test gui button1 = Button(root, text='Roll the dice', command=roll) label1= Label(text='choice1') label2= Label(text='choice2') label3= Label(text='choice3') label4= Label(text='choice4') label5= Label(text='choice5') label6= Label(text='choice6') #setting layout button1.grid(row=0,column=0) label1.grid(row=1,column=0) label2.grid(row=2,column=0) label3.grid(row=3,column=0) label4.grid(row=4,column=0) label5.grid(row=5,column=0) label6.grid(row=6,column=0) root.mainloop() Result before: Result after:
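Separately from the widget handling, the dice-selection logic in roll() (roll seven d10, drop the two lowest, keep five) can be factored into a pure function, which keeps the GUI callback short and makes the logic testable without a window. A sketch, where the function name and return shape are my own choices rather than anything from the original:

```python
import random

def roll_dice(rng=random, n=7, keep=5, sides=10):
    """Roll `n` dice with `sides` sides and keep the `keep` highest.

    Returns (all_rolls, kept) so the caller can display either list.
    """
    rolls = [rng.randint(1, sides) for _ in range(n)]
    kept = sorted(rolls)[n - keep:]  # drop the (n - keep) lowest values
    return rolls, kept

rolls, kept = roll_dice()
print("rolled:", rolls)
print("kept:  ", kept)
```

With this in place, roll() only needs to call roll_dice() and hand `kept` to the widgets. An existing OptionMenu can then be refreshed in place rather than recreated, by clearing and refilling its menu object (dd1["menu"].delete(0, "end") followed by one add_command per new value).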
updating options in optionmenu Tkinter
I am currently writing on a small hobby projekt and i have a problem concerning my list "dice" while using the dropdown menu it only ever shows the first itteration of the list (the single 0) but it is sopposed to be updated in the dropdown menue after each press of the "roll the dice" button. How do i do that? from random import randint from tkinter import * root = Tk() root.title('Hobbyprojekt') count = -1 global dice dice = [0] prpp= IntVar() diceshow=Label() #defining funtions for buttons def roll(): global count global diceshow global dice count +=1 print(count) if count >= 1: dice=[] for x in range (0,7) : dice.append(randint(1,10)) #calculating the viable dice options for x in range (0,2) : dice.remove(min(dice)) if count >= 1: diceshow.destroy() print("destroyed") diceshow=Label(root, text=dice) diceshow.grid(row=0,column=1) print(dice) print(dice[1]) print(dice[2]) print(dice[3]) #setting up the test gui button1 = Button(root, text='Roll the dice', command=roll) label1= Label(text='choice1') label2= Label(text='choice2') label3= Label(text='choice3') label4= Label(text='choice4') label5= Label(text='choice5') label6= Label(text='choice6') dd1= OptionMenu(root,prpp,*dice) dd2= OptionMenu(root,prpp,*dice) dd3= OptionMenu(root,prpp,*dice) dd4= OptionMenu(root,prpp,*dice) dd5= OptionMenu(root,prpp,*dice) dd6= OptionMenu(root,prpp,*dice) #setting layout button1.grid(row=0,column=0) label1.grid(row=1,column=0) label2.grid(row=2,column=0) label3.grid(row=3,column=0) label4.grid(row=4,column=0) label5.grid(row=5,column=0) label6.grid(row=6,column=0) dd1.grid(row=1, column=1) dd2.grid(row=2,column=1) dd3.grid(row=3,column=1) dd4.grid(row=4,column=1) dd5.grid(row=5,column=1) dd6.grid(row=6,column=1) root.mainloop() So i'm acctually lost for ideas on what to do since i am fairly new to python. Only thing i could think of is creating the dropdown menus within the "diceroll" button definition but that is not reall what would want to do. Thanks in advance. 
edit: fixed spelling.
[ "Is this is what you want? I moved optionmenu into roll() function\nfrom random import randint\nfrom tkinter import *\n\nroot = Tk()\nroot.title('Hobbyprojekt')\n\ncount = -1\n#global dice\ndice = [0]\nprpp= IntVar() \ndiceshow=Label()\n#defining funtions for buttons \ndef roll():\n global count\n global diceshow\n global dice\n count +=1\n print(count)\n if count >= 1:\n dice=[]\n for x in range (0,7) :\n dice.append(randint(1,10))\n \n #calculating the viable dice options\n for x in range (0,2) :\n dice.remove(min(dice))\n\n if count >= 1:\n diceshow.destroy()\n print(\"destroyed\")\n \n diceshow=Label(root, text=dice)\n diceshow.grid(row=0,column=1)\n print(dice)\n print(dice[1])\n print(dice[2])\n print(dice[3])\n dd1= OptionMenu(root,prpp,dice[0])\n dd2= OptionMenu(root,prpp,dice[1])\n dd3= OptionMenu(root,prpp,dice[2])\n dd4= OptionMenu(root,prpp,dice[3])\n dd5= OptionMenu(root,prpp,dice[4])\n dd6= OptionMenu(root,prpp,dice[5])\n\n dd1.grid(row=1, column=1)\n dd2.grid(row=2,column=1)\n dd3.grid(row=3,column=1)\n dd4.grid(row=4,column=1)\n dd5.grid(row=5,column=1)\n dd6.grid(row=6,column=1)\n\n \n\n#setting up the test gui\nbutton1 = Button(root, text='Roll the dice', command=roll)\nlabel1= Label(text='choice1')\nlabel2= Label(text='choice2')\nlabel3= Label(text='choice3')\nlabel4= Label(text='choice4')\nlabel5= Label(text='choice5')\nlabel6= Label(text='choice6')\n \n#setting layout\nbutton1.grid(row=0,column=0)\n\nlabel1.grid(row=1,column=0)\nlabel2.grid(row=2,column=0)\nlabel3.grid(row=3,column=0)\nlabel4.grid(row=4,column=0)\nlabel5.grid(row=5,column=0)\nlabel6.grid(row=6,column=0)\n \n\nroot.mainloop()\n\nResult before:\n\nResult after:\n\n" ]
[ 0 ]
[]
[]
[ "optionmenu", "python", "tkinter", "updating" ]
stackoverflow_0074625320_optionmenu_python_tkinter_updating.txt
Q: My entire folder with in-built program files is a working directory for git, it has been tracking all my files, I want to remove .git entirely I do not know to use git bash I ended up doing something and now it's been tracking all my files, what should I do to remove that entirely from my system? The files that are being tracked: I thought of using rm -fr .git But I am scared if it would delete files from my local system
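For what it's worth, rm -rf .git deletes only the repository metadata directory; the working files next to it are untouched. That can be verified safely in a scratch directory first. The sketch below fakes a .git directory instead of running git init, so it needs no git installation:

```shell
set -e
demo="$(mktemp -d)"
mkdir -p "$demo/.git/objects"            # stand-in for a real repository
echo "my program" > "$demo/program.txt"  # a working file we care about

rm -rf "$demo/.git"                      # remove only the repo metadata

ls -A "$demo"                            # program.txt remains; .git is gone
```

Run from inside the project folder, the equivalent is rm -rf .git (note the leading dot): it stops git from tracking anything in that folder, but every ordinary file stays on disk.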
My entire folder with in-built program files is a working directory for git, it has been tracking all my files, I want to remove .git entirely
I do not know to use git bash I ended up doing something and now it's been tracking all my files, what should I do to remove that entirely from my system? The files that are being tracked: I thought of using rm -fr .git But I am scared if it would delete files from my local system
[]
[]
[ "I'm assuming everything is being push onto a Remote Repository. In that case used:\ngit remote remove origin\n\n" ]
[ -1 ]
[ ".git_folder", "git", "git_bash" ]
stackoverflow_0074674023_.git_folder_git_git_bash.txt
Q: Programmatically get path to Application Support folder I'm trying to get an NSString for the user's Application Support folder. I know I can do NSString *path = @"~/Library/Application Support"; but this doesn't seem very elegant. I've played around with using NSSearchPathForDirectoriesInDomains but it seems to be quite long-winded and creates several unnecessary objects (at least, my implementation of it does). Is there a simple way to do this? A: This is outdated, for current best practice use FileManager.default.urls(for:in:) as in the comment by @andyvn22 below. the Best practice is to use NSSearchPathForDirectoriesInDomains with NSApplicationSupportDirectory as "long winded" as it may be. Example: NSArray *paths = NSSearchPathForDirectoriesInDomains(NSApplicationSupportDirectory, NSUserDomainMask, YES); NSString *applicationSupportDirectory = [paths firstObject]; NSLog(@"applicationSupportDirectory: '%@'", applicationSupportDirectory); NSLog output: applicationSupportDirectory: '/Volumes/User/me/Library/Application Support' A: Swift: print(NSHomeDirectory()) or print(FileManager.default.urls(for: .applicationSupportDirectory, in: .userDomainMask).first) and let yourString = String(FileManager.default.urls(for: .applicationSupportDirectory, in: .userDomainMask).first) A: Swift 3: FileManager.default.urls(for: .applicationSupportDirectory, in: .userDomainMask).first A: Just to be sure people will start using the recommended way of doing this: - (NSArray<NSURL *> * _Nonnull)URLsForDirectory:(NSSearchPathDirectory)directory inDomains:(NSSearchPathDomainMask)domainMask Expanded example from documentation: - (NSURL*)applicationDataDirectory { NSFileManager* sharedFM = [NSFileManager defaultManager]; NSArray* possibleURLs = [sharedFM URLsForDirectory:NSApplicationSupportDirectory inDomains:NSUserDomainMask]; NSURL* appSupportDir = nil; NSURL* appDirectory = nil; if ([possibleURLs count] >= 1) { // Use the first directory (if multiple are returned) appSupportDir = 
[possibleURLs objectAtIndex:0]; } // If a valid app support directory exists, add the // app's bundle ID to it to specify the final directory. if (appSupportDir) { NSString* appBundleID = [[NSBundle mainBundle] bundleIdentifier]; appDirectory = [appSupportDir URLByAppendingPathComponent:appBundleID]; } return appDirectory; } Proof link: https://developer.apple.com/library/ios/documentation/FileManagement/Conceptual/FileSystemProgrammingGuide/AccessingFilesandDirectories/AccessingFilesandDirectories.html#//apple_ref/doc/uid/TP40010672-CH3-SW3 A: This works for me: NSError *error; NSURL* appSupportDir = [[NSFileManager defaultManager] URLForDirectory:NSApplicationSupportDirectory inDomain:NSUserDomainMask appropriateForURL:nil create:YES error:&error]; A: This is what I use to get the database. Got it from the Stanford class. It might help somebody. NSURL *url = [[[NSFileManager defaultManager] URLsForDirectory:NSDocumentDirectory inDomains:NSUserDomainMask] lastObject]; url = [url URLByAppendingPathComponent:@"database_name"]; NSLog(@"Database URL: %@",url); A: Create separate objective C class for reading and writing into documents directory. I will avoid code re-writing. Below is my version of it. //Directory.h #import <Foundation/Foundation.h> #import <UIKit/UIKit.h> #define PATH (NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES)) #define BASEPATH (([PATH count] > 0)? [PATH objectAtIndex:0] : nil) @interface DocumentsDirectory : NSObject //Here you can also use URL path as return type and file path. 
+(void)removeFilesfromDocumentsDirectory:(NSString*)filename; +(NSString*)writeFiletoDocumentsDirectory:(NSString*)filename; @end #import "Directory.h" @implementation DocumentsDirectory UIAlertView *updateAlert; +(void)removeFilesfromDocumentsDirectory:(NSString*)filename { NSFileManager *fileManager = [NSFileManager defaultManager]; NSString *filePath = [BASEPATH stringByAppendingPathComponent:filename]; NSError *error; BOOL success = [fileManager removeItemAtPath:filePath error:&error]; //Remove or delete file from documents directory. if (success) { updateAlert= [[UIAlertView alloc] initWithTitle:@"Congratulations:" message:@"File is updated successfully" delegate:self cancelButtonTitle:@"OK" otherButtonTitles:nil]; [updateAlert show]; } else { NSLog(@"Could not delete file -:%@ ",[error localizedDescription]); updateAlert= [[UIAlertView alloc] initWithTitle:@"Try again:" message:[error localizedDescription] delegate:self cancelButtonTitle:@"OK" otherButtonTitles:nil]; [updateAlert show]; } } +(NSString*)writeFiletoDocumentsDirectory:(NSString*)filename { NSString *foldDestination = BASEPATH; NSString *filePath = [foldDestination stringByAppendingPathComponent:filename]; return filePath; } @end A: iOS 16 update: URL.applicationSupportDirectory, they've also added a static property for documentsDirectory
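As an aside on the question's original hard-coded string: a literal "~" is only expanded by the shell, not by the file-system APIs (in Objective-C, NSString's stringByExpandingTildeInPath does that expansion), and the space in "Application Support" makes quoting matter whenever the path is handled in a shell. A small illustration, which only builds the string since the directory itself exists on macOS:

```shell
# Build the per-user path from $HOME rather than a literal "~",
# which only the shell expands, not the file APIs.
app_support="$HOME/Library/Application Support"
echo "$app_support"

# The double quotes matter: without them the embedded space splits
# the path into two words, e.g. in `ls "$app_support"`.
```

This is one more reason the NSSearchPathForDirectoriesInDomains / URLsForDirectory approaches above are preferred over hard-coding the string.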
Programmatically get path to Application Support folder
I'm trying to get an NSString for the user's Application Support folder. I know I can do NSString *path = @"~/Library/Application Support"; but this doesn't seem very elegant. I've played around with using NSSearchPathForDirectoriesInDomains but it seems to be quite long-winded and creates several unnecessary objects (at least, my implementation of it does). Is there a simple way to do this?
[ "This is outdated, for current best practice use FileManager.default.urls(for:in:) as in the comment by @andyvn22 below.\nthe Best practice is to use NSSearchPathForDirectoriesInDomains with NSApplicationSupportDirectory as \"long winded\" as it may be.\nExample:\nNSArray *paths = NSSearchPathForDirectoriesInDomains(NSApplicationSupportDirectory, NSUserDomainMask, YES);\nNSString *applicationSupportDirectory = [paths firstObject];\nNSLog(@\"applicationSupportDirectory: '%@'\", applicationSupportDirectory);\n\nNSLog output:\napplicationSupportDirectory: '/Volumes/User/me/Library/Application Support'\n\n", "Swift:\nprint(NSHomeDirectory())\n\nor\nprint(FileManager.default.urls(for: .applicationSupportDirectory, in: .userDomainMask).first)\n\nand\nlet yourString = String(FileManager.default.urls(for: .applicationSupportDirectory, in: .userDomainMask).first)\n\n", "Swift 3:\nFileManager.default.urls(for: .applicationSupportDirectory, in: .userDomainMask).first\n\n", "Just to be sure people will start using the recommended way of doing this:\n- (NSArray<NSURL *> * _Nonnull)URLsForDirectory:(NSSearchPathDirectory)directory\n inDomains:(NSSearchPathDomainMask)domainMask\n\nExpanded example from documentation:\n- (NSURL*)applicationDataDirectory {\n NSFileManager* sharedFM = [NSFileManager defaultManager];\n NSArray* possibleURLs = [sharedFM URLsForDirectory:NSApplicationSupportDirectory\n inDomains:NSUserDomainMask];\n NSURL* appSupportDir = nil;\n NSURL* appDirectory = nil;\n\n if ([possibleURLs count] >= 1) {\n // Use the first directory (if multiple are returned)\n appSupportDir = [possibleURLs objectAtIndex:0];\n }\n\n // If a valid app support directory exists, add the\n // app's bundle ID to it to specify the final directory.\n if (appSupportDir) {\n NSString* appBundleID = [[NSBundle mainBundle] bundleIdentifier];\n appDirectory = [appSupportDir URLByAppendingPathComponent:appBundleID];\n }\n\n return appDirectory;\n}\n\nProof link: 
https://developer.apple.com/library/ios/documentation/FileManagement/Conceptual/FileSystemProgrammingGuide/AccessingFilesandDirectories/AccessingFilesandDirectories.html#//apple_ref/doc/uid/TP40010672-CH3-SW3\n", "This works for me:\nNSError *error;\nNSURL* appSupportDir = [[NSFileManager defaultManager] \n URLForDirectory:NSApplicationSupportDirectory\n inDomain:NSUserDomainMask\n appropriateForURL:nil\n create:YES\n error:&error];\n\n", "This is what I use to get the database. Got it from the Stanford class. It might help somebody.\nNSURL *url = [[[NSFileManager URLsForDirectory:NSDocumentDirectory inDomains:NSUserDomainMask] lastObject];\nurl = [url URLByAppendingPathComponent:@\"database_name\"];\nNSLog(@\"Database URL: %@\",url);\n\n", "Create separate objective C class for reading and writing into documents directory. I will avoid code re-writing. Below is my version of it.\n//Directory.h\n#import <Foundation/Foundation.h>\n#import <UIKit/UIKit.h>\n\n#define PATH (NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES))\n#define BASEPATH (([PATH count] > 0)? 
[PATH objectAtIndex:0] : nil)\n\n@interface DocumentsDirectory : NSObject\n\n//Here you can also use URL path as return type and file path.\n+(void)removeFilesfromDocumentsDirectory:(NSString*)filename;\n+(NSString*)writeFiletoDocumentsDirectory:(NSString*)filename;\n@end\n\n\n#import \"Directory.h\"\n\n@implementation DocumentsDirectory\n\nUIAlertView *updateAlert;\n\n+(void)removeFilesfromDocumentsDirectory:(NSString*)filename\n{\n NSFileManager *fileManager = [NSFileManager defaultManager];\n NSString *filePath = [BASEPATH stringByAppendingPathComponent:filename];\n\n NSError *error;\n BOOL success = [fileManager removeItemAtPath:filePath error:&error]; //Remove or delete file from documents directory.\n\n if (success)\n {\n updateAlert= [[UIAlertView alloc] initWithTitle:@\"Congratulations:\" message:@\"File is updated successfully\" delegate:self cancelButtonTitle:@\"OK\" otherButtonTitles:nil];\n [updateAlert show];\n }\n else\n {\n NSLog(@\"Could not delete file -:%@ \",[error localizedDescription]);\n updateAlert= [[UIAlertView alloc] initWithTitle:@\"Try again:\" message:[error localizedDescription] delegate:self cancelButtonTitle:@\"OK\" otherButtonTitles:nil];\n [updateAlert show];\n }\n}\n\n+(NSString*)writeFiletoDocumentsDirectory:(NSString*)filename\n{\n NSString *foldDestination = BASEPATH;\n NSString *filePath = [foldDestination stringByAppendingPathComponent:filename];\n\n return filePath;\n}\n\n@end\n\n", "iOS 16 update: URL.applicationSupportDirectory, they've also added a static property for documentsDirectory\n" ]
[ 77, 41, 28, 7, 3, 0, 0, 0 ]
[]
[]
[ "cocoa", "ios", "objective_c", "swift" ]
stackoverflow_0008430777_cocoa_ios_objective_c_swift.txt
Q: I need to get the total amount by date and type of operation I currently have and array of objects with these parameters: const data = [ { amount: 700, ​​ date: "01/12/2022", ​​ description: "Test expense 3", ​​ id: "7qlck5WmRrHmpHcek3fb", ​​ time: "15:47:01", ​​ type: "expense", }, amount: 150, ​​ date: "03/12/2022", ​​ description: "Test income 3", ​​ id: "7qlck5WmRrHmpHcek3fb", ​​ time: "15:47:01", ​​ type: "income", ] This is an expense app that I am working on and I want to display some information in a chart. I have tried this function: const result = Object.values( data.reduce((acc, { type, amount, date }) => { if (!acc[date]) acc[date] = Object.assign({}, { type, amount, date }); else acc[date].amount += amount; console.log(acc[date].type); return acc; }, {}) ); But it's not returning exactly what I want and I am not sure what I need to change. Right now, this is what it returns: Object { type: "income", amount: 850, date: "01/12/2022" } ​ 1: Object { type: "income", amount: 550, date: "03/12/2022" } ​ 2: Object { type: "expense", amount: 150, date: "04/12/2022" } ​ length: 3 It does create new objects based on the date, but on top of that, I need to get the total amounts for expenses and income separately based on the type of operation. For eg: const returnedData = [ {type: 'income', amount: (sum of all incomes on that given date), date: date, type: 'expense', amount: (sum of all expenses on that given date), date: date Thanks all! A: To return the information in the format you want, you will need to make a few changes to your code. First, you can use the reduce() method to group the data by date and then calculate the total amount for each type of operation (income or expense) for each date. Then, you can use the map() method to transform the resulting object into an array of objects in the format you want. 
Here's an example of how you could do this: const data = [ { amount: 700, date: "01/12/2022", description: "Test expense 3", id: "7qlck5WmRrHmpHcek3fb", time: "15:47:01", type: "expense", }, { amount: 150, date: "03/12/2022", description: "Test income 3", id: "7qlck5WmRrHmpHcek3fb", time: "15:47:01", type: "income", } ]; const result = Object.values( data.reduce((acc, { type, amount, date }) => { // Check if the date already exists in the accumulator object if (!acc[date]) { // If not, create a new object for the date and initialize the // amount for each type of operation to 0 acc[date] = { date, income: 0, expense: 0, }; } // Add the amount to the appropriate property in the date object acc[date][type] += amount; return acc; }, {}) ); // Transform the resulting object into an array of objects const returnedData = result.map(({ date, income, expense }) => [ { type: 'income', amount: income, date }, { type: 'expense', amount: expense, date }, ]); console.log(returnedData); This code will group the data by date and calculate the total income and expense for each date. It will then return an array of objects in the format you want, with separate objects for income and expense for each date. The output will look like this: [ [ { type: 'income', amount: 0, date: '01/12/2022' }, { type: 'expense', amount: 700, date: '01/12/2022' } ], [ { type: 'income', amount: 150, date: '03/12/2022' }, { type: 'expense', amount: 0, date: '03/12/2022' } ] ]
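If a flat array is preferred over the nested pairs shown above, the last step can use flatMap instead of map; a sketch over the same two records:

```javascript
const data = [
  { amount: 700, date: "01/12/2022", type: "expense" },
  { amount: 150, date: "03/12/2022", type: "income" },
];

// Group by date, summing income and expense separately.
const byDate = Object.values(
  data.reduce((acc, { type, amount, date }) => {
    if (!acc[date]) acc[date] = { date, income: 0, expense: 0 };
    acc[date][type] += amount;
    return acc;
  }, {})
);

// Flatten into one object per (date, type) pair.
const flat = byDate.flatMap(({ date, income, expense }) => [
  { type: "income", amount: income, date },
  { type: "expense", amount: expense, date },
]);

console.log(flat);
```

This yields four objects in a single flat array (one income and one expense row per date), which is usually easier to feed straight into a chart library.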
I need to get the total amount by date and type of operation
I currently have and array of objects with these parameters: const data = [ { amount: 700, ​​ date: "01/12/2022", ​​ description: "Test expense 3", ​​ id: "7qlck5WmRrHmpHcek3fb", ​​ time: "15:47:01", ​​ type: "expense", }, amount: 150, ​​ date: "03/12/2022", ​​ description: "Test income 3", ​​ id: "7qlck5WmRrHmpHcek3fb", ​​ time: "15:47:01", ​​ type: "income", ] This is an expense app that I am working on and I want to display some information in a chart. I have tried this function: const result = Object.values( data.reduce((acc, { type, amount, date }) => { if (!acc[date]) acc[date] = Object.assign({}, { type, amount, date }); else acc[date].amount += amount; console.log(acc[date].type); return acc; }, {}) ); But it's not returning exactly what I want and I am not sure what I need to change. Right now, this is what it returns: Object { type: "income", amount: 850, date: "01/12/2022" } ​ 1: Object { type: "income", amount: 550, date: "03/12/2022" } ​ 2: Object { type: "expense", amount: 150, date: "04/12/2022" } ​ length: 3 It does create new objects based on the date, but on top of that, I need to get the total amounts for expenses and income separately based on the type of operation. For eg: const returnedData = [ {type: 'income', amount: (sum of all incomes on that given date), date: date, type: 'expense', amount: (sum of all expenses on that given date), date: date Thanks all!
[ "To return the information in the format you want, you will need to make a few changes to your code. First, you can use the reduce() method to group the data by date and then calculate the total amount for each type of operation (income or expense) for each date. Then, you can use the map() method to transform the resulting object into an array of objects in the format you want.\nHere's an example of how you could do this:\nconst data = [\n {\n amount: 700,\n date: \"01/12/2022\",\n description: \"Test expense 3\",\n id: \"7qlck5WmRrHmpHcek3fb\",\n time: \"15:47:01\",\n type: \"expense\",\n },\n {\n amount: 150,\n date: \"03/12/2022\",\n description: \"Test income 3\",\n id: \"7qlck5WmRrHmpHcek3fb\",\n time: \"15:47:01\",\n type: \"income\",\n }\n];\n\nconst result = Object.values(\n data.reduce((acc, { type, amount, date }) => {\n // Check if the date already exists in the accumulator object\n if (!acc[date]) {\n // If not, create a new object for the date and initialize the\n // amount for each type of operation to 0\n acc[date] = {\n date,\n income: 0,\n expense: 0,\n };\n }\n\n // Add the amount to the appropriate property in the date object\n acc[date][type] += amount;\n\n return acc;\n }, {})\n);\n\n// Transform the resulting object into an array of objects\nconst returnedData = result.map(({ date, income, expense }) => [\n { type: 'income', amount: income, date },\n { type: 'expense', amount: expense, date },\n]);\n\nconsole.log(returnedData);\n\nThis code will group the data by date and calculate the total income and expense for each date. It will then return an array of objects in the format you want, with separate objects for income and expense for each date.\nThe output will look like this:\n[\n [\n { type: 'income', amount: 0, date: '01/12/2022' },\n { type: 'expense', amount: 700, date: '01/12/2022' }\n ],\n [\n { type: 'income', amount: 150, date: '03/12/2022' },\n { type: 'expense', amount: 0, date: '03/12/2022' }\n ]\n]\n\n" ]
[ 0 ]
[]
[]
[ "javascript", "reactjs" ]
stackoverflow_0074677838_javascript_reactjs.txt
Q: How to change state of FlatList item after rendered without refreshing other item on React Native There is a problem that all item refresh when selectedIndex changed because the selectedIndex is dependency of renderItem. I want to manage selectedIndex to globally cause just one or none item's detail have to enabled. Is there any solution to refresh just one item that selectedIndex pointed? Here is my code interface ItemType { title: string; detail: string; } const FlatListTestScreen = () => { const [selectedIndex, setSelectedIndex] = useState<number | null>(null); const data: ItemType[] = [ { title: 'title1', detail: 'detail1' }, { title: 'title2', detail: 'detail2' }, { title: 'title3', detail: 'detail3' }, { title: 'title4', detail: 'detail4' }, { title: 'title5', detail: 'detail5' }, ]; const renderItem = useCallback<ListRenderItem<ItemType>>( ({ item, index }) => { console.log('render', index, selectedIndex === index); return ( <Pressable onPress={() => { console.log('press', index); setSelectedIndex(index === selectedIndex ? null : index); }}> <Text>{item.title}</Text> {selectedIndex === index && <Text>{item.detail}</Text>} </Pressable> ); }, [selectedIndex], ); return <FlatList data={data} renderItem={renderItem} />; }; LOG render 0 false LOG render 1 false LOG render 2 false LOG render 3 false LOG render 4 false LOG press 1 LOG render 0 false <- unnecessary render LOG render 1 true LOG render 2 false <- unnecessary render LOG render 3 false <- unnecessary render LOG render 4 false <- unnecessary render A: Check my answer about FlatList optimization here: react native flatlist with and without pagination In your case, you need to provide keyExtractor Also a very useful article from the official documentation: https://reactnative.dev/docs/optimizing-flatlist-configuration
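Beyond keyExtractor, the usual fix for this pattern is to wrap the row in React.memo with a custom comparator, so a row only re-renders when its own selection state flips. The comparator is plain logic and can be sketched (and tested) without React; the names below are illustrative, not from the question:

```javascript
// Props each row would receive: the item itself plus a precomputed
// `selected` boolean (instead of the raw selectedIndex).
function rowPropsEqual(prevProps, nextProps) {
  // Returning true tells React.memo the props are equal, so the row
  // is NOT re-rendered; it re-renders only when item identity or the
  // selection flag changed.
  return (
    prevProps.item === nextProps.item &&
    prevProps.selected === nextProps.selected
  );
}

// In the component this would be used as:
//   const Row = React.memo(RowImpl, rowPropsEqual);
// with the parent passing selected={index === selectedIndex}. When
// selectedIndex moves from 1 to 2, only rows 1 and 2 see a changed
// `selected` prop; every other row's comparator returns true and
// React skips its re-render.

const item = { title: "title1", detail: "detail1" };
console.log(rowPropsEqual({ item, selected: false }, { item, selected: false })); // true
console.log(rowPropsEqual({ item, selected: false }, { item, selected: true }));  // false
```

The key change from the question's code is passing a per-row boolean down instead of comparing against selectedIndex inside renderItem, so the dependency on selectedIndex no longer invalidates every row.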
How to change state of FlatList item after rendered without refreshing other item on React Native
There is a problem that all item refresh when selectedIndex changed because the selectedIndex is dependency of renderItem. I want to manage selectedIndex to globally cause just one or none item's detail have to enabled. Is there any solution to refresh just one item that selectedIndex pointed? Here is my code interface ItemType { title: string; detail: string; } const FlatListTestScreen = () => { const [selectedIndex, setSelectedIndex] = useState<number | null>(null); const data: ItemType[] = [ { title: 'title1', detail: 'detail1' }, { title: 'title2', detail: 'detail2' }, { title: 'title3', detail: 'detail3' }, { title: 'title4', detail: 'detail4' }, { title: 'title5', detail: 'detail5' }, ]; const renderItem = useCallback<ListRenderItem<ItemType>>( ({ item, index }) => { console.log('render', index, selectedIndex === index); return ( <Pressable onPress={() => { console.log('press', index); setSelectedIndex(index === selectedIndex ? null : index); }}> <Text>{item.title}</Text> {selectedIndex === index && <Text>{item.detail}</Text>} </Pressable> ); }, [selectedIndex], ); return <FlatList data={data} renderItem={renderItem} />; }; LOG render 0 false LOG render 1 false LOG render 2 false LOG render 3 false LOG render 4 false LOG press 1 LOG render 0 false <- unnecessary render LOG render 1 true LOG render 2 false <- unnecessary render LOG render 3 false <- unnecessary render LOG render 4 false <- unnecessary render
[ "Check my answer about FlatList optimization here: react native flatlist with and without pagination\nIn your case, you need to provide keyExtractor\nAlso a very useful article from the official documentation:\nhttps://reactnative.dev/docs/optimizing-flatlist-configuration\n" ]
[ 0 ]
[]
[]
[ "react_hooks", "react_native", "react_native_flatlist" ]
stackoverflow_0074664253_react_hooks_react_native_react_native_flatlist.txt
Q: Show progress while sleep in bash

I wrote a simple script which must show progress while the user is waiting. But I get an infinite loop and it seems sleep is not working. What is wrong in this code?

#!/bin/bash
spinner=(
"Working "
"Working. "
"Working.. "
"Working... "
"Working...."
)

while sleep 10
do
  for item in ${spinner[*]}
  do
    echo -en "\r$item"
    sleep .1
    echo -en "\r \r"
  done
done

A: One idea:

using the bash (system) variable SECONDS to measure our 10 seconds
using a tput code for overwriting a line
eliminating the spinner[] array (since the only difference in values is the number of trailing periods)

EraseToEOL=$(tput el)
max=$((SECONDS + 10)) # add 10 seconds to current count

while [ $SECONDS -le ${max} ]
do
  msg='Waiting'
  for i in {1..5}
  do
    printf "%s" "${msg}"
    msg='.'
    sleep .1
  done
  printf "\r${EraseToEOL}"
done
printf "\n"

A small change to OP's current code using the max/SECONDS approach:

spinner=(
"Working "
"Working. "
"Working.. "
"Working... "
"Working...."
)

max=$((SECONDS + 10))

while [[ ${SECONDS} -le ${max} ]]
do
  for item in ${spinner[*]}
  do
    echo -en "\r$item"
    sleep .1
    echo -en "\r \r"
  done
done

A: Use the in/decrement variable i to put out the array...

#!/bin/bash

countdown(){
spinner=(
"Working "
"Working. "
"Working.. "
"Working... "
"Working...."
)

i=4

if [ ${i} -lt 5 ]
then
    while true
    do
        for i in ${i}
        do
            printf "%s \t" ${spinner[i]}
            sleep .1
            printf "\r"
            sleep .1
            if [ ${i} -eq 0 ]
            then
                # Here you can clean up or do what to do at zero count
                printf "\n"
                unset i
                unset spinner
                return 0 # Can be used in ${?} from parent bash
            else
                i=$((${i}-1))
            fi
        done
    done
return 1 # Should never be executed
fi
}

# A funny cd ;-)
cd(){
countdown && printf "%s\n" "DONE changing to "${1} # Gives out if return is 0 (${?})
unset cd
cd ${1}
}
#
cd ~

A: My method of showing progress while sleeping in bash:

sleep 5 | pv -t

It probably can't get any simpler than that :)
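For clarity on the loop behaviour in the question: `sleep` is used as the while condition, and a completed `sleep` always exits with status 0, so `while sleep 10` never terminates on its own - and each pass blocks for 10 seconds before the spinner is drawn at all. A minimal check (with a shortened delay) of the exit status driving that condition:

```shell
# `sleep` exits with status 0 whenever it completes, so the condition
# `while sleep 10` is always true: the loop is infinite, and each pass
# first blocks for 10 seconds before the spinner cycle starts.
sleep 0.1
status=$?
echo "sleep exit status: $status"
```

The SECONDS-based answers bound the loop by elapsed time instead of relying on sleep's exit status.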
Show progress while sleep in bash
I wrote a simple script which must show progress while user waiting. But I get infinitive loop and seems sleep not working. What wrong in this code? #!/bin/bash spinner=( "Working " "Working. " "Working.. " "Working... " "Working...." ) while sleep 10 do for item in ${spinner[*]} do echo -en "\r$item" sleep .1 echo -en "\r \r" done done
[ "One idea:\n\nusing the bash (system) variable SECONDS to measure our 10 seconds\nusing a tput code for ovewriting a line\neliminating the spinner[] array (since the only difference in values is the number of trailing periods)\n\n\nEraseToEOL=$(tput el)\nmax=$((SECONDS + 10)) # add 10 seconds to current count\n\nwhile [ $SECONDS -le ${max} ]\ndo\n msg='Waiting'\n for i in {1..5}\n do\n printf \"%s\" \"${msg}\"\n msg='.'\n sleep .1\n done\n printf \"\\r${EraseToEOL}\"\ndone\nprintf \"\\n\"\n\n\nA small change to OP's current code using the max/SECONDS approach:\nspinner=(\n\"Working \"\n\"Working. \"\n\"Working.. \"\n\"Working... \"\n\"Working....\"\n)\n\nmax=$((SECONDS + 10))\n\nwhile [[ ${SECONDS} -le ${max} ]]\ndo\n for item in ${spinner[*]}\n do\n echo -en \"\\r$item\"\n sleep .1\n echo -en \"\\r \\r\"\n done\ndone\n\n", "Use the in/decrement variable i to put out the array...\n#!/bin/bash\n\ncountdown(){\nspinner=(\n\"Working \"\n\"Working. \"\n\"Working.. \"\n\"Working... \"\n\"Working....\"\n)\n\ni=4\n\nif [ ${i} -lt 5 ]\nthen\n while true\n do\n for i in ${i}\n do\n printf \"%s \\t\" ${spinner[i]}\n sleep .1\n printf \"\\r\"\n sleep .1\n if [ ${i} -eq 0 ]\n then\n # Here you can clean up or do what to do at zero count\n printf \"\\n\"\n unset i\n unset spinner\n return 0 # Can be used in ${?} from parent bash\n else\n i=$((${i}-1))\n fi\n done\n done\nreturn 1 # Should never be executed\nfi\n}\n\n# A funny cd ;-)\ncd(){\ncountdown && printf \"%s\\n\" \"DONE changing to \"${1} # Gives out if return is 0 (${?})\nunset cd\ncd ${1}\n}\n#\ncd ~\n\n", "My method of showing progress while sleeping in bash:\nsleep 5 | pv -t\n\nIt probably can't get any simpler than that :)\n" ]
[ 3, 0, 0 ]
[ "Check out this spiner\n\nOr from this project\n\n" ]
[ -1 ]
[ "bash", "linux" ]
stackoverflow_0069272871_bash_linux.txt
Q: Display Woocommerce Metadata value on Frontend

I installed a plugin that displays the metadata of every post (normal posts and products). I would like to be able to display the value of a metadata field on the frontend - I am designing a single post (single product) page - and there's a specific metadata field that I would like to display on that page. Any guidance please? Here's a screenshot of the metadata fields that I have: Metadata fields in the dashboard. If you can guide me to display any metadata value, I will be able to do it and choose the desired fields that I want to display on the frontend. I appreciate your help so much! I tried different plugins, couldn't find a solution.

A: You must do the following in the file single.php or single-product.php:

$value = get_post_meta( get_the_ID(), "key", true );
echo "<p>$value</p>";
Display Woocommerce Metadata value on Frontend
I installed a plugin that displays the metadata of every post (normal posts and products) I would like to be able to display the value of a metadata on the frontend - I am designing a single post (single product) page - and there's a specific metadata field that I would like to display on that page. Any Guidance please? here's a screenshot of the metadata fields that I have: Metadata fields in the dashboard if you can guide me to display any metadata value, I will be able to do it and choose the desired fields that I want to display on the frontend. I appreciate your help so much! I tried different plugins, couldn't find a solution
[ "You must do the following in the file single.php or single-product.php:\n $value = get_post_meta( get_the_ID(), \"key\", true );\n echo \"<p>$value</p>\";\n\n" ]
[ 0 ]
[]
[]
[ "metadata", "php", "woocommerce", "wordpress" ]
stackoverflow_0074675504_metadata_php_woocommerce_wordpress.txt
Q: Quill remove colour from pasted text

I am using React Quill and want to remove colour from pasted text, so that Quill shows text in black only. I have tried the Clipboard module but am not able to understand how to remove colour. Has anyone achieved this? Thanks, MSK

A: Quill has a config option with a list of allowed formats.
Demo that whitelists only link and size:
https://codepen.io/anon/pen/xBWved

var quill = new Quill('#editor-container', {
  modules: {
    toolbar: [
      [{ header: [1, 2, false] }],
      ['bold', 'italic', 'underline'],
      ['image', 'code-block']
    ]
  },
  formats: ['link', 'size'],
  placeholder: 'Compose an epic...',
  theme: 'snow' // or 'bubble'
});

List of all supported formats:
https://quilljs.com/docs/formats/

A: Yes, you can register the color module and whitelist:

const ColorAttributor = Quill.import("attributors/style/color");
ColorAttributor.whitelist = [];
Quill.register(ColorAttributor);
Quill remove colour from pasted text
I am using React Quill and want to remove colour from pasted text, so that quill should show text in black colour only. I have tried Clipboard module but not able to understand how to remove colour. Has anyone achieved this? Thanks, MSK
[ "Quill has config option with a list of allowed formats.\nDemo that whitelist only link and size:\nhttps://codepen.io/anon/pen/xBWved\nvar quill = new Quill('#editor-container', {\n modules: {\n toolbar: [\n [{ header: [1, 2, false] }],\n ['bold', 'italic', 'underline'],\n ['image', 'code-block']\n ]\n },\n formats: ['link', 'size'],\n placeholder: 'Compose an epic...',\n theme: 'snow' // or 'bubble'\n});\n\nList of all supported formats:\nhttps://quilljs.com/docs/formats/\n", "Yes, you can register the color module and whitelist:\nconst ColorAttributor = Quill.import(\"attributors/style/color\");\nColorAttributor.whitelist = [];\nQuill.register(ColorAttributor);\n\n" ]
[ 2, 0 ]
[]
[]
[ "quill", "reactjs" ]
stackoverflow_0054500014_quill_reactjs.txt
Q: Input event listener (onChange event listener): how it works, and an error explanation?

import React, { useState } from "react";
import "./App.css";

function App() {
  const [input, setInput] = useState();

  function fun1(e) {
    // console.log(e);
    // console.log(e.target);
    // console.log(e.target.value);
    setInput(e.target.value);
  }

  return (
    <div>
      <input type="text" onChange={fun1} />
      <h1>{input}</h1>
    </div>
  );
}

export default App;

In the input field, I tried to write a demo example and it is shown on screen dynamically by changing state in a function component using the React Hook useState. But I am unable to understand how it actually works. For example, when I used the object e in fun1 - i.e. fun1(e), console.log(e.target), console.log(e.target.value) - what does it actually mean? Also, when instead of using e I used the this keyword directly in setInput(this.target.value), it showed me an error of undefined. Why is it not working with this?

A: Add a value attribute in the input:

<input type="text" value={input} onChange={fun1} />
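On the `this` part of the question, here is a hedged sketch (plain Node.js, no React) of why `setInput(this.target.value)` fails: `fun1` is invoked as an ordinary function, not as a method of an object, so `this` is not the event. In strict mode (which React/ES-module code runs in) it is `undefined`, and `undefined.target` throws. The event parameter `e` is what actually carries the input element (`e.target`) and its current text (`e.target.value`). The `fakeEvent` object below is a made-up stand-in for React's synthetic event:

```javascript
// A stand-in for the synthetic event React passes to onChange handlers.
const fakeEvent = { target: { value: "demo" } };

function fun1(e) {
  "use strict"; // React/ESM handler code runs in strict mode
  // Called as a plain function, `this` is undefined here,
  // so `this.target` would throw a TypeError.
  return e.target.value; // the event parameter carries the input's text
}

console.log(fun1(fakeEvent)); // "demo"
```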
Input Event Listener (On Change event Listener ) its working and Error Explanation?
import React, { useState } from "react"; import "./App.css"; function App() { const [input, setInput] = useState(); function fun1(e) { // console.log(e); // console.log(e.target); // console.log(e.target.value); setInput(e.target.value); } return ( <div> <input type="text" onChange={fun1} /> <h1>{input}</h1> </div> ); } export default App; enter image description here On input field , I tried to write demo example and it is showed on screen dynamically by changing state in functional based component using React Hook (useState). But I am unable to get its working that how its actually working like when I used object e in fun1 i.e. fun1(e) console.log(e.target) console.log(e.target.value) What it actually means ? I am unable to get When instead of using e , I used this keyword directly in setInput(this.target.value) , it showed me error of undefined . Why it is not working with this ?
[ "Add value attribute in the input\n <input type=\"text\" value={input} onChange={fun1} />\n\n" ]
[ 0 ]
[]
[]
[ "javascript", "object", "react_hooks", "reactjs", "this" ]
stackoverflow_0074677944_javascript_object_react_hooks_reactjs_this.txt
Q: Python 3.11 base64 error "a bytes-like object is required, not 'list'"

I'm tryna make a very basic password manager kinda program that's about as basic as it gets, and am using base64 to encode the passwords that are getting saved, but using

encode = base64.b64encode(read_output).encode("utf-8")
print("Encrypted key: ",encode)
decode = base64.b64decode(encode).decode("utf-8")
print(decode)

gives me an error:

File "c:\Users\Someone\OneDrive\Documents\VS Codium\pswrdmgr.py", line 152, in <module>
    encode = base64.b64encode(read_output).encode("utf-8")
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Someone\AppData\Local\Programs\Python\Python311\Lib\base64.py", line 58, in b64encode
    encoded = binascii.b2a_base64(s, newline=False)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: a bytes-like object is required, not 'list'

Any suggestions? Any help is much appreciated! I tried using other containers like a dictionary and tuples, thinking they might be the issue that's troubling base64, but the problem remains.

A: You should first encode, then pass it to the function (assuming read_output is of type list; it will also work with all of the basic-type objects):

encode = base64.b64encode(str(read_output).encode("utf-8"))
print("Encrypted key: ",encode)
decode = base64.b64decode(encode).decode("utf-8")
print(decode)
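One caveat with the suggested str()-based approach, sketched below: it makes b64encode accept the data, but on decode you get back the string representation of the list, not the list itself. If a round trip is needed, serializing with json before base64-encoding is one option. The name read_output mirrors the question; its sample contents here are made up:

```python
import base64
import json

read_output = ["hunter2", "correct-horse"]  # made-up sample passwords

# Serialize to a JSON string, then to bytes, before base64-encoding.
encoded = base64.b64encode(json.dumps(read_output).encode("utf-8"))

# Reverse the steps to recover the original list (not just its repr).
decoded = json.loads(base64.b64decode(encoded).decode("utf-8"))

print(decoded)  # ['hunter2', 'correct-horse']
```

Note that base64 is an encoding, not encryption - anyone can reverse it, so it does not actually protect stored passwords.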
Python 3.11 base64 error " a bytes-like object is required, not 'list' "
Im tryna make a very basic password manager kinda program that's about as basic as it gets and am using base64 to encode the passwords that are getting saved , but using ` encode = base64.b64encode(read_output).encode("utf-8") print("Encrypted key: ",encode) decode = base64.b64decode(encode).decode("utf-8") print(decode) gives me an error ; File "c:\Users\Someone\OneDrive\Documents\VS Codium\pswrdmgr.py", line 152, in <module> encode = base64.b64encode(read_output).encode("utf-8") ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Someone\AppData\Local\Programs\Python\Python311\Lib\base64.py", line 58, in b64encode encoded = binascii.b2a_base64(s, newline=False) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: a bytes-like object is required, not 'list' ` Any suggestions ? Any help is much appreciated ! I tried using other containers like a dictionary and tuples thinking they might be the issue that's troubling base64 but the problem remains ..
[ "You should first encode, then pass it to the function:\n(assuming read_output is of type list. It will also work with all of the basic types objects)\nencode = base64.b64encode(str(read_output).encode(\"utf-8\"))\nprint(\"Encrypted key: \",encode)\ndecode = base64.b64decode(encode).decode(\"utf-8\")\nprint(decode) ```\n\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074677941_python_python_3.x.txt
Q: How to view Gekko variables/parameters for debug purposes?

I have a fitting task where I am using GEKKO. There are a lot of variables, arrays of variables, some variables that must contain arrays, and so on. I didn't have success with the fitting, so I need to do step-by-step verification of all the parameters that I am providing to GEKKO and all the calculated intermediate values. Is there a way to print out the values of each variable for debugging purposes? Or to view the values of the variables in line-by-line execution? For example, I have an array that is saved as a variable ro:

phi = model.Intermediate( c * ro) # phase shift

where c is some constant defined somewhere above in the model definition. How can I view the values inside phi that will be used in the next steps? I need to view/save all the values of all variables/constants/intermediates used during the model creation - before trying to solve. Is it possible?

A: Turn up the DIAGLEVEL to 2 or higher to produce diagnostic files in the run directory m.path.
from gekko import GEKKO
m = GEKKO(remote=False)
c = 2
x = m.Param(3,name='x')
ro = m.Var(value=4,lb=0,ub=10,name='ro')
y = m.Var()
phi = m.Intermediate(c*ro,name='phi')
m.Equation(y==phi**2+x)
m.Maximize(y)
m.options.SOLVER = 1
m.options.DIAGLEVEL=2
m.open_folder()
m.solve()

Here is a summary of the diagnostic files that are produced:

Variables, Equations, Jacobian, Lagrange Multipliers, Objective

apm_eqn.txt
apm_jac.txt
apm_jac_fv.txt
apm_lam.txt
apm_lbt.txt
apm_obj.txt
apm_obj_grad.txt
apm_var.txt

Solver Output and Options

APOPT.out
apopt_current_options.opt

Model File

gk_model0.apm

Data File

gk_model0.csv

Options Files

gk_model0.dbs
options.json

Specification File for FV, MV, SV, CV

gk_model0.info

Inputs to the Model

dbs_read.rpt
input_defaults.dbs
input_gk_model0.dbs
input_measurements.dbs
input_overrides.dbs
measurements.dbs

Results

rto.t0
results.csv
results.json
gk_model0_r_2022y12m04d08h12m28.509s.t0

Initialization Steps Before Solve

rto_1.t0
rto_2.t0
rto_3.t0
rto_3_eqn.txt
rto_3_eqn_var.txt
rto_3_var.t0

Reports After Solve

rto_4.t0
rto_4_eqn.txt
rto_4_eqn_var.txt
rto_4_var.t0

The files of interest for you are likely the rto* initialization files. The name changes based on the IMODE that you run. It is mpu* for your application for a Model Parameter Update with IMODE=2.
How to view Gekko variables/parameters for debug purposes?
I have a fitting task where I am using GEKKO. There are a lot of variables, arrays of variables, some variables that must contain arrays, and so on. I didn't have success with the fitting, so I need to do step-by-step verification of all parameters that I am providing for GEKKO and all the calculated intermediate values. Is there a way to print out the values of each variable for debugging purposes? Or to view the values of the variables in line-by-line execution? for example, I have an array that is saved like a variable ro: phi = model.Intermediate( c * ro) # phase shift where c is some constant defined somewhere above in the model definition. How can I view the values inside phi that will be used in the next steps? I need to view/save all the values of all variables/constants/intermediates used during the model creation - before a try to solve. Is it possible?
[ "Turn up the DIAGLEVEL to 2 or higher to produce diagnostic files in the run directory m.path.\nfrom gekko import GEKKO\nm = GEKKO(remote=False)\nc = 2\nx = m.Param(3,name='x')\nro = m.Var(value=4,lb=0,ub=10,name='ro')\ny = m.Var()\nphi = m.Intermediate(c*ro,name='phi')\nm.Equation(y==phi**2+x)\nm.Maximize(y)\nm.options.SOLVER = 1\nm.options.DIAGLEVEL=2\nm.open_folder()\nm.solve()\n\nHere is a summary of the diagnostic files that are produced:\nVariables, Equations, Jacobian, Lagrange Multipliers, Objective\n\napm_eqn.txt\napm_jac.txt\napm_jac_fv.txt\napm_lam.txt\napm_lbt.txt\napm_obj.txt\napm_obj_grad.txt\napm_var.txt\n\nSolver Output and Options\n\nAPOPT.out\napopt_current_options.opt\n\nModel File\n\ngk_model0.apm\n\nData File\n\ngk_model0.csv\n\nOptions Files\n\ngk_model0.dbs\noptions.json\n\nSpecification File for FV, MV, SV, CV\n\ngk_model0.info\n\nInputs to the Model\n\ndbs_read.rpt\ninput_defaults.dbs\ninput_gk_model0.dbs\ninput_measurements.dbs\ninput_overrides.dbs\nmeasurements.dbs\n\nResults\n\nrto.t0\nresults.csv\nresults.json\ngk_model0_r_2022y12m04d08h12m28.509s.t0\n\nInitialization Steps Before Solve\n\nrto_1.t0\nrto_2.t0\nrto_3.t0\nrto_3_eqn.txt\nrto_3_eqn_var.txt\nrto_3_var.t0\n\nReports After Solve\n\nrto_4.t0\nrto_4_eqn.txt\nrto_4_eqn_var.txt\nrto_4_var.t0\n\nThe files of interest for you are likely the rto* initialization files. The name changes based on the IMODE that you run. It is mpu* for your application for a Model Parameter Update with IMODE=2.\n" ]
[ 1 ]
[]
[]
[ "gekko", "optimization", "python" ]
stackoverflow_0074677526_gekko_optimization_python.txt
Q: Edit running script and replace an array I'm writing a script that should update itself, but I'm stuck. Basically I have to replace a variable containing an RGB color array (which value changes during this script) and overwrite this variable in the .jsx file. For clarity I give an example: var RGBColor = [0, 0, 0]; //Black RGB color var RGBColor2 = RGBColor; // The script runs and change RGBColor2 value to [0.2, 0.55, 0.2] var UpdateFile = function () { var JSX = File ($.fileName); JSX.open ("e"); JSX.write ((JSX.read ()).replace (RGBColor, RGBColor2)); JSX.close (); } UpdateFile (); This is what I should theoretically do, but I can't replace the variable in any way (I tried everything, even with the method "open ("r")", "read", "close", "open ("w")", "write"). Does anyone know how to make this script work or know a better way to write it?. UPDATE The final script will be a scriptui panel with six sliders, three for the first color (R, G, B), and three for the second color. When the "Apply" button is pressed, the script will have to replace the rgb variables inside the script. After messing with Ghoul Fool's code, I decided to replace the variables and not the arrays because it seemed more difficult. However I cannot overwrite the variables. 
Here is the code: var Red1 = 0; var Green1 = 0; var Blue1 = 0; var Red2 = 1; var Green2 = 1; var Blue2 = 1; var FirstColor = [Red1, Green1, Blue1]; var SecondColor = [Red2, Green2, Blue2]; var MainPanel = new Window ("dialog", "Panel1"); var Text1 = MainPanel.add ("statictext"); Text1.text = "First color"; var RSlider1 = MainPanel.add ("slider"); RSlider1.minvalue = 0; RSlider1.maxvalue = 255; RSlider1.value = Math.round (FirstColor[0] * 255); var RNumber1 = MainPanel.add ("statictext", undefined, RSlider1.value); RNumber1.preferredSize.width = 25; RNumber1.graphics.foregroundColor = RNumber1.graphics.newPen (RNumber1.graphics.PenType.SOLID_COLOR, [1, 0, 0], 1); RNumber1.graphics.disabledForegroundColor = RNumber1.graphics.foregroundColor; RNumber1.graphics.font = ScriptUI.newFont (RNumber1.graphics.font.name, "Bold", RNumber1.graphics.font.size); var GSlider1 = MainPanel.add ("slider"); GSlider1.minvalue = 0; GSlider1.maxvalue = 255; GSlider1.value = Math.round (FirstColor[1] * 255); var GNumber1 = MainPanel.add ("statictext", undefined, GSlider1.value); GNumber1.preferredSize.width = 25; GNumber1.graphics.foregroundColor = GNumber1.graphics.newPen (GNumber1.graphics.PenType.SOLID_COLOR, [0, 1, 0], 1); GNumber1.graphics.disabledForegroundColor = GNumber1.graphics.foregroundColor; GNumber1.graphics.font = ScriptUI.newFont (GNumber1.graphics.font.name, "Bold", GNumber1.graphics.font.size); var BSlider1 = MainPanel.add ("slider"); BSlider1.minvalue = 0; BSlider1.maxvalue = 255; BSlider1.value = Math.round (FirstColor[2] * 255); var BNumber1 = MainPanel.add ("statictext", undefined, BSlider1.value); BNumber1.preferredSize.width = 25; BNumber1.graphics.foregroundColor = BNumber1.graphics.newPen (BNumber1.graphics.PenType.SOLID_COLOR, [0, 0, 1], 1); BNumber1.graphics.disabledForegroundColor = BNumber1.graphics.foregroundColor; BNumber1.graphics.font = ScriptUI.newFont (BNumber1.graphics.font.name, "Bold", BNumber1.graphics.font.size); RSlider1.onChanging = 
GSlider1.onChanging = BSlider1.onChanging = function () { RNumber1.text = Math.round (RSlider1.value); GNumber1.text = Math.round (GSlider1.value); BNumber1.text = Math.round (BSlider1.value); Red1 = Math.floor ((Math.round (RSlider1.value) / 255) * 100) / 100; Green1 = Math.floor ((Math.round (GSlider1.value) / 255) * 100) / 100; Blue1 = Math.floor ((Math.round (BSlider1.value) / 255) * 100) / 100; } var Text2 = MainPanel.add ("statictext"); Text2.text = "Second color"; var RSlider2 = MainPanel.add ("slider"); RSlider2.minvalue = 0; RSlider2.maxvalue = 255; RSlider2.value = Math.round (SecondColor[0] * 255); var RNumber2 = MainPanel.add ("statictext", undefined, RSlider2.value); RNumber2.preferredSize.width = 25; RNumber2.graphics.foregroundColor = RNumber2.graphics.newPen (RNumber2.graphics.PenType.SOLID_COLOR, [1, 0, 0], 1); RNumber2.graphics.disabledForegroundColor = RNumber2.graphics.foregroundColor; RNumber2.graphics.font = ScriptUI.newFont (RNumber2.graphics.font.name, "Bold", RNumber2.graphics.font.size); var GSlider2 = MainPanel.add ("slider"); GSlider2.minvalue = 0; GSlider2.maxvalue = 255; GSlider2.value = Math.round (SecondColor[1] * 255); var GNumber2 = MainPanel.add ("statictext", undefined, GSlider2.value); GNumber2.preferredSize.width = 25; GNumber2.graphics.foregroundColor = GNumber2.graphics.newPen (GNumber2.graphics.PenType.SOLID_COLOR, [0, 1, 0], 1); GNumber2.graphics.disabledForegroundColor = GNumber2.graphics.foregroundColor; GNumber2.graphics.font = ScriptUI.newFont (GNumber2.graphics.font.name, "Bold", GNumber2.graphics.font.size); var BSlider2 = MainPanel.add ("slider"); BSlider2.minvalue = 0; BSlider2.maxvalue = 255; BSlider2.value = Math.round (SecondColor[2] * 255); var BNumber2 = MainPanel.add ("statictext", undefined, BSlider2.value); BNumber2.preferredSize.width = 25; BNumber2.graphics.foregroundColor = BNumber2.graphics.newPen (BNumber2.graphics.PenType.SOLID_COLOR, [0, 0, 1], 1); BNumber2.graphics.disabledForegroundColor = 
BNumber2.graphics.foregroundColor; BNumber2.graphics.font = ScriptUI.newFont (BNumber2.graphics.font.name, "Bold", BNumber2.graphics.font.size); RSlider2.onChanging = GSlider2.onChanging = BSlider2.onChanging = function () { RNumber2.text = Math.round (RSlider2.value); GNumber2.text = Math.round (GSlider2.value); BNumber2.text = Math.round (BSlider2.value); Red2 = Math.floor ((Math.round (RSlider2.value) / 255) * 100) / 100; Green2 = Math.floor ((Math.round (GSlider2.value) / 255) * 100) / 100; Blue2 = Math.floor ((Math.round (BSlider2.value) / 255) * 100) / 100; } var Apply = MainPanel.add ("button", undefined, "Apply changes"); Apply.onClick = function () { var JSX = File ($.fileName); JSX.open ("r"); var JSXXX = JSX.read (); JSX.close (); JSX.open ("w"); // This part is not clear to me. What should I replace the colors with? JSXXX = JSXXX.replace (Red1, FirstColor [0]); JSXXX = JSXXX.replace (Green1, FirstColor [1]); JSXXX = JSXXX.replace (Blue1, FirstColor [2]); JSXXX = JSXXX.replace (Red2, SecondColor [0]); JSXXX = JSXXX.replace (Green2, SecondColor [1]); JSXXX = JSXXX.replace (Blue2, SecondColor [2]); JSX.write(JSXXX); JSX.close (); MainPanel.close (); } alert ("Color 1 : " + Red1 + " - " + Green1 + " - " + Blue1); alert ("Color 2 : " + Red2 + " - " + Green2 + " - " + Blue2); MainPanel.show (); The dialog will be included in the final script and will be used to change the color of the dialog itself. A: Not quite sure what you are after (see notes above). However, this will load in a second jsx file and change its value. Just not the same one as it's running. Your script fails because you don't update the RGB value var RGBColor2 = RGBColor; Secondly, and I'm nor sure why, but the string replace wasn't working. A regular expression will replace the text however. 
var RGBColor = [0, 0, 0]; //Black RGB color var RGBColor2 = [0.2, 0.55, 0.2]; // The script runs and change RGBColor2 value to [0.2, 0.55, 0.2] var UpdateFile = function () { var myFile = "C:\\temp\\my_temp.jsx" var JSX = File (myFile); JSX.open ("e"); //replaces contents var r = JSX.read (); var regEx = new RegExp(/\[0, 0, 0\]/gmi); r = r.replace(regEx, RGBColor2); JSX.write(r); JSX.close (); } UpdateFile (); A: As Ghoul Fool suggested to me, I tried to post a question on community.adobe.com, but to date the "Actions and Scripting" section seems not very popular. In the end I was unable to do what I wanted, so I opted for the creation of an external XML file that the script can read and write. Here is the code: var XMLFile = File (Folder.desktop + "/config.xml"); if (!XMLFile.exists) { Create (); } XMLFile.open ("r"); var XMLObj = XML (XMLFile.read ()); XMLFile.close (); var Red1 = XMLObj["firstred"]; var Green1 = XMLObj["firstgreen"]; var Blue1 = XMLObj["firstrblue"]; var Red2 = XMLObj["secondred"]; var Green2 = XMLObj["secondgreen"]; var Blue2 = XMLObj["secondblue"]; var FirstColor = [Number (Red1), Number (Green1), Number (Blue1)]; var SecondColor = [Number (Red2), Number (Green2), Number (Blue2)]; var MainPanel = new Window ("dialog", "Panel1"); var Text1 = MainPanel.add ("statictext"); Text1.text = "First color"; var RSlider1 = MainPanel.add ("slider"); RSlider1.minvalue = 0; RSlider1.maxvalue = 255; RSlider1.value = Math.round (Red1 * 255); var RNumber1 = MainPanel.add ("statictext", undefined, RSlider1.value); RNumber1.preferredSize.width = 25; RNumber1.graphics.foregroundColor = RNumber1.graphics.newPen (RNumber1.graphics.PenType.SOLID_COLOR, [1, 0, 0], 1); RNumber1.graphics.disabledForegroundColor = RNumber1.graphics.foregroundColor; RNumber1.graphics.font = ScriptUI.newFont (RNumber1.graphics.font.name, "Bold", RNumber1.graphics.font.size); var GSlider1 = MainPanel.add ("slider"); GSlider1.minvalue = 0; GSlider1.maxvalue = 255; GSlider1.value = 
Math.round (Green1 * 255); var GNumber1 = MainPanel.add ("statictext", undefined, GSlider1.value); GNumber1.preferredSize.width = 25; GNumber1.graphics.foregroundColor = GNumber1.graphics.newPen (GNumber1.graphics.PenType.SOLID_COLOR, [0, 1, 0], 1); GNumber1.graphics.disabledForegroundColor = GNumber1.graphics.foregroundColor; GNumber1.graphics.font = ScriptUI.newFont (GNumber1.graphics.font.name, "Bold", GNumber1.graphics.font.size); var BSlider1 = MainPanel.add ("slider"); BSlider1.minvalue = 0; BSlider1.maxvalue = 255; BSlider1.value = Math.round (Blue1 * 255); var BNumber1 = MainPanel.add ("statictext", undefined, BSlider1.value); BNumber1.preferredSize.width = 25; BNumber1.graphics.foregroundColor = BNumber1.graphics.newPen (BNumber1.graphics.PenType.SOLID_COLOR, [0, 0, 1], 1); BNumber1.graphics.disabledForegroundColor = BNumber1.graphics.foregroundColor; BNumber1.graphics.font = ScriptUI.newFont (BNumber1.graphics.font.name, "Bold", BNumber1.graphics.font.size); RSlider1.onChanging = GSlider1.onChanging = BSlider1.onChanging = function () { RNumber1.text = Math.round (RSlider1.value); GNumber1.text = Math.round (GSlider1.value); BNumber1.text = Math.round (BSlider1.value); Red1 = Math.floor ((RSlider1.value / 255) * 100) / 100; Green1 = Math.floor ((GSlider1.value / 255) * 100) / 100; Blue1 = Math.floor ((BSlider1.value / 255) * 100) / 100; } var Text2 = MainPanel.add ("statictext"); Text2.text = "Second color"; var RSlider2 = MainPanel.add ("slider"); RSlider2.minvalue = 0; RSlider2.maxvalue = 255; RSlider2.value = Math.round (Red2 * 255); var RNumber2 = MainPanel.add ("statictext", undefined, RSlider2.value); RNumber2.preferredSize.width = 25; RNumber2.graphics.foregroundColor = RNumber2.graphics.newPen (RNumber2.graphics.PenType.SOLID_COLOR, [1, 0, 0], 1); RNumber2.graphics.disabledForegroundColor = RNumber2.graphics.foregroundColor; RNumber2.graphics.font = ScriptUI.newFont (RNumber2.graphics.font.name, "Bold", RNumber2.graphics.font.size); var GSlider2 = 
MainPanel.add ("slider"); GSlider2.minvalue = 0; GSlider2.maxvalue = 255; GSlider2.value = Math.round (Green2 * 255); var GNumber2 = MainPanel.add ("statictext", undefined, GSlider2.value); GNumber2.preferredSize.width = 25; GNumber2.graphics.foregroundColor = GNumber2.graphics.newPen (GNumber2.graphics.PenType.SOLID_COLOR, [0, 1, 0], 1); GNumber2.graphics.disabledForegroundColor = GNumber2.graphics.foregroundColor; GNumber2.graphics.font = ScriptUI.newFont (GNumber2.graphics.font.name, "Bold", GNumber2.graphics.font.size); var BSlider2 = MainPanel.add ("slider"); BSlider2.minvalue = 0; BSlider2.maxvalue = 255; BSlider2.value = Math.round (Blue2 * 255); var BNumber2 = MainPanel.add ("statictext", undefined, BSlider2.value); BNumber2.preferredSize.width = 25; BNumber2.graphics.foregroundColor = BNumber2.graphics.newPen (BNumber2.graphics.PenType.SOLID_COLOR, [0, 0, 1], 1); BNumber2.graphics.disabledForegroundColor = BNumber2.graphics.foregroundColor; BNumber2.graphics.font = ScriptUI.newFont (BNumber2.graphics.font.name, "Bold", BNumber2.graphics.font.size); RSlider2.onChanging = GSlider2.onChanging = BSlider2.onChanging = function () { RNumber2.text = Math.round (RSlider2.value); GNumber2.text = Math.round (GSlider2.value); BNumber2.text = Math.round (BSlider2.value); Red2 = Math.floor ((RSlider2.value / 255) * 100) / 100; Green2 = Math.floor ((GSlider2.value / 255) * 100) / 100; Blue2 = Math.floor ((BSlider2.value / 255) * 100) / 100; } var Apply = MainPanel.add ("button", undefined, "Apply changes"); Apply.onClick = function () { var XML = new File (Folder.desktop + "/config.xml"); XML.open ('w'); XML.writeln ('<?xml version="1.0" encoding="utf-8"?>'); XML.writeln ("<variables>"); XML.writeln (" <firstred>" + Red1 + "</firstred>"); XML.writeln (" <firstgreen>" + Green1 + "</firstgreen>"); XML.writeln (" <firstrblue>" + Blue1 + "</firstrblue>"); XML.writeln (" <secondred>" + Red2 + "</secondred>"); XML.writeln (" <secondgreen>" + Green2 + "</secondgreen>"); 
XML.writeln (" <secondblue>" + Blue2 + "</secondblue>"); XML.writeln ("</variables>"); XML.close (); MainPanel.close (); } alert ("Color 1 : " + Red1 + " - " + Green1 + " - " + Blue1); alert ("Color 2 : " + Red2 + " - " + Green2 + " - " + Blue2); MainPanel.show (); function Create () { var XML = new File (Folder.desktop + "/config.xml"); XML.open ('w'); XML.writeln ('<?xml version="1.0" encoding="utf-8"?>'); XML.writeln ("<variables>"); XML.writeln (" <firstred>0</firstred>"); XML.writeln (" <firstgreen>0</firstgreen>"); XML.writeln (" <firstrblue>0</firstrblue>"); XML.writeln (" <secondred>1</secondred>"); XML.writeln (" <secondgreen>1</secondgreen>"); XML.writeln (" <secondblue>1</secondblue>"); XML.writeln ("</variables>"); XML.close (); }
Edit a running script and replace an array
I'm writing a script that should update itself, but I'm stuck. Basically I have to replace a variable containing an RGB color array (whose value changes during this script) and overwrite this variable in the .jsx file. For clarity I give an example: var RGBColor = [0, 0, 0]; //Black RGB color var RGBColor2 = RGBColor; // The script runs and changes RGBColor2 value to [0.2, 0.55, 0.2] var UpdateFile = function () { var JSX = File ($.fileName); JSX.open ("e"); JSX.write ((JSX.read ()).replace (RGBColor, RGBColor2)); JSX.close (); } UpdateFile (); This is what I should theoretically do, but I can't replace the variable in any way (I tried everything, even the "open ("r")", "read", "close", "open ("w")", "write" method). Does anyone know how to make this script work, or know a better way to write it? UPDATE The final script will be a ScriptUI panel with six sliders, three for the first color (R, G, B) and three for the second color. When the "Apply" button is pressed, the script will have to replace the RGB variables inside the script. After messing with Ghoul Fool's code, I decided to replace the variables rather than the arrays, because replacing the arrays seemed more difficult. However, I cannot overwrite the variables.
Here is the code: var Red1 = 0; var Green1 = 0; var Blue1 = 0; var Red2 = 1; var Green2 = 1; var Blue2 = 1; var FirstColor = [Red1, Green1, Blue1]; var SecondColor = [Red2, Green2, Blue2]; var MainPanel = new Window ("dialog", "Panel1"); var Text1 = MainPanel.add ("statictext"); Text1.text = "First color"; var RSlider1 = MainPanel.add ("slider"); RSlider1.minvalue = 0; RSlider1.maxvalue = 255; RSlider1.value = Math.round (FirstColor[0] * 255); var RNumber1 = MainPanel.add ("statictext", undefined, RSlider1.value); RNumber1.preferredSize.width = 25; RNumber1.graphics.foregroundColor = RNumber1.graphics.newPen (RNumber1.graphics.PenType.SOLID_COLOR, [1, 0, 0], 1); RNumber1.graphics.disabledForegroundColor = RNumber1.graphics.foregroundColor; RNumber1.graphics.font = ScriptUI.newFont (RNumber1.graphics.font.name, "Bold", RNumber1.graphics.font.size); var GSlider1 = MainPanel.add ("slider"); GSlider1.minvalue = 0; GSlider1.maxvalue = 255; GSlider1.value = Math.round (FirstColor[1] * 255); var GNumber1 = MainPanel.add ("statictext", undefined, GSlider1.value); GNumber1.preferredSize.width = 25; GNumber1.graphics.foregroundColor = GNumber1.graphics.newPen (GNumber1.graphics.PenType.SOLID_COLOR, [0, 1, 0], 1); GNumber1.graphics.disabledForegroundColor = GNumber1.graphics.foregroundColor; GNumber1.graphics.font = ScriptUI.newFont (GNumber1.graphics.font.name, "Bold", GNumber1.graphics.font.size); var BSlider1 = MainPanel.add ("slider"); BSlider1.minvalue = 0; BSlider1.maxvalue = 255; BSlider1.value = Math.round (FirstColor[2] * 255); var BNumber1 = MainPanel.add ("statictext", undefined, BSlider1.value); BNumber1.preferredSize.width = 25; BNumber1.graphics.foregroundColor = BNumber1.graphics.newPen (BNumber1.graphics.PenType.SOLID_COLOR, [0, 0, 1], 1); BNumber1.graphics.disabledForegroundColor = BNumber1.graphics.foregroundColor; BNumber1.graphics.font = ScriptUI.newFont (BNumber1.graphics.font.name, "Bold", BNumber1.graphics.font.size); RSlider1.onChanging = 
GSlider1.onChanging = BSlider1.onChanging = function () { RNumber1.text = Math.round (RSlider1.value); GNumber1.text = Math.round (GSlider1.value); BNumber1.text = Math.round (BSlider1.value); Red1 = Math.floor ((Math.round (RSlider1.value) / 255) * 100) / 100; Green1 = Math.floor ((Math.round (GSlider1.value) / 255) * 100) / 100; Blue1 = Math.floor ((Math.round (BSlider1.value) / 255) * 100) / 100; } var Text2 = MainPanel.add ("statictext"); Text2.text = "Second color"; var RSlider2 = MainPanel.add ("slider"); RSlider2.minvalue = 0; RSlider2.maxvalue = 255; RSlider2.value = Math.round (SecondColor[0] * 255); var RNumber2 = MainPanel.add ("statictext", undefined, RSlider2.value); RNumber2.preferredSize.width = 25; RNumber2.graphics.foregroundColor = RNumber2.graphics.newPen (RNumber2.graphics.PenType.SOLID_COLOR, [1, 0, 0], 1); RNumber2.graphics.disabledForegroundColor = RNumber2.graphics.foregroundColor; RNumber2.graphics.font = ScriptUI.newFont (RNumber2.graphics.font.name, "Bold", RNumber2.graphics.font.size); var GSlider2 = MainPanel.add ("slider"); GSlider2.minvalue = 0; GSlider2.maxvalue = 255; GSlider2.value = Math.round (SecondColor[1] * 255); var GNumber2 = MainPanel.add ("statictext", undefined, GSlider2.value); GNumber2.preferredSize.width = 25; GNumber2.graphics.foregroundColor = GNumber2.graphics.newPen (GNumber2.graphics.PenType.SOLID_COLOR, [0, 1, 0], 1); GNumber2.graphics.disabledForegroundColor = GNumber2.graphics.foregroundColor; GNumber2.graphics.font = ScriptUI.newFont (GNumber2.graphics.font.name, "Bold", GNumber2.graphics.font.size); var BSlider2 = MainPanel.add ("slider"); BSlider2.minvalue = 0; BSlider2.maxvalue = 255; BSlider2.value = Math.round (SecondColor[2] * 255); var BNumber2 = MainPanel.add ("statictext", undefined, BSlider2.value); BNumber2.preferredSize.width = 25; BNumber2.graphics.foregroundColor = BNumber2.graphics.newPen (BNumber2.graphics.PenType.SOLID_COLOR, [0, 0, 1], 1); BNumber2.graphics.disabledForegroundColor = 
BNumber2.graphics.foregroundColor; BNumber2.graphics.font = ScriptUI.newFont (BNumber2.graphics.font.name, "Bold", BNumber2.graphics.font.size); RSlider2.onChanging = GSlider2.onChanging = BSlider2.onChanging = function () { RNumber2.text = Math.round (RSlider2.value); GNumber2.text = Math.round (GSlider2.value); BNumber2.text = Math.round (BSlider2.value); Red2 = Math.floor ((Math.round (RSlider2.value) / 255) * 100) / 100; Green2 = Math.floor ((Math.round (GSlider2.value) / 255) * 100) / 100; Blue2 = Math.floor ((Math.round (BSlider2.value) / 255) * 100) / 100; } var Apply = MainPanel.add ("button", undefined, "Apply changes"); Apply.onClick = function () { var JSX = File ($.fileName); JSX.open ("r"); var JSXXX = JSX.read (); JSX.close (); JSX.open ("w"); // This part is not clear to me. What should I replace the colors with? JSXXX = JSXXX.replace (Red1, FirstColor [0]); JSXXX = JSXXX.replace (Green1, FirstColor [1]); JSXXX = JSXXX.replace (Blue1, FirstColor [2]); JSXXX = JSXXX.replace (Red2, SecondColor [0]); JSXXX = JSXXX.replace (Green2, SecondColor [1]); JSXXX = JSXXX.replace (Blue2, SecondColor [2]); JSX.write(JSXXX); JSX.close (); MainPanel.close (); } alert ("Color 1 : " + Red1 + " - " + Green1 + " - " + Blue1); alert ("Color 2 : " + Red2 + " - " + Green2 + " - " + Blue2); MainPanel.show (); The dialog will be included in the final script and will be used to change the color of the dialog itself.
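One likely reason the in-place replace never matches: when an array is passed to String.replace as the pattern, it is coerced to the string "0,0,0" (no brackets, no spaces), which does not occur in the file text, so nothing is replaced. Ghoul Fool's answer below notes that a regular expression does work. A small, hedged sketch in plain JavaScript (string manipulation only, no ExtendScript File I/O) illustrating both the failure and a regex-based fix:

```javascript
// Hedged sketch, plain JavaScript only (no ExtendScript File/$.fileName):
// why the naive replace finds nothing, and a regex that does match.
var source = 'var RGBColor = [0, 0, 0]; //Black RGB color';
var oldColor = [0, 0, 0];
var newColor = [0.2, 0.55, 0.2];

// An array passed to String.replace is coerced to "0,0,0" (no brackets,
// no spaces), so it never matches the text in the file:
var broken = source.replace(oldColor, '[' + newColor.join(', ') + ']');
// broken is still identical to source.

// Escaping the brackets and allowing optional whitespace after each comma
// matches the literal "[0, 0, 0]" in the file text:
var pattern = new RegExp('\\[' + oldColor.join(',\\s*') + '\\]');
var fixed = source.replace(pattern, '[' + newColor.join(', ') + ']');
// fixed now reads: var RGBColor = [0.2, 0.55, 0.2]; //Black RGB color
```

Note this only replaces the first match; after the replacement the script no longer contains the old value, so the pattern would have to be rebuilt from the new value before the next save.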
[ "Not quite sure what you are after (see notes above). However, this will load in a second jsx file and change its value. Just not the same one as it's running.\nYour script fails because you don't update the RGB value\nvar RGBColor2 = RGBColor;\n\nSecondly, and I'm nor sure why, but the string replace wasn't working. A regular expression will replace the text however.\nvar RGBColor = [0, 0, 0]; //Black RGB color\nvar RGBColor2 = [0.2, 0.55, 0.2];\n\n// The script runs and change RGBColor2 value to [0.2, 0.55, 0.2]\nvar UpdateFile = function () \n{ \n var myFile = \"C:\\\\temp\\\\my_temp.jsx\"\n var JSX = File (myFile);\n JSX.open (\"e\"); //replaces contents\n var r = JSX.read ();\n var regEx = new RegExp(/\\[0, 0, 0\\]/gmi);\n r = r.replace(regEx, RGBColor2);\n JSX.write(r);\n JSX.close ();\n}\n\nUpdateFile ();\n\n", "As Ghoul Fool suggested to me, I tried to post a question on community.adobe.com, but to date the \"Actions and Scripting\" section seems not very popular.\nIn the end I was unable to do what I wanted, so I opted for the creation of an external XML file that the script can read and write.\nHere is the code:\n var XMLFile = File (Folder.desktop + \"/config.xml\");\n if (!XMLFile.exists) {\n Create ();\n }\n XMLFile.open (\"r\");\n var XMLObj = XML (XMLFile.read ());\n XMLFile.close ();\n var Red1 = XMLObj[\"firstred\"];\n var Green1 = XMLObj[\"firstgreen\"];\n var Blue1 = XMLObj[\"firstrblue\"];\n var Red2 = XMLObj[\"secondred\"];\n var Green2 = XMLObj[\"secondgreen\"];\n var Blue2 = XMLObj[\"secondblue\"];\n var FirstColor = [Number (Red1), Number (Green1), Number (Blue1)];\n var SecondColor = [Number (Red2), Number (Green2), Number (Blue2)];\n var MainPanel = new Window (\"dialog\", \"Panel1\");\n var Text1 = MainPanel.add (\"statictext\");\n Text1.text = \"First color\";\n var RSlider1 = MainPanel.add (\"slider\");\n RSlider1.minvalue = 0;\n RSlider1.maxvalue = 255;\n RSlider1.value = Math.round (Red1 * 255);\n var RNumber1 = MainPanel.add 
(\"statictext\", undefined, RSlider1.value);\n RNumber1.preferredSize.width = 25;\n RNumber1.graphics.foregroundColor = RNumber1.graphics.newPen (RNumber1.graphics.PenType.SOLID_COLOR, [1, 0, 0], 1);\n RNumber1.graphics.disabledForegroundColor = RNumber1.graphics.foregroundColor;\n RNumber1.graphics.font = ScriptUI.newFont (RNumber1.graphics.font.name, \"Bold\", RNumber1.graphics.font.size);\n var GSlider1 = MainPanel.add (\"slider\");\n GSlider1.minvalue = 0;\n GSlider1.maxvalue = 255;\n GSlider1.value = Math.round (Green1 * 255);\n var GNumber1 = MainPanel.add (\"statictext\", undefined, GSlider1.value);\n GNumber1.preferredSize.width = 25;\n GNumber1.graphics.foregroundColor = GNumber1.graphics.newPen (GNumber1.graphics.PenType.SOLID_COLOR, [0, 1, 0], 1);\n GNumber1.graphics.disabledForegroundColor = GNumber1.graphics.foregroundColor;\n GNumber1.graphics.font = ScriptUI.newFont (GNumber1.graphics.font.name, \"Bold\", GNumber1.graphics.font.size);\n var BSlider1 = MainPanel.add (\"slider\");\n BSlider1.minvalue = 0;\n BSlider1.maxvalue = 255;\n BSlider1.value = Math.round (Blue1 * 255);\n var BNumber1 = MainPanel.add (\"statictext\", undefined, BSlider1.value);\n BNumber1.preferredSize.width = 25;\n BNumber1.graphics.foregroundColor = BNumber1.graphics.newPen (BNumber1.graphics.PenType.SOLID_COLOR, [0, 0, 1], 1);\n BNumber1.graphics.disabledForegroundColor = BNumber1.graphics.foregroundColor;\n BNumber1.graphics.font = ScriptUI.newFont (BNumber1.graphics.font.name, \"Bold\", BNumber1.graphics.font.size);\n RSlider1.onChanging = GSlider1.onChanging = BSlider1.onChanging = function () {\n RNumber1.text = Math.round (RSlider1.value);\n GNumber1.text = Math.round (GSlider1.value);\n BNumber1.text = Math.round (BSlider1.value);\n Red1 = Math.floor ((RSlider1.value / 255) * 100) / 100;\n Green1 = Math.floor ((GSlider1.value / 255) * 100) / 100;\n Blue1 = Math.floor ((BSlider1.value / 255) * 100) / 100;\n }\n var Text2 = MainPanel.add (\"statictext\");\n Text2.text = 
\"Second color\";\n var RSlider2 = MainPanel.add (\"slider\");\n RSlider2.minvalue = 0;\n RSlider2.maxvalue = 255;\n RSlider2.value = Math.round (Red2 * 255);\n var RNumber2 = MainPanel.add (\"statictext\", undefined, RSlider2.value);\n RNumber2.preferredSize.width = 25;\n RNumber2.graphics.foregroundColor = RNumber2.graphics.newPen (RNumber2.graphics.PenType.SOLID_COLOR, [1, 0, 0], 1);\n RNumber2.graphics.disabledForegroundColor = RNumber2.graphics.foregroundColor;\n RNumber2.graphics.font = ScriptUI.newFont (RNumber2.graphics.font.name, \"Bold\", RNumber2.graphics.font.size);\n var GSlider2 = MainPanel.add (\"slider\");\n GSlider2.minvalue = 0;\n GSlider2.maxvalue = 255;\n GSlider2.value = Math.round (Green2 * 255);\n var GNumber2 = MainPanel.add (\"statictext\", undefined, GSlider2.value);\n GNumber2.preferredSize.width = 25;\n GNumber2.graphics.foregroundColor = GNumber2.graphics.newPen (GNumber2.graphics.PenType.SOLID_COLOR, [0, 1, 0], 1);\n GNumber2.graphics.disabledForegroundColor = GNumber2.graphics.foregroundColor;\n GNumber2.graphics.font = ScriptUI.newFont (GNumber2.graphics.font.name, \"Bold\", GNumber2.graphics.font.size);\n var BSlider2 = MainPanel.add (\"slider\");\n BSlider2.minvalue = 0;\n BSlider2.maxvalue = 255;\n BSlider2.value = Math.round (Blue2 * 255);\n var BNumber2 = MainPanel.add (\"statictext\", undefined, BSlider2.value);\n BNumber2.preferredSize.width = 25;\n BNumber2.graphics.foregroundColor = BNumber2.graphics.newPen (BNumber2.graphics.PenType.SOLID_COLOR, [0, 0, 1], 1);\n BNumber2.graphics.disabledForegroundColor = BNumber2.graphics.foregroundColor;\n BNumber2.graphics.font = ScriptUI.newFont (BNumber2.graphics.font.name, \"Bold\", BNumber2.graphics.font.size);\n RSlider2.onChanging = GSlider2.onChanging = BSlider2.onChanging = function () {\n RNumber2.text = Math.round (RSlider2.value);\n GNumber2.text = Math.round (GSlider2.value);\n BNumber2.text = Math.round (BSlider2.value);\n Red2 = Math.floor ((RSlider2.value / 255) * 100) / 
100;\n Green2 = Math.floor ((GSlider2.value / 255) * 100) / 100;\n Blue2 = Math.floor ((BSlider2.value / 255) * 100) / 100;\n }\n var Apply = MainPanel.add (\"button\", undefined, \"Apply changes\");\n Apply.onClick = function () {\n var XML = new File (Folder.desktop + \"/config.xml\");\n XML.open ('w');\n XML.writeln ('<?xml version=\"1.0\" encoding=\"utf-8\"?>');\n XML.writeln (\"<variables>\");\n XML.writeln (\" <firstred>\" + Red1 + \"</firstred>\");\n XML.writeln (\" <firstgreen>\" + Green1 + \"</firstgreen>\");\n XML.writeln (\" <firstrblue>\" + Blue1 + \"</firstrblue>\");\n XML.writeln (\" <secondred>\" + Red2 + \"</secondred>\");\n XML.writeln (\" <secondgreen>\" + Green2 + \"</secondgreen>\");\n XML.writeln (\" <secondblue>\" + Blue2 + \"</secondblue>\");\n XML.writeln (\"</variables>\");\n XML.close ();\n MainPanel.close ();\n }\n alert (\"Color 1 : \" + Red1 + \" - \" + Green1 + \" - \" + Blue1);\n alert (\"Color 2 : \" + Red2 + \" - \" + Green2 + \" - \" + Blue2);\n MainPanel.show ();\n function Create () {\n var XML = new File (Folder.desktop + \"/config.xml\");\n XML.open ('w');\n XML.writeln ('<?xml version=\"1.0\" encoding=\"utf-8\"?>');\n XML.writeln (\"<variables>\");\n XML.writeln (\" <firstred>0</firstred>\");\n XML.writeln (\" <firstgreen>0</firstgreen>\");\n XML.writeln (\" <firstrblue>0</firstrblue>\");\n XML.writeln (\" <secondred>1</secondred>\");\n XML.writeln (\" <secondgreen>1</secondgreen>\");\n XML.writeln (\" <secondblue>1</secondblue>\");\n XML.writeln (\"</variables>\");\n XML.close ();\n }\n\n" ]
[ 1, 0 ]
[]
[]
[ "adobe_illustrator", "adobe_indesign", "extendscript", "javascript", "photoshop_script" ]
stackoverflow_0074291936_adobe_illustrator_adobe_indesign_extendscript_javascript_photoshop_script.txt
Q: Unreal Engine Failed to initialize ShaderCodeLibrary EDIT1: This problem also appears on Windows, so I think it is a general problem with Unreal Engine 5 and Visual Studio Code. I am trying to debug an Unreal Engine 5 project. When running DebugGame I get the error message: Failed to initialize ShaderCodeLibrary required by the project because part of the Global shader library is missing I am using: Linux, Wayland, NVIDIA GeForce RTX 2060, Visual Studio Code More Logs: [2022.08.24-09.17.36:210][ 0]LogVulkanRHI: Display: Using Device 0: Geometry 1 BufferAtomic64 1 ImageAtomic64 1 [2022.08.24-09.17.36:211][ 0]LogVulkanRHI: Display: Found 3 Queue Families [2022.08.24-09.17.36:211][ 0]LogVulkanRHI: Display: Initializing Queue Family 0: 16 queues Gfx Compute Xfer Sparse [2022.08.24-09.17.36:211][ 0]LogVulkanRHI: Display: Initializing Queue Family 1: 2 queues Xfer Sparse [2022.08.24-09.17.36:211][ 0]LogVulkanRHI: Display: Skipping unnecessary Queue Family 2: 8 queues Compute Xfer Sparse [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: Using device layers [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: Using device extensions [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: * VK_KHR_driver_properties [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: * VK_KHR_get_memory_requirements2 [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: * VK_KHR_dedicated_allocation [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: * VK_KHR_create_renderpass2 [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: * VK_KHR_fragment_shading_rate [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: * VK_KHR_swapchain [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: * VK_KHR_maintenance1 [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: * VK_KHR_maintenance2 [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: * VK_EXT_memory_budget [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: * VK_KHR_shader_atomic_int64 [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: *
VK_EXT_shader_image_atomic_int64 [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: * VK_KHR_image_format_list [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: * VK_EXT_shader_viewport_index_layer [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: Max memory allocations -1. [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: 3 Device Memory Heaps: [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: 0: Flags 0x1 - Size 6442450944 (6144.00 MB) - Used 0 (0.00%) - DeviceLocal - PrimaryHeap [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: 1: Flags 0x0 - Size 12117417984 (11556.07 MB) - Used 0 (0.00%) [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: 2: Flags 0x1 - Size 257949696 (246.00 MB) - Used 0 (0.00%) - DeviceLocal [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: 5 Device Memory Types (Not unified): [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: 1: Flags 0x00001 - Heap 0 - DeviceLocal [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: 0: Flags 0x00000 - Heap 1 - [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: 2: Flags 0x00006 - Heap 1 - HostVisible HostCoherent [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: 3: Flags 0x0000e - Heap 1 - HostVisible HostCoherent HostCached [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: 4: Flags 0x00007 - Heap 2 - DeviceLocal HostVisible HostCoherent [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: Memory Budget Extension: [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: | Usage | Budget | Size | [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: ---------|------------------------------------------------------------------| [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: HEAP 00 | 0.00% / 0.19 MB | 3432.00 MB | 6144.00 MB | [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: HEAP 01 | 0.17% / 19.27 MB | 11556.07 MB | 11556.07 MB | [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: HEAP 02 | 18.60% / 45.75 MB | 200.25 MB | 246.00 MB | [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: 
---------|------------------------------------------------------------------| [2022.08.24-09.17.36:622][ 0]LogVulkanRHI: Display: FVulkanPipelineStateCacheManager: Binary pipeline cache '../../../../../../../opt/unreal-engine/../../home/walde/Documents/Unreal Projects/MyProject9/Saved/VulkanPSO.cache.10de.1f15' not found. [2022.08.24-09.17.36:623][ 0]LogVulkanRHI: Adapter Name: NVIDIA GeForce RTX 2060 [2022.08.24-09.17.36:623][ 0]LogVulkanRHI: API Version: 1.3.205 [2022.08.24-09.17.36:623][ 0]LogVulkanRHI: Driver Version: 515.65 [2022.08.24-09.17.36:635][ 0]LogRHI: Texture pool is 4473 MB (70% of 6390 MB) [2022.08.24-09.17.36:641][ 0]LogRendererCore: Ray tracing is disabled. Reason: r.RayTracing=0. [2022.08.24-09.17.36:641][ 0]LogShaderLibrary: Display: Running without a pakfile and did not find a monolithic library 'Global' - attempting disk search for its chunks [2022.08.24-09.17.36:642][ 0]LogShaderLibrary: Display: .... not found [2022.08.24-09.17.36:643][ 0]LogShaderLibrary: Error: Failed to initialize ShaderCodeLibrary required by the project because part of the Global shader library is missing from ../../../../../../../opt/unreal-engine/../../home/walde/Documents/Unreal Projects/MyProject9/Content/. A: When you run DebugGame it will go and look for files in your binary folders to run your game. If you've not gone to File > Package project, you are likely not to have the required files. The package process creates these files. Be sure that you go into Package settings and select the appropriate target (for example, if you're trying to launch Debug Game, then you should change it from the default of Shipping to Debug Game)
Unreal Engine Failed to initialize ShaderCodeLibrary
EDIT1: This problem also appears on Windows, so I think it is a general problem with Unreal Engine 5 and Visual Studio Code. I am trying to debug an Unreal Engine 5 project. When running DebugGame I get the error message: Failed to initialize ShaderCodeLibrary required by the project because part of the Global shader library is missing I am using: Linux, Wayland, NVIDIA GeForce RTX 2060, Visual Studio Code More Logs: [2022.08.24-09.17.36:210][ 0]LogVulkanRHI: Display: Using Device 0: Geometry 1 BufferAtomic64 1 ImageAtomic64 1 [2022.08.24-09.17.36:211][ 0]LogVulkanRHI: Display: Found 3 Queue Families [2022.08.24-09.17.36:211][ 0]LogVulkanRHI: Display: Initializing Queue Family 0: 16 queues Gfx Compute Xfer Sparse [2022.08.24-09.17.36:211][ 0]LogVulkanRHI: Display: Initializing Queue Family 1: 2 queues Xfer Sparse [2022.08.24-09.17.36:211][ 0]LogVulkanRHI: Display: Skipping unnecessary Queue Family 2: 8 queues Compute Xfer Sparse [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: Using device layers [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: Using device extensions [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: * VK_KHR_driver_properties [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: * VK_KHR_get_memory_requirements2 [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: * VK_KHR_dedicated_allocation [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: * VK_KHR_create_renderpass2 [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: * VK_KHR_fragment_shading_rate [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: * VK_KHR_swapchain [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: * VK_KHR_maintenance1 [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: * VK_KHR_maintenance2 [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: * VK_EXT_memory_budget [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: * VK_KHR_shader_atomic_int64 [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: * VK_EXT_shader_image_atomic_int64 [2022.08.24-09.17.36:586][
0]LogVulkanRHI: Display: * VK_KHR_image_format_list [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: * VK_EXT_shader_viewport_index_layer [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: Max memory allocations -1. [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: 3 Device Memory Heaps: [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: 0: Flags 0x1 - Size 6442450944 (6144.00 MB) - Used 0 (0.00%) - DeviceLocal - PrimaryHeap [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: 1: Flags 0x0 - Size 12117417984 (11556.07 MB) - Used 0 (0.00%) [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: 2: Flags 0x1 - Size 257949696 (246.00 MB) - Used 0 (0.00%) - DeviceLocal [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: 5 Device Memory Types (Not unified): [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: 1: Flags 0x00001 - Heap 0 - DeviceLocal [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: 0: Flags 0x00000 - Heap 1 - [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: 2: Flags 0x00006 - Heap 1 - HostVisible HostCoherent [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: 3: Flags 0x0000e - Heap 1 - HostVisible HostCoherent HostCached [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: 4: Flags 0x00007 - Heap 2 - DeviceLocal HostVisible HostCoherent [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: Memory Budget Extension: [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: | Usage | Budget | Size | [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: ---------|------------------------------------------------------------------| [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: HEAP 00 | 0.00% / 0.19 MB | 3432.00 MB | 6144.00 MB | [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: HEAP 01 | 0.17% / 19.27 MB | 11556.07 MB | 11556.07 MB | [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: HEAP 02 | 18.60% / 45.75 MB | 200.25 MB | 246.00 MB | [2022.08.24-09.17.36:586][ 0]LogVulkanRHI: Display: 
---------|------------------------------------------------------------------| [2022.08.24-09.17.36:622][ 0]LogVulkanRHI: Display: FVulkanPipelineStateCacheManager: Binary pipeline cache '../../../../../../../opt/unreal-engine/../../home/walde/Documents/Unreal Projects/MyProject9/Saved/VulkanPSO.cache.10de.1f15' not found. [2022.08.24-09.17.36:623][ 0]LogVulkanRHI: Adapter Name: NVIDIA GeForce RTX 2060 [2022.08.24-09.17.36:623][ 0]LogVulkanRHI: API Version: 1.3.205 [2022.08.24-09.17.36:623][ 0]LogVulkanRHI: Driver Version: 515.65 [2022.08.24-09.17.36:635][ 0]LogRHI: Texture pool is 4473 MB (70% of 6390 MB) [2022.08.24-09.17.36:641][ 0]LogRendererCore: Ray tracing is disabled. Reason: r.RayTracing=0. [2022.08.24-09.17.36:641][ 0]LogShaderLibrary: Display: Running without a pakfile and did not find a monolithic library 'Global' - attempting disk search for its chunks [2022.08.24-09.17.36:642][ 0]LogShaderLibrary: Display: .... not found [2022.08.24-09.17.36:643][ 0]LogShaderLibrary: Error: Failed to initialize ShaderCodeLibrary required by the project because part of the Global shader library is missing from ../../../../../../../opt/unreal-engine/../../home/walde/Documents/Unreal Projects/MyProject9/Content/.
[ "When you run DebugGame it will go and look for files in your binary folders to run your game. If you've not gone to File > Package project, you are likely not to have the required files. The package process creates these files. Be sure that you go into Package settings and select the appropriate target (for example, if you're trying to launch Debug Game, then you should change it from the default of Shipping to Debug Game)\n" ]
[ 0 ]
[]
[]
[ "nvidia", "shader", "unreal_engine5", "visual_studio_code", "vulkan" ]
stackoverflow_0073471413_nvidia_shader_unreal_engine5_visual_studio_code_vulkan.txt