We have a 4+ year old Android application that relies on Realm DB, and we are exploring adopting the Room database going forward. There are a few challenges and queries we need help with:

1. Is there any library/plugin that can create a Room DB from the Realm DB?
2. If the above proves possible, can the data in previous versions of the app be migrated to the Room DB?
3. Has this task ever been tried by any team? If so, what were the learnings and pitfalls we might encounter if we decide to go forward with this migration?

PS: The following post from 2018 talks about this DB migration; checking whether there has been any solution since then. https://stackoverflow.com/questions/53911486/android-migrate-data-from-realm-to-room
I'm trying to work with an SQLite database in React Native using the Expo library expo-sqlite.

- The package is installed with **npx expo install expo-sqlite**
- I have already removed node_modules, package-lock.json, Pods, and Builds (from the ios folder)
- Reinstalled with **npm i**
- Reinstalled pods with **pod install**
- Followed the tutorial at https://docs.expo.dev/versions/latest/sdk/sqlite/

But I get the error *"Error: Cannot find native module 'ExpoSQLite', js engine: hermes"*, so when the app launches, the result is an ERROR screen on the emulator: [![enter image description here](https://i.stack.imgur.com/rIbjh.png)](https://i.stack.imgur.com/rIbjh.png)

I have found the line where the module is called, but as I understand it, Expo does not compile the required files:

    const ExpoSQLite = requireNativeModule('ExpoSQLite');

from **SQLite.ts** in node_modules/expo-sqlite

When I comment out the line that calls the database, the error is gone and the app renders, but there is no DB attached. Strangely, there is no such error when compiling the Android version.

Any help will be appreciated. Thanks a lot, Andrew
EXPO React-Native: expo-sqlite compile error on IOS TV 17.0 Emulator
|ios|react-native|expo|expo-sqlite|
|django|
I want to answer my own question with an implementation of a custom `KeyboardService` that provides an overlay `PopupWindow` showing the currently entered text when the Android `Keyboard` is activated. This is achieved via a `GlobalLayoutListener` that checks whether the `Keyboard` is activated and determines the `EditText` that currently has focus. The text of this `EditText` (Entry) is shown in the `Overlay` provided via `KeyboardOverlayService`.

To achieve this, I added the following files in the Android platform folder: `...\Platforms\Android\ControlHandler\..`

**IKeyboardOverlayService.cs**:

```cs
namespace MyApp.Platforms.Android.ControlHandler
{
    public interface IKeyboardOverlayService
    {
        void ShowOverlay(string text);
        void HideOverlay();
    }
}
```

**KeyboardOverlayService.cs**:

```cs
#if ANDROID
using Android.Util;
using Android.Views;
using Android.Widget;
using Application = Microsoft.Maui.Controls.Application;
using View = Android.Views.View;
using Graphics = Android.Graphics;

[assembly: Dependency(typeof(MyApp.Platforms.Android.ControlHandler.KeyboardOverlayService))]
namespace MyApp.Platforms.Android.ControlHandler
{
    public class KeyboardOverlayService : IKeyboardOverlayService
    {
        private PopupWindow overlayPopup;
        private TextView textView;
        private LayoutInflater inflater;
        private View view;

        public void ShowOverlay(string text)
        {
            var context = Application.Current?.Handler?.MauiContext?.Context;
            if (context == null)
                return;

            // Initialize inflater and View only once
            if (inflater == null || view == null)
            {
                inflater = LayoutInflater.From(context);
                view = inflater.Inflate(Resource.Layout.overlay_layout, null);
                textView = view.FindViewById<TextView>(Resource.Id.overlayTextView);
            }

            textView.Text = text;

            if (overlayPopup == null)
            {
                overlayPopup = new PopupWindow(context);
                overlayPopup.ContentView = view;
                overlayPopup.Width = 400;
                overlayPopup.Height = ViewGroup.LayoutParams.WrapContent;
                overlayPopup.SetBackgroundDrawable(new Graphics.Drawables.ColorDrawable(Graphics.Color.Transparent));
                overlayPopup.OutsideTouchable = false; // Prevents the Overlay from being hidden when a touch is registered outside
            }

            var activity = Application.Current.Windows[0]?.Handler?.MauiContext?.Context?.GetActivity();
            if (activity != null)
            {
                DisplayMetrics displayMetrics = new DisplayMetrics();
                activity.WindowManager.DefaultDisplay.GetMetrics(displayMetrics);
                int screenWidth = displayMetrics.WidthPixels;
                int xPos = (screenWidth - overlayPopup.Width) / 2; // Center the Overlay
                int yPos = -50; // The higher the value, the lower the Overlay is displayed

                // Show the PopupWindow, only if not already displayed
                if (!overlayPopup.IsShowing)
                {
                    overlayPopup.ShowAtLocation(activity.Window.DecorView.RootView, GravityFlags.Bottom | GravityFlags.Start, xPos, yPos);
                }
            }
        }

        public void HideOverlay()
        {
            overlayPopup?.Dismiss();
        }
    }
}
#endif
```

**MainProgram.cs**: In `MainProgram.cs` we register the `IKeyboardOverlayService` so this `Service` can be resolved from other places:

```cs
public static MauiApp CreateMauiApp()
{
    var builder = MauiApp.CreateBuilder();
    builder
        .UseMauiApp<App>()
        .ConfigureFonts(fonts =>
        {
            fonts.AddFont("OpenSans-Regular.ttf", "OpenSansRegular");
            fonts.AddFont("OpenSans-Semibold.ttf", "OpenSansSemibold");
        })
        ...

#if ANDROID
    builder.Services.AddSingleton<IKeyboardOverlayService, KeyboardOverlayService>();
#endif
    ...
    return builder.Build();
}
```

**Layouting of the Overlay**: For the layout of the `Overlay` I added an `overlay_layout.xml` file with constants to lay out the content of the `Overlay` (e.g. `textColor` and `background`) in the **Android Platform Resources Layout** folder: `..\Platforms\Android\Resources\layout\overlay_layout.xml`

```xml
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:orientation="horizontal"
    android:background="#F0F0F3">

    <TextView
        android:id="@+id/overlayTextView"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:textColor="#1C1C1C"
        android:layout_gravity="center"
        android:gravity="center"
        android:textSize="18sp"/>
</LinearLayout>
```

**MainActivity.cs**: In `..\Platforms\Android\MainActivity.cs` we register a `GlobalLayoutListener` that listens for the `Keyboard` being opened and closed, and shows/hides the `Overlay` accordingly when the Android Keyboard appears or disappears:

```cs
using Android.App;
using Android.Content;
using Android.Content.PM;
using Android.OS;
using Android.Views;
using Android.Widget;
using MyApp.Platforms.Android.ControlHandler;

namespace MyApp;

//[Activity(Theme = "@style/Maui.SplashTheme", MainLauncher = true, ConfigurationChanges = ConfigChanges.ScreenSize | ConfigChanges.Orientation | ConfigChanges.UiMode | ConfigChanges.ScreenLayout | ConfigChanges.SmallestScreenSize | ConfigChanges.Density)]
[Activity(Theme = "@style/CustomSplashTheme", MainLauncher = true, ConfigurationChanges = ConfigChanges.ScreenSize | ConfigChanges.Orientation | ConfigChanges.UiMode | ConfigChanges.ScreenLayout | ConfigChanges.SmallestScreenSize | ConfigChanges.Density)]
public class MainActivity : MauiAppCompatActivity
{
    private ViewTreeObserver.IOnGlobalLayoutListener _globalLayoutListener;
    private Android.Views.View _rootView;

    protected override void OnCreate(Bundle savedInstanceState)
    {
        base.OnCreate(savedInstanceState);

        _rootView = (ViewGroup)((ViewGroup)FindViewById(Android.Resource.Id.Content)).GetChildAt(0);
        _globalLayoutListener = new GlobalLayoutListener(_rootView, this);
        _rootView.ViewTreeObserver.AddOnGlobalLayoutListener(_globalLayoutListener);
    }

    private class GlobalLayoutListener : Java.Lang.Object, ViewTreeObserver.IOnGlobalLayoutListener
    {
        private readonly Context _context;
        private IKeyboardOverlayService _keyboardOverlayService;

        public GlobalLayoutListener(Android.Views.View view, Context context)
        {
            _context = context;
            _keyboardOverlayService = App.Current.Handler.MauiContext.Services.GetService<IKeyboardOverlayService>();
        }

        public bool IsKeyboardVisible()
        {
            var context = _context;
            var activity = context.GetActivity();
            var window = activity.Window;
            var windowInsets = window.DecorView.RootWindowInsets;
            var imeVisible = windowInsets.IsVisible(WindowInsets.Type.Ime());
            return imeVisible;
        }

        public void OnGlobalLayout()
        {
            if (IsKeyboardVisible())
            {
                // Get the text of the EditText ("Entry") that is currently focused
                // and for which the currently visible Keyboard is opened
                var activity = _context.GetActivity();
                var focusedView = activity.CurrentFocus;
                if (focusedView is EditText editText)
                {
                    var text = editText.Text;
                    _keyboardOverlayService.ShowOverlay(text);
                }
            }
            else
            {
                _keyboardOverlayService.HideOverlay();
            }
        }
    }
}
```

<br/>

This provides the `Overlay` below the Android Keyboard. With the configuration from the `overlay_layout.xml` above it looks as follows, and can be changed to your needs:

[![enter image description here][1]][1]

[1]: https://i.stack.imgur.com/8uqFh.gif
Making a column nullable **is** making it optional. In PostgreSQL, a NULL value uses no extra storage space. If you don't want to insert a value in `col2`, just omit it from the column list: INSERT INTO tab (col1, col3) VALUES (1, 42);
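The same behavior is easy to see with any SQL engine — here's a quick sketch using Python's built-in sqlite3 (not PostgreSQL, but the semantics of omitting a nullable column from the insert's column list are the same: it simply comes back as NULL):

```python
import sqlite3

# In-memory database; col2 is nullable, so it can simply be omitted on insert.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tab (col1 INTEGER, col2 TEXT, col3 INTEGER)")
conn.execute("INSERT INTO tab (col1, col3) VALUES (1, 42)")

row = conn.execute("SELECT col1, col2, col3 FROM tab").fetchone()
print(row)  # (1, None, 42) -- col2 is SQL NULL
```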
|azure-devops|notifications|
The issue is simple: you're confusing the `datasets` library with PyTorch's `Dataset` class. Your import statement should target PyTorch's dataset-handling facilities, not another library. Ensure you're using `from torch.utils.data import Dataset` to get the correct base class for your custom dataset. The AttributeError you're facing is because the `datasets` library expects a different structure for dataset objects, including metadata attributes like `_info`, which are irrelevant to your use case. Stick to PyTorch's `Dataset` class, and you'll be set.
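For what it's worth, here's a minimal sketch of the map-style dataset shape PyTorch expects. The class and field names are made up, and a stand-in base class is used so the sketch runs even without torch installed — in real code the one import that matters is `from torch.utils.data import Dataset`:

```python
# Real code would start with:
#   from torch.utils.data import Dataset
# A map-style Dataset only needs __len__ and __getitem__; the stand-in
# base class below just lets this sketch run without torch installed.
class Dataset:
    pass

class MyDataset(Dataset):  # hypothetical example dataset
    def __init__(self, samples, labels):
        self.samples = samples
        self.labels = labels

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        return self.samples[idx], self.labels[idx]

ds = MyDataset(["a", "b"], [0, 1])
print(len(ds), ds[0])  # 2 ('a', 0)
```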
I generate rst files containing formulas in LaTeX, then try to upload them to Confluence.

Example: the file main.rst contains the formula

```rst
:math:`\sqrt{3x-1}+(1+x)^2`
```

The conf.py file is located in the docs folder. Its contents:

```python
extensions = [
    'sphinxcontrib.confluencebuilder',
    'sphinxcontrib.katex'
]
confluence_publish = True
confluence_space_key = '~user'
confluence_parent_page = 'Test rst generate'
confluence_server_url='https://confluence.com'
confluence_page_hierarchy = True
confluence_page_generation_notice = True
confluence_prev_next_buttons_location = 'top'
confluence_server_user='username'
confluence_server_pass='password'
```

I run it with the command

```bash
sphinx-build -b confluence docs _build/confluence
```

output:

```
Running Sphinx v7.2.6
WARNING: normalizing confluence url from https://confluence.com/
building [mo]: targets for 0 po files that are out of date
writing output... building [confluence]: targets for 2 source files that are out of date
reading sources... [100%] main
looking for now-outdated files... none found
checking consistency... done
preparing documents... WARNING: title conflict detected with 'index' and 'main'
WARNING: LaTeX command 'latex' cannot be run (needed for math display), check the imgmath_latex setting
done
copying assets... done
writing output... [100%] main
WARNING: ignoring node since no latex macro configured
WARNING: ignoring node since no latex macro configured
publishing documents... [100%] main
building intersphinx... done
publishing assets... [100%] objects.inv
Publish point: https://confluence.com/viewpage.action?pageId=209333449
build succeeded, 5 warnings.
```

As a result, the formula is not displayed.
Hi, I want to group Elasticsearch query results based on a specific field. I have gone through the collapse and aggregation docs but can't find how to achieve this. Here is an example. Let's say I have three documents:

```
{ "id": "1", "subid": "1_1a", "name": "TV", "make": "samsung", "model": "plasma" }
{ "id": "2", "subid": "1_2a", "name": "TV", "make": "samsung", "model": "plasma_different_model" }
{ "id": "3", "subid": "1_3a", "name": "TV", "make": "samsung", "model": "plasma_another_different_model" }
```

I want to group my query results by subid, using only the first part of the sub-id ("1"), splitting it at the underscore. So my aim is to get only one doc as the search result for "sub-id" = "1", chosen by relevance. How can I achieve this?
Elastic Search grouping search results based on a field
|elasticsearch|kibana|elastic-stack|opensearch|
There are a few problems with your code. This is one to consider:

```
.Range("A:A" & (fVal.Row)).Copy
```

If fVal.Row is row 27, this makes "A:A27". That's not what you want. I think you want:

```
.Range("A" & (fVal.Row) & ":A" & (someVariableForEndOfRange)).Copy
```

If you don't want to explicitly include the row of the flag/found text, add one to fVal.Row before your string concatenation. You will need to explicitly identify the end of the range - VBA doesn't seem to like it if you end the range object with only the column.

After that, your code will still throw some scoping errors. The With statement doesn't quite work as you intended, but I *think* you can just do something like this. I have not sorted out the "end of range" variable - that is left for you to search/sort.

```
Sub Macro11()

    Dim strSearch As String
    Dim fVal As Range
    Dim lastrow As Long
    'Dim wk As Workbook
    Dim SelCp As Range

    'Set the value you want to search
    strSearch = "*Question*"

    'Find string on column A
    Set fVal = Sheets("Sheet1").Columns("A:A").Find(strSearch, LookIn:=xlValues, lookat:=xlWhole)

    If fVal Is Nothing Then
        'Not found
        MsgBox "Not Found"
    Else
        'Found
        MsgBox "Found at: " & fVal.Address & ", created address " & ((fVal.Address) & ":A")
        Sheets("Sheet1").Range((fVal.Address) & ":$A$99").Copy
        Sheets("Sheet2").Activate
        Sheets("Sheet2").Range("B1").PasteSpecial xlPasteValues
    End If

End Sub
```
|php|laravel|laravel-5|cors|
```
cucumber.api.cli.Main run
WARNING: You are using deprecated Main class. Please use io.cucumber.core.cli.Main
0 Scenarios
0 Steps
0m0.014s
```

I'm not getting snippets for my steps. My login feature file:

```
Feature: Login

  Scenario: Successful Login with valid credentials
    Given User Launch Chrome browser
    When User Opens URL "http://admin-demo.nopcommerce.com/login"
    And User enters email as "admin@yourstore.com" and Password "admin"
    And Click on Login
    Then Page Title Should be "Dashboard / administration"
    When User Click on Log out Link
    Then Page Title should be "Your store. Login"
    And Close browser
```

I have added all the necessary dependencies in pom.xml, such as cucumber-core, cucumber-html, cobertura, cucumber-java, cucumber-junit, cucumber-jvm-deps, cucumber-reporting, hamcrest-core, gherkin, selenium-java, and JUnit.
Replace Realm DB in an existing Android app with Room DB
|android|realm|android-room|realm-mobile-platform|
In VS Code, I was using INT_MIN/INT_MAX, but today I got an error saying "Unidentified Classifier". Instead, it suggested that I use INT8_MIN. After using this it worked perfectly. But what is the core difference between them?
I encountered an error while trying to run my Flutter application on the iPhone Simulator. Here's the error message: ```bash Unable to install /Users/enkh-amgalan/Desktop/lesson/mobile/week-1/add_sub_app/build/ios/iphonesimulator/Runner.app on 524B6B86-9413-48BF-9408-288F75BDC9F4. This is sometimes caused by a malformed plist file: ProcessException: Process exited abnormally with exit code 149: An error was encountered processing the command (domain=com.apple.CoreSimulator.SimError, code=405): Failed to find service com.apple.CoreSimulator.host_support in session com.apple.CoreSimulator.SimDevice.524B6B86-9413-48BF-9408-288F75BDC9F4 Underlying error (domain=NSPOSIXErrorDomain, code=3): The operation couldn’t be completed. No such process No such process Command: /usr/bin/arch -arm64e xcrun simctl install 524B6B86-9413-48BF-9408-288F75BDC9F4 /Users/enkh-amgalan/Desktop/lesson/mobile/week-1/add_sub_app/build/ios/iphonesimulator/Runner.app Error launching application on iPhone 15 Pro. ``` I attempted to run my Flutter application on the iPhone Simulator by using the flutter run command. I expected the application to build successfully and launch on the simulator. However, I encountered the error mentioned above instead.
Flutter: Unable to install Runner.app on iPhone Simulator - Error 149
|flutter|flutter-build|flutter-ios-build|
I am trying to test out version checks and other things with Rust's `std::process::Command`. Other commands work fine, but when I try to call `npm -v`, I get a "program not found" error.

1. [My Error](https://i.stack.imgur.com/i3hpi.png)
2. [My Code](https://i.stack.imgur.com/JvCvH.png)
3. [NPM Test](https://i.stack.imgur.com/OmRRo.png)

I want the `npm -v` command to be runnable with Rust.
Why do I get a "program not found" error when running the "npm -v" command with Rust Command::new("npm")?
|npm|rust|command|
I downloaded Qt Creator and I want to set the [Dracula theme][1] for it. It is instructed that third party themes are to be put in `$HOME\.config\QtProject\qtcreator\styles` (Windows). Fine, but after I've copied the file into that folder, the theme is not showing up in the theme list, when I'm trying to set a new theme in Qt Creator, selecting *Tools > Options > Text Editor > Theme-button*. I'm quite (90%) sure the path/location is correct, but I assume the themes have moved to somewhere else. How do I set up this theme, instead of the default dark-theme? [1]: https://draculatheme.com/qtcreator/
Applying third-party themes to Qt Creator
Problems publishing LaTeX formulas to Confluence
|python|latex|sphinx|confluence|
While preprocessing a dataset I'm facing an error. Here is my DataFrame:

```
DF

      ID  Age  Gender  Height  Weight   BMI          Label
0      1   25    Male   175.0      80  25.3  Normal Weight
1      2   30  Female   160.0      60  22.5  Normal Weight
2      3   35    Male   180.0      90  27.3     Overweight
3      4   40  Female   150.0      50  20.0    Underweight
4      5   45    Male   190.0     100  31.2          Obese
..   ...  ...     ...     ...     ...   ...            ...
103  106   11    Male   175.0      10   3.9    Underweight
104  107   16  Female   160.0      10   3.9    Underweight
105  108   21    Male   180.0      15   5.6    Underweight
106  109   26  Female   150.0      15   5.6    Underweight
107  110   31    Male   190.0      20   8.3    Underweight
```

and the code:

```
DF.isnull().sum()

DF['Height'] = DF['Height'].fillna(DF['Height'].median())
most_frequent_category = DF['Gender'].mode().iloc[0]
DF['Gender'].fillna(most_frequent_category, inplace=True)

from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
DF['Gender'] = le.fit_transform(DF['Gender'])
DF['Label'] = le.fit_transform(DF['Label'])

value_count = DF.groupby('Label').size().reset_index(name='count')
print(value_count)

z_score = (DF['Age'] - DF['Age'].mean()) / DF['Age'].std()
print(z_score)
for i in z_score:
    if i < -3:
        print("We have Outlier", i)
    elif i > 3:
        print("We have Outlier", i)
    else:
        continue

import matplotlib.pyplot as plt
import seaborn as sns
sns.set(font_scale=2)  # fixing the font size of the visualization to 2
plt.subplots(figsize=(15, 15))
heat_plot = sns.heatmap(DF.corr(method='pearson'), annot=True, cmap='Spectral', annot_kws={'size': 20})
plt.yticks(fontsize=15)
plt.xticks(fontsize=15)
plt.show()

correlations = DF.corr(method='pearson')
print(correlations['Label'].sort_values(ascending=False).to_string())

DF['Label'].iloc[0:150]

from sklearn.utils import shuffle
shuffled_DF = shuffle(DF)
rearranged_DF = shuffled_DF.reset_index(drop=True)
X = rearranged_DF.drop(columns=['Label'])
y = rearranged_DF['Label']

from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
Standard_Scaled_X = scaler.fit_transform(X)
print(Standard_Scaled_X)

from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(Standard_Scaled_X, y, test_size=0.20)
print("train data size (features): ", len(x_train))
print("train data size (target): ", len(y_train))
print("test data size (features): ", len(x_test))
print("test data size (target): ", len(y_test))
print(x_train)

from sklearn.naive_bayes import GaussianNB
NB_Classifier = GaussianNB(priors=None, var_smoothing=1e-09)

from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
k_fold = KFold(10)
accuracy = cross_val_score(NB_Classifier, x_train, y_train, cv=k_fold, scoring='accuracy')
precision = cross_val_score(NB_Classifier, x_train, y_train, cv=k_fold, scoring='precision')
recall = cross_val_score(NB_Classifier, x_train, y_train, cv=k_fold, scoring='recall')
f1_score = cross_val_score(NB_Classifier, x_train, y_train, cv=k_fold, scoring='f1')
AUC = cross_val_score(NB_Classifier, x_train, y_train, cv=k_fold, scoring='roc_auc')
```

which raises:

```
ValueError: Target is multiclass but average='binary'. Please choose another average setting, one of [None, 'micro', 'macro', 'weighted'].
```

and then:

```
print(accuracy)
overall_accuracy = sum(accuracy)/len(accuracy)
print(overall_accuracy)
print(precision)
overall_precision = sum(precision)/len(precision)
print(overall_precision)
```

output: `nan`

I had to add the whole code here. The accuracy value gives me the right output, but the precision/recall/f1_score/AUC values give me nan as output. How can I resolve it?
The issue was that the result from the recursive call did not pass back because of the $t register. I expanded the stack, saved the result into the stack, and loaded it back to get the final value.

```
catalan_recur:
    addi $sp, $sp, -20        # Allocate space on stack for 5 items: $ra, $s0, $s1, $a0, and loop counter
    sw   $ra, 0($sp)          # Save return address
    sw   $s0, 4($sp)          # Save $s0 (we will use it for temporary storage)
    sw   $s1, 8($sp)          # Save $s1 (used for accumulating result)
    sw   $a0, 12($sp)         # Save original argument $a0 (n)
    sw   $t2, 16($sp)         # Save $t2 if used before calling this function

    li   $t0, 1               # Load immediate 1 into $t0 for comparison
    li   $s1, 0               # Initialize accumulator for result in $s1
    ble  $a0, $t0, return_one # If n <= 1, return 1 as Catalan number

    # Initialize loop counter, $t2 = 0
    li   $t2, 0

loop_start:
    blt  $t2, $a0, recursive_part # If $t2 < n, continue recursion
    j    end_loop             # Else, end loop

recursive_part:
    move $a0, $t2             # Prepare first argument for recursive call
    jal  catalan_recur        # Recursive call catalan_recur(i)
    move $s0, $v0             # Store the result of catalan_recur(i) in $s0

    lw   $a0, 12($sp)         # Restore $a0 (n) for calculation of n-i-1
    sub  $a1, $a0, $t2        # n - i
    addi $a1, $a1, -1         # n - i - 1
    move $a0, $a1             # Prepare second argument for recursive call
    jal  catalan_recur        # Recursive call catalan_recur(n-i-1)

    mul  $s0, $s0, $v0        # Multiply results: catalan_recur(i) * catalan_recur(n-i-1)
    add  $s1, $s1, $s0        # Accumulate in $s1

    lw   $a0, 12($sp)         # Restore $a0 for next iteration
    addi $t2, $t2, 1          # Increment loop counter
    j    loop_start           # Jump back to start of loop

end_loop:
    move $v0, $s1             # Move accumulated result to $v0
    j    cleanup              # Jump to cleanup

return_one:
    li   $v0, 1               # Set return value to 1 for base case

cleanup:
    lw   $ra, 0($sp)          # Restore $ra
    lw   $s0, 4($sp)          # Restore $s0
    lw   $s1, 8($sp)          # Restore $s1
    lw   $a0, 12($sp)         # Restore $a0
    lw   $t2, 16($sp)         # Restore $t2
    addi $sp, $sp, 20         # Deallocate stack space
    jr   $ra                  # Return
```
In Java, memory is divided into 3 parts:

1. Method Area
2. Heap
3. Stack

----------------

1. The Method Area is the memory where classes are loaded, and where static variables and constants are defined.
2. The Stack is the memory area where a method is loaded and its execution takes place. All local variables are stored there.
3. The Heap is the memory where objects are created, i.e. where instance variables are created inside each object.
`ReferenceError: a is not defined` means that the identifier `a` isn't declared *at all* in the current scope. That's clearly not the case in the code you've shown, suggesting either that the error isn't from the code shown (perhaps a previous run?) or that the code is being executed in an unusual way, such as in a REPL like the browser console. REPLs often execute the code in a slightly different way than it would be if it were run as part of a program, so it's generally best not to worry too much about this kind of difference between a REPL and non-REPL environment. Your screenshot doesn't look like a REPL, though. Here's an example of "is not defined":

<!-- begin snippet: js hide: true console: true babel: false -->

<!-- language: lang-js -->

    console.log(a);

<!-- end snippet -->

`ReferenceError: Cannot access 'a' before initialization`¹ means that `a` **is** declared in the scope, but you tried to use it before it was initialized, when it was in the [Temporal Dead Zone][1]. That's the correct error for the code shown when executed in the normal way (not in a REPL), because `let a` exists in the same scope as the `console.log(a)` above it, and so the binding (variable) is created but not initialized before the `console.log` statement attempts to access it. Here's your code at global scope, making it easy for people to see that this is the correct error:

<!-- begin snippet: js hide: true console: true babel: false -->

<!-- language: lang-js -->

    console.log(a);
    let a=10;
    var b=19;

<!-- end snippet -->

----

¹ That's V8's error (Chromium browsers, Node.js, Deno, ...). SpiderMonkey (in Firefox and such) says `ReferenceError: can't access lexical declaration 'a' before initialization` instead. JavaScriptCore (Safari, Bun) says `ReferenceError: Cannot access uninitialized variable.`

----

Another posted answer suggests that the difference relates to using a block statement or not using one, in part because MDN's page on the error is a bit misleading. 
That's incorrect, no block statement is required. (I raised [an issue for it][2], and my [PR][3] has been merged; the page will be updated soon.) The specification's description of the relevant operation (`GetBindingValue`) of an environment record (in [this table][4]) makes no distinction between records created for blocks, specifically, vs. other kinds of environment records (global, modules, function, ...). To be sure, I've verified that the three major JavaScript engines (V8, SpiderMonkey, and JavaScriptCore) all behave as described above with the OP's code at global scope (no block statement required), giving the error about an uninitialized variable, not the "not defined" error. You can try a Chromium browser or Firefox on the snippet above to check for yourself. (You might be able to with Safari on a Mac as well. Unfortunately, iOS Safari just reports a script error. I have verified it by directly executing JavaScriptCore, since I don't have a Mac.) [1]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/let#temporal_dead_zone_tdz [2]: https://github.com/mdn/content/issues/32630 [3]: https://github.com/mdn/content/pull/32631 [4]: https://tc39.es/ecma262/#table-abstract-methods-of-environment-records
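To make the distinction concrete, here's a small runnable sketch that triggers both errors and checks them programmatically (the `tdzProbe` function and variable names are invented for illustration; exact messages vary by engine, so only the error type is checked):

```javascript
// Case 1: identifier never declared in any reachable scope.
let notDefined = false;
try {
  console.log(x); // ReferenceError: x is not defined
} catch (e) {
  notDefined = e instanceof ReferenceError;
}

// Case 2: identifier IS declared with `let`, but accessed while still
// in the Temporal Dead Zone -- no block statement required.
let tdz = false;
function tdzProbe() {
  console.log(v); // ReferenceError: Cannot access 'v' before initialization (V8 wording)
  let v = 10;
}
try {
  tdzProbe();
} catch (e) {
  tdz = e instanceof ReferenceError;
}

console.log(notDefined, tdz); // true true
```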
|themes|qt-creator|
I am trying to write a doubly linked list in Rust. The main two types are defined as:

```rust
struct IntList {
    head: *mut Node,
    tail: *mut Node,
    size: usize,
}

#[derive(Copy)]
#[derive(Clone)]
struct Node {
    data: i32,
    prev: *mut Node,
    next: *mut Node,
}
```

I used two util functions:

```rust
pub unsafe fn raw_into_box<T>(raw: *mut T) -> Box<T> {
    mem::transmute(raw)
}

pub fn box_into_raw<T>(b: Box<T>) -> *mut T {
    unsafe { mem::transmute(b) }
}
```

And I want to create an insert method on the list type:

```rust
impl IntList {
    fn new() -> Self {
        IntList {
            head: ptr::null_mut(),
            tail: ptr::null_mut(),
            size: 0,
        }
    }

    unsafe fn add_last(&mut self, v: i32) {
        let node = Node { data: v, prev: self.tail, next: ptr::null_mut() };
        let raw = box_into_raw(Box::new(node));
        let node = *raw;
        match self.size {
            0 => {
                self.head = raw;
                self.tail = raw;
            }
            _ => unsafe {
                // let mut last = *self.tail; // Notice here
                // last.next = raw;           // does not work
                self.tail = raw;
            }
        }
        self.size += 1;
        println!("added:{}", node);
    }
    // ...
```

Behold, the commented lines don't work as expected: the list will only ever contain one element. However, when I changed them to

```rust
(*self.tail).next = raw;
```

the result is correct.

---

So my question is: what's the difference between

```rust
(*self.tail).next = raw;
```

vs

```rust
let mut last = *self.tail;
last.next = raw;
```

The former works but the latter doesn't. Why can't I use an intermediate variable?
What's the difference between using an intermediate variable or not in Rust?
|c++|rust|memory-management|rust-tokio|
This unfortunately wasn't intuitive from the Quarto docs, but looking at their website source code helped me figure it out. Below is a minimal example to help you get what you wanted.

`index.qmd`

```r
---
title: Home
---

Welcome to my website.
```

`blog.qmd`

```r
---
title: Blog Home
---

Welcome to my blog landing page.
```

`2024_posts/april_01.qmd`

```r
---
title: Blog Post
---

Here is my first post.
```

Here is the important part: in your `_quarto.yml` file where you define the sidebar for your specific page [see sidebar documentation](https://quarto.org/docs/websites/website-navigation.html#side-navigation), you *must* include the parent page in the sidebar contents.

`_quarto.yml`

```yml
project:
  type: website

website:
  navbar:
    left:
      - index.qmd
      - blog.qmd
  sidebar:
    - id: blog
      collapse-level: 2
      contents:
        - blog.qmd
        - section: "2024"
          contents: 2024_posts/*
```
You need to [create scope][1] before calling Scoped service. ``` public static void DoSomething(WebApplication app) { using IServiceScope? scope = app.Services.CreateScope(); IConfigurateServices configService = scope.ServiceProvider.GetRequiredService<IConfigurateServices>(); } ``` [1]: https://learn.microsoft.com/en-us/dotnet/api/microsoft.extensions.dependencyinjection.serviceproviderserviceextensions.createscope?view=dotnet-plat-ext-6.0
The error logged after a failed insert to the db using `ent` shows: `ent: constraint failed: FOREIGN KEY constraint failed`. Is there a way to get a more verbose form of the error, e.g. which `FOREIGN KEY` wasn't set? I'm using the code below for logging the error:

```
import (
    "github.com/labstack/gommon/log"
)
...
err := // some failed insert to db
if err != nil {
    log.Errorf("Error is %v", err)
}
...
```

`err` doesn't seem to provide any relevant methods.
"ent: constraint failed: FOREIGN KEY constraint failed" get more verbose error output
|go|logging|error-handling|ent|
Let's say we have an HTTP request made by the client. The endpoint exists and is accessible by the client (this rules out 401, 403, 404 and 405). The request payload is valid (this rules out 400). The server is alive and well, and is able to handle the request and return a response (this rules out 5xx). The error arises within the processing of the request. Examples of such errors may include: * Business validation error. * Querying for an entity in a database that does not exist. Assume that the database lookup is only one part of the request processing pipeline (e.g. not the client request itself). * The server that handles the original client request makes an internal HTTP request that fails. In this case, the handling server is alive and well, while the internal HTTP request may return a 5xx. Assume that the internal HTTP request is only one part of the request processing pipeline (e.g. not the client request itself). What is the appropriate HTTP code to assign for these responses? I've seen API docs use 402 ([Stripe][1]) and 422 ([PayPal][2]), though I haven't come across anything definitive. Thoughts from the community welcome! [1]: https://stripe.com/docs/api/errors [2]: https://developer.paypal.com/docs/api/reference/api-responses/#http-status-codes
|express|rest|http|http-response-codes|
How do I find the transaction ID with the PayPal V2 API? I would like to give the buyer a transaction ID to track their transaction. I can get the order ID and capture ID, but I can't find the transaction ID. Only the deprecated V1 API exposes the transaction ID. Is it no longer available in V2? FYI, the answer for V1 is here: https://stackoverflow.com/questions/53669952/paypal-transaction-id
|paypal|
```
library(tidyverse)
library(magrittr)

df <- data.frame(year = c(1977:1981), set852 = c(1,1,0,0,0), set857=c(0,0,1,1,0), set874=c(0,0,0,1,1))
```

For each variable `set852`, `set857` and so forth (in the real dataset it's a long list) I want to create a variable that indicates whether there is a change in the time series (the values would be "start", "end" and "no change"). The additional variables should look like this:

```
df_final <- data.frame(year = c(1977:1981), c852 = c("start","end","no change","no change","no change"), c857=c("no change","no change","start","end","no change"), c874=c("no change","no change","no change","start","end"))
```

I tried this within the tidyverse with a for-loop, `mutate`, `paste` and `case_when`:

```
set_num <- as.integer(str_extract(colnames(df), "[0-9]+"))

for (i in 2:nrow(df)) {
  df %<>%
    mutate(paste0("c", set_num[[i]]) = case_when(paste("set", set_num[[i]], sep="")==1 & year == 1977 ~ "start",
                                                 paste("set", set_num[[i]], sep="")==1 & lag(paste("set", set_num[[i]], sep=""))==0 ~ "start",
                                                 paste("set", set_num[[i]], sep="")==1 & lead(paste("set", set_num[[i]], sep=""))==0 ~ "end",
                                                 TRUE~"no change"))
}
```

However, the `paste0()` call after `mutate()` is not recognized as a function call but is treated as the literal name of a new variable (one that starts with `paste0("c"...` and so forth). How do I get the code to treat `paste0()` as a function and not as a string?
You can use the [`REDUCE`](https://support.google.com/docs/answer/12568597?hl=en) function:

```
=ARRAYFORMULA(UNIQUE(REDUCE(A2:A,D2:D,LAMBDA(a,c,REGEXREPLACE(a,c,OFFSET(c,,1))))))
```

[![enter image description here][1]][1]

[1]: https://i.stack.imgur.com/k0x36.png
I'm trying to run a Spark job on Dataproc with a custom conda environment. Here's my environment yaml file: ``` name: parallel-jobs-on-dataproc channels: - default dependencies: - python=3.11 - pyspark=3.5.0 - prophet~=1.1.2 ``` I follow the [official documentation](https://cloud.google.com/dataproc/docs/tutorials/python-configuration#conda-related_cluster_properties) and start the cluster with: ``` gsutil cp "environment.yaml" gs://my-bucket-1212/my_folder/environment.yaml gcloud dataproc clusters create my_cluster \ --region=us-east1 \ --image-version=2.2-debian12 \ --properties='dataproc:conda.env.config.uri=gs://my-bucket-1212/my_folder/environment.yaml' ``` When I run this I get an error on each of the nodes, and all seem to have the same cause, that conda environment can't be activated. First I get for the master node: ``` Failed to initialize node my_cluster-m: Component miniconda3 failed to activate See output in: gs://dataproc-staging-us-east1-<project_id>-<job_id>/google-cloud-dataproc-metainfo/<another_id>/my_cluster-m/dataproc-startup-script_output ``` Then I download the file above and the relevant lines inside it are: ``` <13>Mar 19 13:31:39 startup-script[1179]: <13>Mar 19 13:31:39 activate-component-miniconda3[3744]: ++++ newval=/opt/conda/miniconda3/envs/parallel-jobs-on-dataproc/bin/x86_64-conda-linux-gnu-addr2line <13>Mar 19 13:31:39 startup-script[1179]: <13>Mar 19 13:31:39 activate-component-miniconda3[3744]: ++++ '[' '!' -x /opt/conda/miniconda3/envs/parallel-jobs-on-dataproc/bin/x86_64-conda-linux-gnu-addr2line ']' <13>Mar 19 13:31:39 startup-script[1179]: <13>Mar 19 13:31:39 activate-component-miniconda3[3744]: ++++ '[' apply = apply ']' <13>Mar 19 13:31:39 startup-script[1179]: <13>Mar 19 13:31:39 activate-component-miniconda3[3744]: +++++ echo addr2line <13>Mar 19 13:31:39 startup-script[1179]: <13>Mar 19 13:31:39 activate-component-miniconda3[3744]: +++++ tr a-z+-. 
A-ZX__ <13>Mar 19 13:31:39 startup-script[1179]: <13>Mar 19 13:31:39 activate-component-miniconda3[3744]: ++++ thing=ADDR2LINE <13>Mar 19 13:31:39 startup-script[1179]: <13>Mar 19 13:31:39 activate-component-miniconda3[3744]: ++++ eval 'oldval=$ADDR2LINE' <13>Mar 19 13:31:39 startup-script[1179]: <13>Mar 19 13:31:39 activate-component-miniconda3[3744]: /opt/conda/miniconda3/envs/parallel-jobs-on-dataproc/etc/conda/activate.d/activate-binutils_linux-64.sh: line 68: ADDR2LINE: unbound variable <13>Mar 19 13:31:39 startup-script[1179]: <13>Mar 19 13:31:39 activate-component-miniconda3[3744]: + exit_code=1 <13>Mar 19 13:31:39 startup-script[1179]: <13>Mar 19 13:31:39 activate-component-miniconda3[3744]: ++ date +%s.%N <13>Mar 19 13:31:39 startup-script[1179]: <13>Mar 19 13:31:39 activate-component-miniconda3[3744]: + local -r end=1710855099.903674478 <13>Mar 19 13:31:39 startup-script[1179]: <13>Mar 19 13:31:39 activate-component-miniconda3[3744]: + local -r runtime_s=255 <13>Mar 19 13:31:39 startup-script[1179]: <13>Mar 19 13:31:39 activate-component-miniconda3[3744]: + echo 'Component miniconda3 took 255s to activate' <13>Mar 19 13:31:39 startup-script[1179]: <13>Mar 19 13:31:39 activate-component-miniconda3[3744]: Component miniconda3 took 255s to activate <13>Mar 19 13:31:39 startup-script[1179]: <13>Mar 19 13:31:39 activate-component-miniconda3[3744]: + local -r time_file=/tmp/dataproc/components/activate/miniconda3.time <13>Mar 19 13:31:39 startup-script[1179]: <13>Mar 19 13:31:39 activate-component-miniconda3[3744]: + touch /tmp/dataproc/components/activate/miniconda3.time <13>Mar 19 13:31:39 startup-script[1179]: <13>Mar 19 13:31:39 activate-component-miniconda3[3744]: + cat <13>Mar 19 13:31:39 startup-script[1179]: <13>Mar 19 13:31:39 activate-component-miniconda3[3744]: + [[ 1 -ne 0 ]] <13>Mar 19 13:31:39 startup-script[1179]: <13>Mar 19 13:31:39 activate-component-miniconda3[3744]: + echo 1 <13>Mar 19 13:31:39 startup-script[1179]: <13>Mar 19 13:31:39 
activate-component-miniconda3[3744]: + log_and_fail miniconda3 'Component miniconda3 failed to activate' 1 <13>Mar 19 13:31:39 startup-script[1179]: <13>Mar 19 13:31:39 activate-component-miniconda3[3744]: + local component=miniconda3 <13>Mar 19 13:31:39 startup-script[1179]: <13>Mar 19 13:31:39 activate-component-miniconda3[3744]: + local 'message=Component miniconda3 failed to activate' <13>Mar 19 13:31:39 startup-script[1179]: <13>Mar 19 13:31:39 activate-component-miniconda3[3744]: + local error_code=1 <13>Mar 19 13:31:39 startup-script[1179]: <13>Mar 19 13:31:39 activate-component-miniconda3[3744]: + local client_error_indicator= <13>Mar 19 13:31:39 startup-script[1179]: <13>Mar 19 13:31:39 activate-component-miniconda3[3744]: + [[ 1 -eq 2 ]] <13>Mar 19 13:31:39 startup-script[1179]: <13>Mar 19 13:31:39 activate-component-miniconda3[3744]: + echo 'StructuredError{miniconda3, Component miniconda3 failed to activate}' <13>Mar 19 13:31:39 startup-script[1179]: <13>Mar 19 13:31:39 activate-component-miniconda3[3744]: StructuredError{miniconda3, Component miniconda3 failed to activate} ``` Any idea why this might happen? I have tried older Dataproc debian images, the ubuntu image, but none of it seems to work. Am I doing something wrong? Can I fix this on my side?
How to run a Spark job on Dataproc with custom conda env file
|apache-spark|google-cloud-platform|pyspark|conda|dataproc|
How can I make the green (`large` class) paragraph as large as `body` (grey part), or at least larger than `#contents`? I tried the ``` position: relative; width: 150%; left: -25%; ``` trick, but you'll see that when you reduce the width of the whole page, the paragraph keeps that large width and adds a horizontal scrollbar, etc. In my situation, I can't control the HTML part, but I can override the CSS part. <!-- begin snippet: js hide: false console: false babel: false --> <!-- language: lang-css --> body { margin: 0; padding: 0; font-size: 100%; padding-left: 5em; background-color: #ecf0f1; } #sb { position: fixed; left: 0; top: 0; width: 5em; height: 100%; overflow-y: auto; z-index: 1000; background-color: #3498db; } #contents { margin: 0 auto; max-width: 20rem; background-color: #e67e22; } h1, p { background-color: #f1c40f; } p.large { background-color: #2ecc71; } <!-- language: lang-html --> <body> <div id="sb"></div> <div id="contents"> <h1>hello!</h1> <p> I'm baby slow-carb fam synth swag. Adaptogen farm-to-table air plant kickstarter put a bird on it chillwave authentic 3 wolf. </p> <p class="large"> Four dollar toast post-ironic intelligentsia, aesthetic taiyaki small batch succulents readymade shabby chic portland. </p> <p> Ascot lyft grailed 8-bit mlkshk. Fam cornhole woke tattooed offal hot chicken post-ironic hammock hell of chartreuse pok pok gluten-free leggings marxism. </p> </div> </body> <!-- end snippet -->
Insertion works properly, but when I update the custom enum set column, this exception is thrown:

> java.lang.AssertionError: expectation "expectNext(1)" failed (expected: onNext(1); actual: onError(java.lang.IllegalArgumentException: Cannot encode parameter of type java.util.ArrayList ([TYPE1])))

**DB Config**

```
@Configuration
@RequiredArgsConstructor
public class DatabaseConfig extends AbstractR2dbcConfiguration {

    ... missing code

    @Bean
    public ConnectionFactory connectionFactory() {
        PostgresqlConnectionFactory postgresqlConnectionFactory = new PostgresqlConnectionFactory(
            PostgresqlConnectionConfiguration
                .builder()
                ... missing code
                .codecRegistrar(
                    EnumCodec
                        .builder()
                        .withEnum("interface_type", InterfaceType.class)
                        .build()
                )
                .build()
        );
        ... missing code
    }

    @Override
    protected List<Object> getCustomConverters() {
        return List.of(
            new InterfaceTypeWriter()
        );
    }
}
```

**Converter**

```
@WritingConverter
public class InterfaceTypeWriter extends EnumWriteSupport<InterfaceType> {}
```

**Domain object**

```
import org.springframework.data.annotation.Id;
import org.springframework.data.relational.core.mapping.Column;

@Table
public class DomainObject {

    @Id
    private Long id;

    @Column("interface_types")
    private Set<InterfaceType> interfaceTypes; //InterfaceType is a custom enum
}
```

**Dao service**

```
import org.springframework.data.r2dbc.core.R2dbcEntityTemplate;

@Service
public class Dao {

    private final R2dbcEntityTemplate template;

    public Mono<Long> update(Long id, DomainObject object) {
        Update update = Update
            .update("interface_types", object.getInterfaceTypes());

        Query query = Query.query(Criteria.where("id").is(id));

        return template.update(query, update, DomainObject.class);
    }
}
```

**Table**

```
CREATE TABLE my_table
(
    id BIGINT PRIMARY KEY,
    interface_types interface_type[]
);
```
The command sometimes successfully launches the batch file; however, sometimes it doesn't work and the window immediately closes. I was wondering what I could do to fix this issue.

```
# a couple of the commands I have tried
start-process -wait -filepath "C:\Windows\System32\cmd.exe" -argumentlist "/c 'C:\path\stream3.bat' .\refresh.bat"
start-process -wait -filepath "C:\Windows\System32\cmd.exe" -argumentlist "/c 'C:\path\stream3.bat'"
```
Trying to launch a batch file from PowerShell, but it immediately closes
|windows|powershell|batch-file|cmd|server|
# tl;dr

- Technical limitation: Multiple inheritance prevents mixing `Enum` with `Record` superclasses.
- Workaround: Keep a `record` as a member field of your enum.

```java
NamedIdentity.JILL.performer.nickname()  // ➣ "Jill the Archeress"
```

# Multiple inheritance

You asked:

> is there a reason that enum records don't exist?

The technical reason is that in Java, every enum is implicitly a subclass of the [`Enum`][1] class, while every record is implicitly a subclass of the [`Record`][2] class. Java does not support multiple inheritance, so the two cannot be combined.

Perhaps a workaround could have been devised. The Java team considered a feature to combine enum with record, but decided against it, for whatever reasons. See the [comment above](https://stackoverflow.com/questions/65995692/why-cant-enum-and-record-be-combined/65998549#comment124302508_65995692) by [Brian Goetz][3], the *Java Language Architect* at Oracle.

# Semantics

But more important is the semantics. An enum is for declaring, at compile time, a limited set of named instances. Each of the enum objects is instantiated when the enum class loads. At runtime, we cannot instantiate any more objects of that class (well, maybe with extreme reflection/introspection code, but I’ll ignore that).

A record in Java is not automatically instantiated. Your code instantiates as many objects of that class as you want, by calling `new`. So not at all the same.

# Boilerplate reduction is *not* the goal of `record`

You said:

> I thought that Java 14's record keyword would have saved me the boilerplate code

You misunderstand the purpose of the `record` feature. I suggest you read [JEP 395][4], and watch the latest presentation by Brian Goetz on the subject. As [commented by Johannes Kuhn][5], the goal of `record` is *not* to reduce boilerplate. Such reduction is a pleasant side-effect, but is not the reason for the invention of `record`.

A record is meant to be a “nominal tuple” in formal terms.
- *Tuple* means a collection of values of various types laid out in a certain order, or as Wikipedia says: “finite ordered list (sequence) of elements”.
- *Nominal* means the elements each have a name.

A record is meant to be a simple and **transparent data carrier**. *Transparent* means all its member fields are exposed. Its getter methods are simply named the same as the field, without using the `get…` convention of JavaBeans. The default implementation of `hashCode` and `equals` is to inspect each and every member field. The intention for a record is to focus on the data being carried, not behavior (methods).

Furthermore, a record is meant to be **shallowly immutable**. *Immutable* means you cannot change the primitive values, nor the object references, in an instance of a record. The objects within a record instance may themselves be mutable, which is what we mean by *shallowly*. But the values of the record’s own fields, whether primitive values or object references, cannot be changed. You cannot re-assign a substitute object as one of the record’s member fields.

- When you have a limited set of instances known at compile time, use `enum`.
- When you are writing a class whose primary job is to immutably and transparently carry a group of data fields, use `record`.

# Worthy question

I can see where the two concepts could intersect, where at compile time we know of a limited set of immutable, transparent collections of named values. So your question is valid, but not because of boilerplate reduction. [Brian Goetz said](https://stackoverflow.com/questions/65995692/why-cant-enum-and-record-be-combined/65998549#comment124302508_65995692) the Java team did indeed consider this idea.

# Workaround: Store a `record` on your `enum`

The workaround is quite simple: **keep a `record` instance on your enum.**

You could pass a record to your enum constructor, and store that record as a member field on the enum definition. Make the member field `final`.
That makes our enum [immutable][6]. So, no need to mark private, and no need to add a getter method. First, the `record` definition. ```java package org.example; public record Performer(int id , String nickname) { } ``` Next, we pass an instance of `record` to the enum constructor. ```java package org.example; public enum NamedIdentity { JOHN( new Performer( 1 , "John the Slayer" ) ), JILL( new Performer( 2 , "Jill the Archeress" ) ); final Performer performer; NamedIdentity ( final Performer performer ) { this.performer = performer; } } ``` If the `record` only makes sense within the context of the enum, we can nest the two together rather than have separate `.java` files. The `record` feature was built with nesting in mind, and works well there. Naming the nested `record` might be tricky in some cases. I imagine something like `Detail` might do as a plain generic label if no better name is obvious. ```java package org.example; public enum NamedIdentity { JOHN( new Performer( 1 , "John the Slayer" ) ), JILL( new Performer( 2 , "Jill the Archeress" ) ); final Performer performer; NamedIdentity ( final Performer performer ) { this.performer = performer; } public record Performer(int id , String nickname) {} } ``` To my mind, this workaround is a solid solution. We get clarity in the code with the bonus of reducing boilerplate. I like this as a general replacement for keeping a bunch of data fields on the enum, as using a `record` makes the intention explicit and obvious. I expect to use this in my future work, thanks to your Question. Let's exercise that code. ```java for ( NamedIdentity namedIdentity : NamedIdentity.values() ) { System.out.println( "---------------------" ); System.out.println( "enum name: " + namedIdentity.name() ); System.out.println( "id: " + namedIdentity.performer.id() ); System.out.println( "nickname: " + namedIdentity.performer.nickname() ); } System.out.println( "---------------------" ); ``` When run. 
```none
---------------------
enum name: JOHN
id: 1
nickname: John the Slayer
---------------------
enum name: JILL
id: 2
nickname: Jill the Archeress
---------------------
```

# Declare locally

FYI, now in Java 16+, we can declare enums, records, and interfaces locally. This came about as part of the work done to create the records feature. See [*JEP 395: Records*][4].

So enums, records, and interfaces can be declared at any of three levels:

- In their own `.java` file.
- Nested within a class.
- Locally, within a method.

I mention this in passing; it is not directly relevant to this Question.

[1]: https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/Enum.html
[2]: https://docs.oracle.com/en/java/javase/16/docs/api/java.base/java/lang/Record.html
[3]: https://www.linkedin.com/in/briangoetz
[4]: https://openjdk.java.net/jeps/395
[5]: https://stackoverflow.com/questions/65995692/why-cant-enum-and-record-be-combined/65998549#comment116684242_65995692
[6]: https://en.wikipedia.org/wiki/Immutable_object
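A tiny runnable sketch of such a local declaration, inside a method (the class name here is invented for the demo):

```java
public class LocalDeclarations {
    public static void main(String[] args) {
        // Java 16+: a record declared locally, visible only within this method.
        record Point(int x, int y) { }

        Point p = new Point(3, 4);
        // Records get a generated toString of the form Name[field=value, ...].
        System.out.println(p); // Point[x=3, y=4]
    }
}
```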
I need to send configuration data to a device from ThingsBoard (CE) via MQTT. The device expects a specific JSON structure, and changing the firmware of the device is not an option.

I looked into the [client-side RPC API](https://thingsboard.io/docs/reference/mqtt-api/#client-side-rpc) but the required pub/sub scheme is not supported by the device.

I looked into the [Attributes API](https://thingsboard.io/docs/reference/mqtt-api/#subscribe-to-attribute-updates-from-the-server), which seemed promising. I tried to use the [JSON value support](https://thingsboard.io/docs/reference/mqtt-api/#json-value-support) for shared attributes, but it stores and subsequently publishes the attribute as a key/value pair with my JSON as the value:

```
{
  "sharedAttributeKey": {
    "myKey1": "myValue1",
    "myKey2": "myValue2"
  }
}
```

But I need it to publish:

```
{
  "myKey1": "myValue1",
  "myKey2": "myValue2"
}
```

I tried to use the rule chain to reformat the JSON to the required format, but ThingsBoard publishes the message before the rule chain is executed.

What am I doing wrong? Any help would be highly appreciated.
thingsboard: reformat shared attribute JSON before publishing via MQTT
|iot|thingsboard|
In case someone is still facing the issue: I still haven't figured out the root cause, but here is a workaround that worked for me.

I downloaded the plugin source code from the repo, unzipped it, and added it to the plugins folder under the proper name (remove "-master" from the end of the folder name). Then run your `cordova add` (just to add the plugin to the package.json file).

Also, I found this answer helpful: https://stackoverflow.com/questions/62192348/ionic-failed-to-fetch-plugin-via-registry
You can remove `email` by re-importing the user. However, it might remove other properties like `metadata`. ```js (async () => { const uid = "<user uid>"; const user = await auth.getUser(uid); await auth.importUsers([ { uid, providerData: user.providerData, customClaims: user.customClaims, // ... set other properties you need but email }, ]); })(); ```
The themes were moved to `PathToQt\Qt\Tools\QtCreator\share\qtcreator\styles`, where the `.xml` file is to be placed.
I know very little about the Windows API, but now I need to implement some functionality with it. Before deleting a file, I need to check the write permission of the current folder. If administrative permission is required to write, delete with administrator permission; if administrator permission is not required, delete with non-administrator permission. My current process is started with administrator privileges. How should I adjust the permissions of the current process?

I used OpenProcessToken, LookupPrivilegeValue and AdjustTokenPrivileges to turn off the process's SE_DEBUG_NAME privilege, but when I then deleted a file that ordinary users cannot delete, the file was still deleted. Does anyone know where I'm going wrong?

[folder Permissions](https://i.stack.imgur.com/doYWw.png)
[file Permissions](https://i.stack.imgur.com/egCdg.png)

```c++
#include <windows.h>
#include <iostream>

bool SetPrivilege(bool bEnablePrivilege)
{
    HANDLE hToken = nullptr;
    if (!OpenProcessToken(GetCurrentProcess(), TOKEN_ADJUST_PRIVILEGES, &hToken)) {
        std::cout << "OpenProcessToken error: " << GetLastError() << std::endl;
        return false;
    }
    std::cout << "hToken: " << hToken << std::endl;

    LUID luid;
    if (!LookupPrivilegeValue(NULL, SE_DEBUG_NAME, &luid)) {
        std::cout << "LookupPrivilegeValue error: " << GetLastError() << std::endl;
        return false;
    }

    TOKEN_PRIVILEGES tp;
    tp.PrivilegeCount = 1;
    tp.Privileges[0].Luid = luid;
    if (bEnablePrivilege)
        tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
    else
        tp.Privileges[0].Attributes = 0;

    if (!AdjustTokenPrivileges(hToken, false, &tp, sizeof(TOKEN_PRIVILEGES), (PTOKEN_PRIVILEGES)NULL, (PDWORD)NULL)) {
        std::cout << "AdjustTokenPrivileges error: " << GetLastError() << std::endl;
        return false;
    }
    if (GetLastError() == ERROR_NOT_ALL_ASSIGNED) {
        std::cout << "The token does not have the specified privilege." << std::endl;
        return false;
    }
    return true;
}

int main()
{
    if (!SetPrivilege(false)) {
        std::cout << "SetPrivilege failed!" << std::endl;
        return 0;
    }

    LPCWSTR filePath = L"C:\\Users\\SH0249\\Desktop\\floder\\sample.txt";
    DeleteFile(filePath);

    getchar();
    return 0;
}
```
Creating multiple variables with for-loop and "paste"
|r|for-loop|tidyverse|paste|
null
Normally, errors in Tcl include line numbers and filenames when applicable. However, I'm finding that when errors occur in scripts executed with `interp eval`, I do not get this information.

The two examples below are exactly the same, except that Example 1 evaluates the code in the main/parent interpreter, while Example 2 evaluates it in the child interpreter.

Example 1

```
#!/bin/sh
# This line continues for Tcl, but is a single line for 'sh' \
exec tclsh "$0" ${1+"$@"}

::safe::interpCreate i
eval expr {"a" + 1}
```

Output

```
./example.tcl
invalid bareword "a"
in expression "a + 1";
should be "$a" or "{a}" or "a(...)" or ...
    (parsing expression "a + 1")
    invoked from within
"expr "a" + 1"
    ("eval" body line 1)
    invoked from within
"eval expr {"a" + 1}"
    (file "./example.tcl" line 6)
```

Example 2

```
#!/bin/sh
# This line continues for Tcl, but is a single line for 'sh' \
exec tclsh "$0" ${1+"$@"}

::safe::interpCreate i
i eval expr {"a" + 1}
```

Output

```
./example.tcl
invalid bareword "a"
in expression "a + 1";
should be "$a" or "{a}" or "a(...)" or ...
    (parsing expression "a + 1")
    invoked from within
"expr "a" + 1"
    invoked from within
"i eval expr {"a" + 1}"
    (file "./example.tcl" line 6)
```

The error messages are nearly the same, except one line is missing from Example 2's output:

```
("eval" body line 1)
```

In this example, missing that part of the error message is not a problem, since there is only one line of code being evaluated; if it were a large script, or if the error occurred when `source`'ing a file, that might be a different story.

This behavior seems weird, partly because it is inconsistent, but also because the child interpreter must know which code it is executing, so it should be able to report the line numbers of errors in that code; also, when `source`'ing a file, it should know which file it is reading the code from, since the `source` command was invoked from the child.
So is there any way to get line and file information when using `interp eval`? Alternatively, is there a way to write this code differently that could provide better error messages in scripts run in child interpreters?

Example 3 (passing code to the child interpreter as a single argument):

```
#!/bin/sh
# This line continues for Tcl, but is a single line for 'sh' \
exec tclsh "$0" ${1+"$@"}

::safe::interpCreate i
i eval {expr {"a" + 1}}
```

Output

```
./example.tcl
can't use non-numeric string as operand of "+"
    invoked from within
"expr {"a" + 1}"
    invoked from within
"i eval {expr {"a" + 1}}"
    (file "./example.tcl" line 6)
```
Hi, please help me: I'm having trouble getting a Python script to work with Firefox and Selenium. I run the script on an Ubuntu Linux VPS; when I ran it with pytest, this came out:

```none
pytest /usr/local/bin/ciaobot/ciao.py --indirizzo "https://www.test.it" --profilo "ciao1"

========================================================== test session starts ==========================================================
platform linux -- Python 3.10.12, pytest-8.0.2, pluggy-1.4.0
rootdir: /usr/local/bin/ciaobot
plugins: devtools-0.12.2, anyio-4.3.0
collected 1 item

../usr/local/bin/ciao/ciao.py F [100%]

=============================================================== FAILURES ================================================================
______________________________________________________________ test_fbpost ______________________________________________________________

params = {'filelista': None, 'gruppo': None, 'indirizzo': 'https://www.test.it', 'messaggio': None, ...}

    def test_fbpost(params):
        profile_path = '/root/.mozilla/firefox-esr/' + params['profilo']
        options = Options()
        options.add_argument('-profile')
        options.add_argument(profile_path)
        options.add_argument("--width=1665")
        options.add_argument("--height=1720")
        options.set_preference('permissions.default.image', 2)
        options.set_preference('layout.css.devPixelsPerPx', '0.6')
        service = Service('/usr/local/bin/geckodriver', log_path='/dev/null')
>       driver = Firefox(service=service, options=options)

/usr/local/bin/ciaobot/ciao.py:34:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/local/lib/python3.10/dist-packages/selenium/webdriver/firefox/webdriver.py:69: in __init__
    super().__init__(command_executor=executor, options=options)
/usr/local/lib/python3.10/dist-packages/selenium/webdriver/remote/webdriver.py:208: in __init__
    self.start_session(capabilities)
/usr/local/lib/python3.10/dist-packages/selenium/webdriver/remote/webdriver.py:292: in start_session response = self.execute(Command.NEW_SESSION, caps)["value"] /usr/local/lib/python3.10/dist-packages/selenium/webdriver/remote/webdriver.py:347: in execute self.error_handler.check_response(response) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <selenium.webdriver.remote.errorhandler.ErrorHandler object at 0x7f1232ca8490> response = {'status': 500, 'value': '{"value":{"error":"unknown error","message":"Process unexpectedly closed with status 1","stacktrace":""}}'} def check_response(self, response: Dict[str, Any]) -> None: """Checks that a JSON response from the WebDriver does not have an error. :Args: - response - The JSON response from the WebDriver server as a dictionary object. :Raises: If the response contains an error message. """ status = response.get("status", None) if not status or status == ErrorCode.SUCCESS: return value = None message = response.get("message", "") screen: str = response.get("screen", "") stacktrace = None if isinstance(status, int): value_json = response.get("value", None) if value_json and isinstance(value_json, str): import json try: value = json.loads(value_json) if len(value) == 1: value = value["value"] status = value.get("error", None) if not status: status = value.get("status", ErrorCode.UNKNOWN_ERROR) message = value.get("value") or value.get("message") if not isinstance(message, str): value = message message = message.get("message") else: message = value.get("message", None) except ValueError: pass exception_class: Type[WebDriverException] e = ErrorCode() error_codes = [item for item in dir(e) if not item.startswith("__")] for error_code in error_codes: error_info = getattr(ErrorCode, error_code) if isinstance(error_info, list) and status in error_info: exception_class = getattr(ExceptionMapping, error_code, WebDriverException) 
break else: exception_class = WebDriverException if not value: value = response["value"] if isinstance(value, str): raise exception_class(value) if message == "" and "message" in value: message = value["message"] screen = None # type: ignore[assignment] if "screen" in value: screen = value["screen"] stacktrace = None st_value = value.get("stackTrace") or value.get("stacktrace") if st_value: if isinstance(st_value, str): stacktrace = st_value.split("\n") else: stacktrace = [] try: for frame in st_value: line = frame.get("lineNumber", "") file = frame.get("fileName", "<anonymous>") if line: file = f"{file}:{line}" meth = frame.get("methodName", "<anonymous>") if "className" in frame: meth = f"{frame['className']}.{meth}" msg = " at %s (%s)" msg = msg % (meth, file) stacktrace.append(msg) except TypeError: pass if exception_class == UnexpectedAlertPresentException: alert_text = None if "data" in value: alert_text = value["data"].get("text") elif "alert" in value: alert_text = value["alert"].get("text") raise exception_class(message, screen, stacktrace, alert_text) # type: ignore[call-arg] # mypy is not smart enough here > raise exception_class(message, screen, stacktrace) E selenium.common.exceptions.WebDriverException: Message: Process unexpectedly closed with status 1 /usr/local/lib/python3.10/dist-packages/selenium/webdriver/remote/errorhandler.py:229: WebDriverException ======================================================== short test summary info ======================================================== FAILED ../usr/local/bin/ciaobot/ciao.py::test_fbpost - selenium.common.exceptions.WebDriverException: Message: Process unexpectedly closed with status 1 ========================================================== 1 failed in 10.21s =========================================================== ``` I need help please thank you
How to link an agency management application to Amadeus
Create test data:

```sql
CREATE UNLOGGED TABLE foo(
    id INT NOT NULL,
    created_at INT NOT NULL,
    data INT NOT NULL
);
INSERT INTO foo SELECT n, random()*10000000, n FROM generate_series(1,40000000) n;
CREATE INDEX ON foo(id);
CREATE INDEX ON foo(created_at);
VACUUM ANALYZE foo;
```

Due to the large number of ids in the query I'll use python:

```python
ids = [n*100 for n in range(100000)]
cursor.execute("""
    EXPLAIN ANALYZE SELECT * FROM foo
    WHERE id =ANY(%s) AND created_at BETWEEN 1000000 AND 3000000
    """, (ids,))
for row in cursor:
    print(row[0][:200])
```

```none
Index Scan using foo_id_idx on foo  (cost=0.56..334046.00 rows=20121 width=12) (actual time=8.092..331.779 rows=19845 loops=1)
  Index Cond: (id = ANY ('{0,100,200,300,400,500,600,700,800,900,1000,1100,1200,1300,1400,1500,1600,1700,1800,1900,2000,2100,2200,2300,2400,2500,2600,2700,2800,2900,3000,3100,3200,3300,3400,3500,3600,
  Filter: ((created_at >= 1000000) AND (created_at <= 3000000))
  Rows Removed by Filter: 80154
Planning Time: 39.578 ms
Execution Time: 358.758 ms
```

Planning is slow, due to the large array. It is using the index on id to fetch rows, then filters them based on created_at. Thus rows not satisfying the condition on created_at still require heap fetches. Including created_at in the index would be useful.

An index on (created_at,id) would allow scanning the requested range of created_at, but it cannot index on ids, so the ids would have to be pulled out of the index and filtered. This would only be useful if the condition on created_at is very narrow and the most selective in the query. Looking at the row counts in your EXPLAIN, I don't feel this is the case.

An index with id as the first column allows fetching rows for each id directly. Then created_at has to be compared with the requested range. I feel this is more useful.
```sql
CREATE INDEX ON foo( id ) INCLUDE ( created_at );
```

```none
Index Scan using foo_id_created_at_idx on foo  (cost=0.56..334046.00 rows=20121 width=12) (actual time=3.955..278.250 rows=19845 loops=1)
  Index Cond: (id = ANY ('{0,100,200,300,400,500,600,700,800,900,1000,1100,1200,1300,1400,1500,1600,1700,1800,1900,2000,2100,2200,2300,2400,2500,2600,2700,2800,2900,3000,3100,3200,3300,3400,3500,3600,
  Filter: ((created_at >= 1000000) AND (created_at <= 3000000))
  Rows Removed by Filter: 80154
Planning Time: 37.395 ms
Execution Time: 299.370 ms
```

This pulls created_at from the index, avoiding heap fetches for rows that will be rejected, so it is slightly faster.

```sql
CREATE INDEX ON foo( id, created_at );
```

This would be useful if there were many rows for each id, each having a different created_at value, which is not the case here.

This query may cause lots of random IOs, so if the table is on spinning disk and not SSD, it will take a lot longer. Using IN() instead of =ANY() does not change anything.

Besides including created_at in the index to avoid extra IO, there's not much opportunity to make it faster. This will need one index scan per id, and there are 100k of them, so it comes down to 3µs per id, which is pretty fast. Transferring that many rows to the client will also take time.

If you really need it faster, I'd recommend splitting the batch of ids into smaller chunks and executing them in parallel over several connections. This has the advantage of parallelizing data encoding and decoding, and also processing on the client. The following parallel python code runs in 100ms, which is quite a bit faster.
```python
import time
import psycopg2
from multiprocessing import Pool

db = None

def query(ids):
    if not ids:
        return
    global db
    if not db:
        # one connection and one prepared statement per worker process
        db = psycopg2.connect("user=peufeu password=Ast3ri* dbname=test")
        db.cursor().execute("PREPARE myplan AS SELECT * FROM unnest($1::INTEGER[]) get_id JOIN foo ON (foo.id=get_id AND foo.created_at BETWEEN $2 AND $3)")
    cursor = db.cursor()
    cursor.execute("EXECUTE myplan(%s,1000000,3000000)", (ids,))

if __name__ == "__main__":
    ids = [n*100 for n in range(100000)]
    chunks = [ids[offset:(offset+1000)] for offset in range(0, len(ids)+1, 1000)]
    st = time.time()
    with Pool(10) as p:
        p.map(query, chunks)
    print(time.time()-st)
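As an aside, the chunking step can be illustrated on its own, independent of the database (the `chunk` helper below is just for illustration, not part of the original code):

```python
def chunk(ids, size):
    """Split a list of ids into consecutive chunks of at most `size` elements."""
    return [ids[i:i + size] for i in range(0, len(ids), size)]

ids = [n * 100 for n in range(10)]
chunks = chunk(ids, 4)
print(chunks)  # [[0, 100, 200, 300], [400, 500, 600, 700], [800, 900]]
```

Each chunk preserves order, and the chunks together cover every id exactly once, so handing them to separate worker connections returns the same overall row set.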
I encountered an interesting case while practicing Python algorithms on LeetCode.

*The task: "You are given two integer arrays nums1 and nums2, sorted in non-decreasing order, and two integers m and n, representing the number of elements in nums1 and nums2 respectively." The result should be stored inside nums1 (which has length m+n, where n represents those elements that should be ignored from nums1 and added from nums2).*

I came up with the simplest solution. It gives the expected results in VSCode, but it does not pass the test cases on LeetCode, and I am wondering why. What am I missing?

```python
class Solution(object):
    def merge(self, nums1, m, nums2, n):
        """
        :type nums1: List[int]
        :type m: int
        :type nums2: List[int]
        :type n: int
        :rtype: None Do not return anything, modify nums1 in-place instead.
        """
        nums1 = sorted(nums1[0:m] + nums2[0:n])

nums1 = [1, 2, 3, 0, 0, 0]
m = 3
nums2 = [2, 5, 6]
n = 3
solution = Solution()
result = solution.merge(nums1, m, nums2, n)
print(result)
```

Please note I am a beginner; I am probably not taking something important into consideration. I guess it might be unwisely updating nums1, because I am losing information about the array, but is there anything else? I couldn't find the code for the test cases on LeetCode to understand why the problem occurred. Maybe someone could help and guide me where to look for it?
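For what it's worth, the effect the asker suspects can be demonstrated in isolation; a minimal sketch (function names here are just for illustration) of the difference between rebinding a parameter name and mutating the list object in place, which is likely what the hidden test cases check:

```python
def rebind(lst):
    lst = sorted(lst)     # rebinds the local name only; caller's list is untouched

def mutate(lst):
    lst[:] = sorted(lst)  # slice assignment writes into the object the caller holds

a = [3, 1, 2]
rebind(a)
assert a == [3, 1, 2]  # unchanged
mutate(a)
assert a == [1, 2, 3]  # modified in place
```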
I'm trying to create a simple connection to ActiveMQ using JNDI. I have:

1. A queue named `example.A`.
2. According to the [ActiveMQ JNDI documentation][1], if I want to use ConnectionFactories and Queues (Topics) via JNDI, I have to place a `jndi.properties` file on my classpath. As I understand it, the ActiveMQ classpath is the `%activemq%/conf` directory by default, and I have not changed it. So I have this property for my queue:
   ```
   queue.MyQueue = example.A
   ```
3. I have created a Java client class for ActiveMQ which uses JNDI, as below:
   ```java
   Properties jndiParameters = new Properties();
   jndiParameters.put(Context.INITIAL_CONTEXT_FACTORY, "org.apache.activemq.jndi.ActiveMQInitialContextFactory");
   jndiParameters.put(Context.PROVIDER_URL, "tcp://localhost:61616");
   Context context = new InitialContext(jndiParameters);
   ConnectionFactory connectionFactory = (ConnectionFactory) context.lookup("ConnectionFactory");
   Queue queue = (Queue) context.lookup("MyQueue");
   ```

But it cannot find my queue. It throws this exception:

```
javax.naming.NameNotFoundException: MyQueue
```

Where is my mistake?

[1]: http://activemq.apache.org/jndi-support.html
**GNU `head`** is another option (remove the last N characters from the input string _as a whole_ - even if it spans multiple lines):

    # GNU `head` only (doesn't work with the `head` that comes with macOS)
    # CAVEAT: -c operates on *bytes*, not characters.
    $ printf '1461624333' | head -c -3  # drop last 3 chars.
    1461624

Note that if you use `echo` instead of `printf`, there's a trailing `\n` that must be taken into account, so you must add `1` to the count (and the output won't have a trailing `\n`).

**Caveat:** `head -c`, despite what the short option name suggests, **operates on _bytes_ rather than _characters_**, and therefore only works robustly with single-byte encodings, such as ASCII and extended-ASCII variants (e.g., ISO-8859-1), and therefore _not_ reliably with **UTF-8**: **only if the last N characters all happen to be ASCII-range characters** will the result be as expected. Using the _long_ form of the option name, `--bytes`, makes this limitation more obvious.

---

As an aside:

* If you want to use GNU utilities on **macOS** too, you can install them via [Homebrew](https://brew.sh): `brew install coreutils`
  This installs them with a `g` prefix by default (e.g. `ghead`), so as not to override the built-in utilities, but you can override the latter by modifying your `PATH` as described in the instructions printed on installation.
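If the byte-vs-character caveat matters, one alternative worth sketching (not part of the original answer) is POSIX shell parameter expansion, whose `?` pattern matches one *character* in the shell's current locale rather than one byte:

```shell
s='1461624333'
# Drop the last 3 characters; each ? in the pattern matches one character.
echo "${s%???}"   # -> 1461624
```

In a UTF-8 locale this stays character-accurate even when the trailing characters are multi-byte, and it needs no external utility at all.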
Not sure exactly what the problem is. <!-- begin snippet: js hide: false console: true babel: false --> <!-- language: lang-css --> table { border-collapse: collapse; } th, td { border: 1px solid white; padding: 8px; } th { text-align: center; } tr:not(:last-child)>:first-child, thead th:first-child { border-bottom: 1px solid #ccc; } tr>:nth-child(2), tr>:nth-child(5) { background-color: #006400; color: white; } tr>:nth-child(3), tr>:nth-child(6) { background-color: #ffbec8; } tr>:nth-child(4) { background-color: #808080; color: white; } <!-- language: lang-html --> <table> <thead> <tr> <th>Time</th> <th class="cell1">Monday</th> <th class="cell2">Tuesday</th> <th class="cell3">Wednesday</th> <th class="cell1">Thursday</th> <th class="cell2">Friday</th> </tr> </thead> <tbody> <tr> <td>7:00</td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>8:00</td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>9:00</td> <td class="cell1">CMSC 132</td> <td></td> <td></td> <td></td> <td></td> </tr> </tbody> </table> <!-- end snippet -->
As others pointed out, make sure where you store your files. After that, you should add the `download` attribute to your anchor tag. [link](https://www.w3schools.com/TAGS/att_a_download.asp)

> The download attribute specifies that the target (the file specified in the href attribute) will be downloaded when a user clicks on the hyperlink.
>
> The optional value of the download attribute will be the new name of the file after it is downloaded.

### static files:

```html
<a href="{% static 'files/'|add:filename %}" download="{{ filename }}">Download: {{ filename }} </a>
```

(Template tags cannot be nested inside `{% static %}`, so the filename is joined on with the `add` filter.)

### media files:

```html
<a href="{{ dynamic_filename_url }}" download="{{ filename }}">Download: {{ filename }} </a>
```

If you store your media files in the filesystem of your Django app, you can use the [url tag](https://docs.djangoproject.com/en/5.0/ref/templates/builtins/#url).
My Streamlit app fails with a `ModuleNotFoundError`:

```none
ModuleNotFoundError: This app has encountered an error. The original error message is redacted to prevent data leaks.
Full error details have been recorded in the logs (if you're on Streamlit Cloud, click on 'Manage app' in the lower right of your app).

Traceback:
File "/home/adminuser/venv/lib/python3.9/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 542, in _run_script
    exec(code, module.__dict__)
File "/mount/src/finalyearproject/streamlit.py", line 2, in <module>
    import tensorflow as tf
```

I reviewed the Streamlit logs for additional error messages or clues about the issue. How do I fix it?
`scanf` **returns** EOF:

```c
#include <stdio.h>
#include <ctype.h>

int main(void) {
    char c;
    int result;
    while (1) {
        result = scanf("%c", &c);
        if (result == EOF)
            break;
        else {
            c = toupper(c);
            printf("%c", c);
        }
    }
    return 0;
}
```

You will not get EOF until you type a special keyboard key combination:

- Linux: CTRL-D
- Windows: CTRL-Z
I'm getting an inexplicable error in my Eclipse IDE:

```
The type java.util.Collection cannot be resolved. It is indirectly referenced from required type picocli.CommandLine
```

It is not allowing me to iterate a List object like the following:

```
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.time.LocalDateTime;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import gov.uscourts.bnc.InputReader;
import picocli.CommandLine;
import picocli.CommandLine.Command;
import picocli.CommandLine.Option;
...
List<ServerRecord> data = execute(hostnames);
for (ServerRecord record : data) {
    hostInfoList.add(new String[] { record.hostname(), record.ip(), record.mac(), record.os(), record.release(),
            record.version(), record.cpu(), record.memory(), record.name(), record.vmware(), record.bios() });
}
```

`data` is underlined in red and it says:

```
Can only iterate over an array or an instance of java.lang.Iterable
```

I tried cleaning the project, rebuilding, checking the Java version (21), updating the Maven project (pom.xml specifies both target and source as 21), and deleting the project and recreating it. Same error.
Here is my pom.xml: ``` <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>gov.uscourts.bnc.app</groupId> <artifactId>server-query</artifactId> <version>1.0.0</version> <name>server-query</name> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <maven.compiler.target>21</maven.compiler.target> <maven.compiler.source>21</maven.compiler.source> </properties> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-jar-plugin</artifactId> <version>3.2.0</version> <configuration> <archive> <manifest> <addDefaultImplementationEntries>true</addDefaultImplementationEntries> <addDefaultSpecificationEntries>true</addDefaultSpecificationEntries> <addClasspath>true</addClasspath> <classpathPrefix>libs/</classpathPrefix> <mainClass> gov.uscourts.bnc.app.CollectServerData </mainClass> </manifest> </archive> </configuration> </plugin> </plugins> </build> <dependencies> <dependency> <groupId>gov.uscourts.bnc</groupId> <artifactId>bnc</artifactId> <version>1.0.0</version> </dependency> <!-- https://mvnrepository.com/artifact/junit/junit --> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.13.2</version> <scope>test</scope> </dependency> <!-- https://mvnrepository.com/artifact/info.picocli/picocli --> <dependency> <groupId>info.picocli</groupId> <artifactId>picocli</artifactId> <version>4.7.5</version> </dependency> </dependencies> </project> ``` **I've just noticed ALL of my projects are doing this now. I have 8 I'm working on. I'm using picocli in all of them. I must have done something to Eclipse and broke something. Some of them were fully tested and running with no problems. **
How can I change values in (or recreate) a DataFrame using a boolean mask from another Polars DataFrame? So not just single column vectors (Series), but a whole DataFrame. For example, set values to 1000 where amount > 270; e.g. the value at the bottom would become 1000:

```none
        apples[0].amount  apples[1].amount  ...  apples[3].amount  apples[4].amount
0                    NaN         321.68012  ...               NaN               NaN
1                    NaN               NaN  ...               NaN         259.70487
2                    NaN               NaN  ...               NaN         259.70487
3                    NaN               NaN  ...               NaN         259.70487
4                    NaN               NaN  ...               NaN         259.70487
...                  ...               ...  ...               ...               ...
440582          79.57273               NaN  ...               NaN               NaN
440583               NaN               NaN  ...               NaN               NaN
440584               NaN               NaN  ...               NaN               NaN
440585               NaN               NaN  ...               NaN               NaN
440586               NaN               NaN  ...         299.91544               NaN

[440587 rows x 5 columns]
```

Desired result:

```none
        apples[0].amount  apples[1].amount  ...  apples[3].amount  apples[4].amount
0                    NaN        1000.00000  ...               NaN               NaN
1                    NaN               NaN  ...               NaN         259.70487
2                    NaN               NaN  ...               NaN         259.70487
3                    NaN               NaN  ...               NaN         259.70487
4                    NaN               NaN  ...               NaN         259.70487
...                  ...               ...  ...               ...               ...
440582          79.57273               NaN  ...               NaN               NaN
440583               NaN               NaN  ...               NaN               NaN
440584               NaN               NaN  ...               NaN               NaN
440585               NaN               NaN  ...               NaN               NaN
440586               NaN               NaN  ...        1000.00000               NaN

[440587 rows x 5 columns]
```
Here's a thought: ```r df1 |> mutate( day = as.Date(trunc(DT)), rnk = rank(odczyt.1, ties="first") ) |> filter(.by = day, any(rnk <= 3)) # DT odczyt.1 odczyt.2 day rnk # 1 2023-08-15 05:00:00 224 1.6 2023-08-15 3 # 2 2023-08-15 23:00:00 445 5.6 2023-08-15 11 # 3 2023-08-16 00:00:00 182 1.5 2023-08-16 2 # 4 2023-08-16 23:00:00 493 4.3 2023-08-16 15 # 5 2023-08-19 00:00:00 566 1.5 2023-08-19 20 # 6 2023-08-19 05:00:00 278 7.9 2023-08-19 4 # 7 2023-08-19 17:00:00 561 11.5 2023-08-19 19 # 8 2023-08-19 18:00:00 365 8.5 2023-08-19 6 # 9 2023-08-19 22:00:00 170 1.8 2023-08-19 1 # 10 2023-08-19 23:00:00 456 6.6 2023-08-19 12 ```
The input is line-buffered. When you enter characters and hit Enter, the buffer contains all the characters you have entered plus `'\n'` (newline). The `getchar` function takes one character from this buffer and returns it.

- Your first program reads one character from this buffer, prints it, and then terminates.
- Your second program loops, taking the characters from this buffer one by one, including the newline `'\n'` character.
I was working on my spreadsheet to keep track of my goals and got stuck on a strange problem I can't find any fix for; I'm not so advanced in Google Sheets yet. :( I made a test sheet just to show the problem in isolation:

https://docs.google.com/spreadsheets/d/19xUOeLoXTPH3heVFMssya5sh8ZeOaIlIBpHnypaDx-A/edit?usp=sharing

I need to count the amount of a specific task in the arrays (weekdays) by a specific value. I have `weekinfo`, which checks what day is on the calendar by position; in the test table I just made it equal 2. Then I have an `IFS` function which assigns a specific array by the `weekinfo` number: SA, SB, SC, which are just arrays under the variable `$A$3:$A`. But for some reason none of the IF variants (`COUNTIFS`, `IFS`, etc.) can give arrays as an outcome; they just can't process them, and they give an error that IFS has mismatched range sizes. Does anyone know what might be a solution for that?