|ggplot2|label|facet-wrap|
> Regex Match a pattern that only contains one set of numerals, and not more I would start by writing a _grammar_ for the "forgiving parser" you are coding. It is not clear from your examples, for instance, whether `<2112` is acceptable. Must the brackets be paired? Ditto for quotes, etc. Assuming that brackets and quotes do not need to be paired, you might have the following grammar: ##### _sign_ `+` | `-` ##### _digit_ `0` | `1` | `2` | `3` | `4` | `5` | `6` | `7` | `8` | `9` ##### _non-digit_ _any-character-that-is-not-a-digit_ ##### _integer_ [ _sign_ ] _digit_ { _digit_ } ##### _prefix_ _any-sequence-without-a-sign-or-digit_ [ _prefix_ ] _sign_ _non-digit_ [ _any-sequence-without-a-sign-or-digit_ ] ##### _suffix_ _any-sequence-without-a-digit_ ##### _forgiving-integer_ [ _prefix_ ] _integer_ [ _suffix_ ] Notes: - Items within square brackets are optional. They may appear either 0 or 1 time. - Items within curly braces are optional. They may appear 0 or more times. - Items separated by `|` are alternatives from which 1 must be chosen - Items on separate lines are alternatives from which 1 must be chosen One subtlety of this grammar is that integers can have only one sign. When more than one sign is present, all except the last are treated as part of the _prefix_, and, thus, are ignored. Are the following interpretations acceptable? If not, then the grammar must be altered. - `++42` parses as `+42` - `--42` parses as `-42` - `+-42` parses as `-42` - `-+42` parses as `+42` Another subtlety is that whitespace following a sign causes the sign to be treated as part of the prefix, and, thus, to be ignored. This is perhaps counterintuitive, and, frankly, may be unacceptable. Nevertheless, it is how the grammar works. In the example below, the negative sign is ignored, because it is part of the prefix. - `- 42` parses as `42` ### A solution without `std::regex` With a grammar in hand, it should be easier to figure out an appropriate regular expression. My solution, however, is to avoid the inefficiencies of `std::regex`, in favor of coding a simple "parser." In the following program, function `validate_integer` implements the foregoing grammar. When `validate_integer` succeeds, it returns the integer it parsed. When it fails, it throws a `std::runtime_error`. Because `validate_integer` uses `std::from_chars` to convert the integer sequence, it will not convert the test case `2112.0` from the OP. The trailing `.0` is treated as a second integer. All the other test cases work as expected. The only tricky part is the initial loop that skips over non-numeric characters. When it encounters a sign (`+` or `-`), it has to check the following character to decide whether the sign should be interpreted as the start of a numeric sequence. That is reflected in the "tricky" grammar for _prefix_ given above, where a sign must be followed by a non-digit (or another sign). 
```lang-cpp // main.cpp #include <cctype> #include <charconv> #include <iomanip> #include <iostream> #include <stdexcept> #include <string> #include <string_view> bool is_digit(unsigned const char c) { return std::isdigit(c); } bool is_sign(const char c) { return c == '+' || c == '-'; } int validate_integer(std::string const& s) { enum : std::string::size_type { one = 1u }; std::string::size_type i{}; // skip over prefix while (i < s.length()) { if (is_digit(s[i]) || is_sign(s[i]) && i + one < s.length() && is_digit(s[i + one])) break; ++i; } // throw if nothing remains if (i == s.length()) throw std::runtime_error("validation failed"); // parse integer // due to foregoing checks, this cannot fail if (s[i] == '+') ++i; // `std::from_chars` does not accept leading plus sign. auto const first{ &s[i] }; auto const last{ &s[s.length() - one] + one }; int n; auto [end, ec] { std::from_chars(first, last, n)}; i += end - first; // skip over suffix while (i < s.length() && !is_digit(s[i])) ++i; // throw if anything remains if (i != s.length()) throw std::runtime_error("validation failed"); return n; } void test(std::ostream& log, bool const expect, std::string s) { std::streamsize w{ 46 }; try { auto n = validate_integer(s); log << std::setw(w) << s << " : " << n << '\n'; } catch (std::exception const& e) { auto const msg{ e.what() }; log << std::setw(w) << s << " : " << e.what() << ( expect ? "" : " (as expected)") << '\n'; } } int main() { auto& log{ std::cout }; log << std::left; test(log, true, "<2112>"); test(log, true, "[(2112)]"); test(log, true, "\"2112, \""); test(log, true, "-2112"); test(log, true, ".2112"); test(log, true, "<span style = \"numeral\">2112</span>"); log.put('\n'); test(log, true, "++42"); test(log, true, "--42"); test(log, true, "+-42"); test(log, true, "-+42"); test(log, true, "- 42"); log.put('\n'); test(log, false, "2112.0"); test(log, false, ""); test(log, false, "21,12"); test(log, false, "\"21\",\"12, \""); test(log, false, "<span style = \"font - size:18.0pt\">2112</span>"); log.put('\n'); return 0; } // end file: main.cpp ``` ### Output The "hole" in the output, below the entry for 2112.0, is the failed conversion of the null-string. ```lang-none <2112> : 2112 [(2112)] : 2112 "2112, " : 2112 -2112 : -2112 .2112 : 2112 <span style = "numeral">2112</span> : 2112 ++42 : 42 --42 : -42 +-42 : -42 -+42 : 42 - 42 : 42 2112.0 : validation failed (as expected) : validation failed (as expected) 21,12 : validation failed (as expected) "21","12, " : validation failed (as expected) <span style = "font - size:18.0pt">2112</span> : validation failed (as expected) ```
I had to go to the load balancer and add a security group that had inbound rules for ports 80/443 and source "0.0.0.0/0".
|php|drupal-7|access-control|administrator|hook-menu|
null
Your code is using a single-thread executor to run a task which submits another task to the same single-thread executor, and then waits for that sub-task to exit. It is the same as this example, which would print "ONE" and "THREE", and never print "TWO", "cf" or "FOUR":

```
ExecutorService executor = Executors.newSingleThreadExecutor();

CompletableFuture<Void> future = CompletableFuture.runAsync(() -> {
    log("ONE");

    // This subtask can never start while the current task is still running:
    Future<?> cf = executor.submit(() -> log("TWO"));

    log("THREE");
    try {
        // Blocks forever if run from the single-thread executor:
        log("cf" + cf.get());
    } catch (Exception e) {
        throw new RuntimeException("It failed");
    }
    log("FOUR");
}, executor);
```

The subtask could only run after the main task exits - after "FOUR" was printed - but the main task is stuck awaiting `cf.get()`. The solution is easy: run the initial task on a separate thread or executor queue from the service used by the sub-tasks.
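A minimal sketch of that fix, assuming the same hypothetical `log` helper and the usual `java.util.concurrent` imports: the outer task gets its own executor, so the executor used for sub-tasks is free to run them.

```
ExecutorService outerExecutor = Executors.newSingleThreadExecutor();
ExecutorService subTaskExecutor = Executors.newSingleThreadExecutor();

CompletableFuture<Void> future = CompletableFuture.runAsync(() -> {
    log("ONE");

    // The sub-task now runs on a different executor, so it is not blocked:
    Future<?> cf = subTaskExecutor.submit(() -> log("TWO"));

    log("THREE");
    try {
        log("cf" + cf.get()); // completes normally now
    } catch (Exception e) {
        throw new RuntimeException("It failed");
    }
    log("FOUR");
}, outerExecutor);
```

With this arrangement every line prints, because `cf.get()` can complete while the outer task is still running.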
How to use unique constraint such that every key:value pair is the same in a json type in postgresql
|postgresql|
null
Heroku does not support legacy and outdated Ruby versions. See the list of Ruby versions that are supported on Heroku: https://devcenter.heroku.com/articles/ruby-support#ruby-versions You will need to upgrade to at least Ruby 3.0 to run your application on Heroku. I suggest upgrading to the latest 3.3 version.
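As a sketch of what that looks like (the exact patch version is up to you), declare the Ruby version in your `Gemfile` so Heroku installs it on the next deploy:

```
# Gemfile
source "https://rubygems.org"

ruby "3.3.0"
```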
I am new to Swift. I have created an app that stores data locally in SQLite when there is no internet connection. Later, it needs to check for an internet connection every 30 minutes and upload the saved data to a server, even if the app is closed or terminated. Can anyone help me with this? Thank you.
For simplicity, let's only look at a translation. For that we assume a scenario where the sun sits at the origin, the planet earth one unit to the right on the x-axis, and the moon one unit further right of the earth. If we want to render the sun, the planet and the moon, we would typically render the sun first, translate one unit to the right, render the earth, and lastly translate one more unit to the right and render the moon. If the planet had two or more moons instead of one, then for each moon we would have to translate to the position of the moon and then back again to the origin of the planet. The same goes for many planets orbiting the sun. Remember that any transformation operation requires a matrix multiplication. For many moons orbiting the planet and many planets orbiting the sun, that becomes costly very fast. Therefore, instead of transforming back, we store the current matrix (`glPushMatrix`) and operate on the copy. To revert, we simply discard the copy and go back to the stored matrix (`glPopMatrix`). From [glPushMatrix/glPopMatrix][1]

> `glPushMatrix` pushes the current matrix stack down by one, duplicating the current matrix. That is, after a `glPushMatrix` call, the matrix on top of the stack is identical to the one below it.
>
> `glPopMatrix` pops the current matrix stack, replacing the current matrix with the one below it on the stack.

[1]: https://registry.khronos.org/OpenGL-Refpages/gl2.1/xhtml/glPushMatrix.xml
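A minimal sketch of that pattern in legacy fixed-function OpenGL, assuming hypothetical `drawSun`, `drawEarth` and `drawMoon` helpers:

```
void renderScene(void) {
    drawSun();                       // sun at the origin

    glPushMatrix();                  // save the sun's matrix
    glTranslatef(1.0f, 0.0f, 0.0f);  // move one unit right to the earth
    drawEarth();

    glPushMatrix();                  // save the earth's matrix
    glTranslatef(1.0f, 0.0f, 0.0f);  // move one unit right to the moon
    drawMoon();
    glPopMatrix();                   // back to the earth's matrix

    glPopMatrix();                   // back to the sun's matrix
}
```

A second moon would just be another push/translate/draw/pop block relative to the earth; no manual back-translation is ever needed.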
I am new in react native expo. I am trying to create water reminder app, with react native expo and SQLite. I am constantly getting this error message when adding inserting data into table. ChatGPT and Google Gemini both not being able to solve. LOG Table created successfully ERROR Error checking date in database: {"_complete": false, "_error": null, "_running": true, "_runningTimeout": false, "_sqlQueue": {"first": undefined, "last": undefined, "length": 0}, "_websqlDatabase": {"_currentTask": {"errorCallback": [Function anonymous], "readOnly": false, "successCallback": [Function anonymous], "txnCallback": [Function anonymous]}, "_db": {"_closed": false, "_name": "waterTracker.db", "close": [Function closeAsync]}, "_running": true, "_txnQueue": {"first": undefined, "last": undefined, "length": 0}, "closeAsync": [Function bound closeAsync], "closeSync": [Function bound closeSync], "deleteAsync": [Function bound deleteAsync], "exec": [Function bound exec], "execAsync": [Function bound execAsync], "execRawQuery": [Function bound execRawQuery], "transactionAsync": [Function bound transactionAsync], "version": "1.0"}} ``` import React, { useState, useEffect } from 'react'; import { View, Text, Button, StyleSheet } from 'react-native'; import * as SQLite from 'expo-sqlite'; import { format } from 'date-fns'; const db = SQLite.openDatabase('waterTracker.db'); const App = () => { const [dailyIntake, setDailyIntake] = useState(0); const [currentDate, setCurrentDate] = useState(''); const [isDbInitialized, setIsDbInitialized] = useState(false); useEffect(() => { const initializeDatabase = () => { db.transaction(tx => { tx.executeSql( 'CREATE TABLE IF NOT EXISTS waterIntake (id INTEGER PRIMARY KEY AUTOINCREMENT, date DATE, intakeChange INTEGER, totalIntake INTEGER)', [], () => { console.log('Table created successfully'); setIsDbInitialized(true); }, error => { console.error('Error creating table: ', error); } ); }); }; initializeDatabase(); const date = format(new Date(), 'yyyy-MM-dd'); setCurrentDate(date); db.transaction(tx => { tx.executeSql( 'SELECT * FROM waterIntake WHERE date = ?', [date], (_, { rows }) => { if (rows.length === 0) { db.transaction(tx => { tx.executeSql( 'INSERT INTO waterIntake (date, totalIntake) VALUES (?, ?)', [date, 0], // Set totalIntake to initial value () => { console.log('Today\'s date inserted into the database'); }, error => { console.error('Error inserting date into the database: ', error); } ); }); } else { // Fetch and set daily intake if date exists in the database setDailyIntake(rows.item(0).totalIntake); } }, error => { console.error('Error checking date in database: ', error); } ); }); }, []); const addWater = () => { const intakeChange = 1; if (isDbInitialized) { updateIntakeAndSaveHistory(intakeChange); } else { console.error('Database is not initialized yet.'); } }; const deleteWater = () => { if (dailyIntake > 0) { const intakeChange = -1; if (isDbInitialized) { updateIntakeAndSaveHistory(intakeChange); } else { console.error('Database is not initialized yet.'); } } }; const updateIntakeAndSaveHistory = (intakeChange) => { const newIntake = dailyIntake + intakeChange; setDailyIntake(newIntake); db.transaction(tx => { tx.executeSql( 'INSERT INTO waterIntake (date, intakeChange, totalIntake) VALUES (?, ?, ?)', [currentDate, intakeChange, newIntake], (_, results) => { if (results.rowsAffected > 0) { console.log('Water intake history saved successfully'); } else { console.log('Failed to save water intake history'); } }, error => { console.error('Error saving 
water intake history: ', error); } ); }); }; return ( <View style={styles.container}> <Text style={styles.header}>Water Tracker</Text> <Button title="Add Glass" onPress={addWater} /> <Text style={styles.text}>Daily Intake: {dailyIntake} glasses</Text> <Button onPress={deleteWater} title="Delete Glass" /> <Text style={styles.text}>Today's Date: {currentDate}</Text> </View> ); }; const styles = StyleSheet.create({ container: { flex: 1, justifyContent: 'center', alignItems: 'center', backgroundColor: '#fff', }, header: { fontSize: 24, fontWeight: 'bold', marginBottom: 20, }, text: { fontSize: 18, marginBottom: 10, }, }); export default App; ```
Cannot insert into SQLite database in React Native Expo
|reactjs|sqlite|expo|native|
null
I've been trying to integrate Android Zoom SDK into my flutter project. In overall I **want to embed their Zoom SDK into my flutter** project. I know there's a Zoom SDK for flutter and I could've used it. But some features are not available there since it is still in development. So I thought of implementing Native SDKs into my project. This is the issue I keep running into: ``` FAILURE: Build failed with an exception. * What went wrong: Could not determine the dependencies of task ':app:compileDebugJavaWithJavac'. > Could not resolve all task dependencies for configuration ':app:debugCompileClasspath'. > Could not resolve project :zoom_module. Required by: project :app > No matching configuration of project :zoom_module was found. The consumer was configured to find an API of a component, preferably optimized for Android, as well as attribute 'com.android.build.api.attributes.BuildTypeAttr' with value 'debug', attribute 'com.android.build.api.attributes.AgpVersionAttr' with value '7.3.0', attribute 'org.jetbrains.kotlin.platform.type' with value 'androidJvm' but: - None of the consumable configurations have attributes. * Try: > Run with --stacktrace option to get the stack trace. > Run with --info or --debug option to get more log output. > Run with --scan to get full insights. * Get more help at https://help.gradle.org BUILD FAILED in 6s Error: Gradle task assembleDebug failed with exit code 1 ``` # **I want to use this SDK in my the native code of my flutter project**. I have downloaded the zoom SDK and successfully ran it in Android Studio. This included generating token and secret and public keys I've set up the platform channels and they seem to be working fine. Then I copied the whole SDK and tried to add it as a separate module. That's where I keep running into the abovementioned issue. There's no clear tutorials or documentation on how to do this. I've tried couple of fixed I've found here. But they were too generic This is the `build.gradle` file inside `android/app` directory of my flutter project: ``` plugins { id "com.android.application" id "kotlin-android" id "dev.flutter.flutter-gradle-plugin" } def localProperties = new Properties() def localPropertiesFile = rootProject.file('local.properties') if (localPropertiesFile.exists()) { localPropertiesFile.withReader('UTF-8') { reader -> localProperties.load(reader) } } def flutterVersionCode = localProperties.getProperty('flutter.versionCode') if (flutterVersionCode == null) { flutterVersionCode = '1' } def flutterVersionName = localProperties.getProperty('flutter.versionName') if (flutterVersionName == null) { flutterVersionName = '1.0' } android { namespace "com.example.zoom_integration" compileSdkVersion flutter.compileSdkVersion ndkVersion flutter.ndkVersion compileOptions { sourceCompatibility JavaVersion.VERSION_1_8 targetCompatibility JavaVersion.VERSION_1_8 } defaultConfig { // TODO: Specify your own unique Application ID (https://developer.android.com/studio/build/application-id.html). applicationId "com.example.zoom_integration" // You can update the following values to match your application needs. // For more information, see: https://docs.flutter.dev/deployment/android#reviewing-the-gradle-build-configuration. minSdkVersion flutter.minSdkVersion targetSdkVersion flutter.targetSdkVersion versionCode flutterVersionCode.toInteger() versionName flutterVersionName } buildTypes { release { // TODO: Add your own signing config for the release build. // Signing with the debug keys for now, so `flutter run --release` works. 
signingConfig signingConfigs.debug } } dependencies { implementation project (':zoom_module') } } flutter { source '../..' } ``` And this is the `settings.gradle` file inside my `android` directory of my flutter project: ``` pluginManagement { def flutterSdkPath = { def properties = new Properties() file("local.properties").withInputStream { properties.load(it) } def flutterSdkPath = properties.getProperty("flutter.sdk") assert flutterSdkPath != null, "flutter.sdk not set in local.properties" return flutterSdkPath } settings.ext.flutterSdkPath = flutterSdkPath() includeBuild("${settings.ext.flutterSdkPath}/packages/flutter_tools/gradle") repositories { google() mavenCentral() gradlePluginPortal() } plugins { id "dev.flutter.flutter-gradle-plugin" version "1.0.0" apply false } } plugins { id "dev.flutter.flutter-plugin-loader" version "1.0.0" id "com.android.application" version "7.3.0" apply false } include ':app', ':zoom_module' ``` And this is the `MainActivity.java` file: ``` package com.example.zoom_integration; import android.widget.Toast; import androidx.annotation.NonNull; import io.flutter.embedding.android.FlutterActivity; import io.flutter.embedding.engine.FlutterEngine; import io.flutter.plugin.common.MethodCall; import io.flutter.plugin.common.MethodChannel; public class MainActivity extends FlutterActivity { private final String channelName = "com.example.zoom_integration/zoom"; @Override public void configureFlutterEngine(@NonNull FlutterEngine flutterEngine) { super.configureFlutterEngine(flutterEngine); MethodChannel channel = new MethodChannel(flutterEngine.getDartExecutor().getBinaryMessenger(), channelName); channel.setMethodCallHandler(new MethodChannel.MethodCallHandler() { @Override public void onMethodCall(MethodCall call, MethodChannel.Result result) { // Your implementation from step 1 goes here if(call.method.equals("onClickJoinMeeting")) { showToast("Api Devvvss"); } } }); } private void showToast(String message) { Toast.makeText(MainActivity.this, message, Toast.LENGTH_SHORT).show(); } public void onMethodCall(MethodCall call, MethodChannel.Result result) { // Handle the method call based on `call.method` and `call.arguments` // You can use a switch statement or other logic to handle different methods // Send a response using `result.success(data)` or `result.error(code, message, details)` } } ``` This is my folder Structure : ``` zoom_integration ┣ .dart_tool ┣ .idea ┣ android ┃ ┣ .gradle ┃ ┣ .idea ┃ ┣ app ┃ ┣ gradle ┃ ┣ zoom_module ┃ ┃ ┣ dynamic_sample ┃ ┃ ┣ example2 ┃ ┃ ┣ feature_mobilertc ┃ ┃ ┣ gradle ┃ ┃ ┣ mobilertc ┃ ┃ ┣ sample ┃ ┃ ┣ build.gradle ┃ ┃ ┣ gradle.properties ┃ ┃ ┣ gradlew ┃ ┃ ┣ gradlew.bat ┃ ┃ ┣ lock_dependency.md ┃ ┃ ┗ settings.gradle ┃ ┣ .gitignore ┃ ┣ build.gradle ┃ ┣ gradle.properties ┃ ┣ gradlew ┃ ┣ gradlew.bat ┃ ┣ local.properties ┃ ┣ settings.gradle ┃ ┗ zoom_integration_android.iml ┣ build ┣ ios ┣ lib ┃ ┗ main.dart ┣ linux ┣ macos ┣ test ┣ web ┣ windows ┣ .metadata ┣ analysis_options.yaml ┣ pubspec.lock ┣ pubspec.yaml ┣ README.md ┗ zoom_integration.iml ```
I'm stuck integrating and configuring the Zoom Android SDK into my Flutter project
|java|android|flutter|kotlin|zoom-sdk|
null
I'm in the process of migrating data from Elasticsearch 7.17.7 to OpenSearch on AWS. Following the steps in https://docs.aws.amazon.com/opensearch-service/latest/developerguide/migration.html, I have already created a snapshot in ES version 7.17.7 and uploaded it to S3. I also successfully registered the repository: [enter image description here](https://i.stack.imgur.com/uprkO.png) However, when I am trying to restore the snapshot on AWS, I get the following error:

```
{
  "error": {
    "root_cause": [
      {
        "type": "parsing_exception",
        "reason": "Failed to parse object: unknown field [uuid] found",
        "line": 1,
        "col": 25
      }
    ],
    "type": "repository_exception",
    "reason": "[snapshot-temp] Unexpected exception when loading repository data",
    "caused_by": {
      "type": "parsing_exception",
      "reason": "Failed to parse object: unknown field [uuid] found",
      "line": 1,
      "col": 25
    }
  },
  "status": 500
}
```

I tried to list the snapshots and I get the same error. Please let me know if you have any clue about this error and how to resolve it. Thanks
AWS Opensearch - Restore snapshot - Failed to parse object: unknown field [uuid] found
Recently hopped on Mirror Networking and I'm having some trouble with commands. I have some blocks that are instantiated at runtime, and they can be clicked. My logic is: when a block is clicked, I send a message to the server with the id of the block; the server, through a game manager which has a list of all blocks, then performs the click on the block with that id for each client, using a ClientRpc. This works just fine on the host instance, but not on the client one. The code is simple:

```
[Command(requiresAuthority = false)]
public void SendClickToAServer(int index)
{
    Debug.Log("Clicked index:" + index);
}
```

But I'm getting an error and the client gets disconnected. I need to send a message to the server from a client, so I'm using a command. When I try to do it on the host instance, everything is fine, but when I try to send this message from a pure client I get this error:

Disconnecting connection: connection(390227533) because handling a message of type Mirror.CommandMessage caused an Exception. This can happen if the other side accidentally (or an attacker intentionally) sent invalid data. Reason: System.NullReferenceException: Object reference not set to an instance of an object at Mirror.NetworkServer.OnCommandMessage (Mirror.NetworkConnectionToClient conn, Mirror.CommandMessage msg, System.Int32 channelId) [0x00103] in \Mirror\Core\NetworkServer.cs:327

I added all the needed components and inheritance; as I said, it works on the host instance, I just cannot figure out which object reference is missing. Maybe there is a problem with the way I instantiate or spawn the objects? If I remove [Command] then everything works fine, but it is not synced.
Can't get commands to work in Mirror Network
|c#|unity-game-engine|networking|multiplayer|mirror|
null
I have Issues which have sub-tasks. If at least one of the sub-tasks is CLOSED, I want to consider that the parent issue is CLOSED. If not, and if at least one of the sub-tasks is SOLVED, I want to consider that the parent issue is SOLVED. In case of any other status of the sub-tasks, the parent issue is in state "other", which is not a real state, just a way to aggregate the other statuses. In other words, how do I display Issues by a status that is a synthesis of the statuses of their sub-tasks? I do not know where to start.
eazyBI Issue status built from sub-task status
|mdx|
null
|javascript|function-expression|
I'm trying to read numerous csv files and search for the word 'error'. Using csv.reader is causing an issue, as I believe it is looking for a string. I'm not sure if changing to text would help?

```
import glob
import os  # Imported os
import csv
import pathlib

def find_cell(csv_file, check):
    with open(csv_file, 'r') as file:
        reader = csv.reader(file)
        for row_idx, row in enumerate(reader):
            for col_idx, cell in enumerate(row):
                if cell == check:
                    return row_idx, col_idx
    return None

def errorCheck(Filename):
    is_match = False
    with open(Filename) as f:
        csv_file = csv.reader(f, delimiter='|')
        check = 'Error'
        cell_location = find_cell(csv_file, check)
        if cell_location:
            row_idx, col_idx = cell_location
            print("ERROR")
            print_and_log('ERROR IN FILE - ' + Filename + 'row ' + {row_idx} + 'column ' + {col_idx})
            is_match = True
    if is_match:
        return
```
null
Another way to do it is to bind the `SelectedItem` of each DataGrid in `TwoWay` mode to a separate property and set `SelectedItem1`/`SelectedItem2` back to null; you can also set a 3rd property from their setters if you need to use the selected item somewhere else (to bind the OverviewView data, for example). Properties in your ViewModel:

```
private SomeType _selectedItem;
public SomeType SelectedItem
{
    get => _selectedItem;
    private set
    {
        if (!SetProperty(ref _selectedItem, value)) return;
    }
}

private SomeType _selectedItem2;
public SomeType SelectedItem2
{
    get => _selectedItem2;
    set
    {
        if (!SetProperty(ref _selectedItem2, value) || value is null) return;
        SelectedItem = _selectedItem2;
        SelectedItem1 = null;
    }
}

private SomeType _selectedItem1;
public SomeType SelectedItem1
{
    get => _selectedItem1;
    set
    {
        if (!SetProperty(ref _selectedItem1, value) || value is null) return;
        SelectedItem = _selectedItem1;
        SelectedItem2 = null;
    }
}
```

The DataGrid's `SelectedItem` XAML:

```
SelectedItem="{Binding SelectedItem1, Mode=TwoWay, UpdateSourceTrigger=PropertyChanged}"
```

P.S. The `SetProperty()` method is from CommunityToolkit.Mvvm; it raises the `PropertyChanged` event for you.
I am trying to make a non-removable transparent watermark in iText 5. I was able to make the watermark non-removable using PdfPatternPainter, but I am still facing an issue: the watermark is not transparent enough, it is still covering the content and makes it harder to read. My code is as follows:

```
public class TestWatermark {

    public static String resourcesPath = "C:\\Users\\java\\Desktop\\TestWaterMark\\";
    public static String FILE_NAME = resourcesPath + "test.pdf";

    public static void main(String[] args) throws IOException {
        System.out.println("########## STARTED ADDING WATERMARK ###########");
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        try {
            byte[] byteArray = Files.readAllBytes(Paths.get(FILE_NAME));
            String watermarkText = "confidential";
            String fontPath = resourcesPath + "myCustomFont.ttf";
            Font arabicFont = FontFactory.getFont(fontPath, BaseFont.IDENTITY_H, 16);
            BaseFont baseFont = arabicFont.getBaseFont();
            PdfReader reader = new PdfReader(byteArray);
            PdfStamper stamper = new PdfStamper(reader, baos);
            int numberOfPages = reader.getNumberOfPages();
            float height = baseFont.getAscentPoint(watermarkText, 24) + baseFont.getDescentPoint(watermarkText, 24);
            for (int i = 1; i <= numberOfPages; i++) {
                Rectangle pageSize = reader.getPageSizeWithRotation(i);
                PdfContentByte overContent = stamper.getOverContent(i);
                PdfPatternPainter bodyPainter = stamper.getOverContent(i).createPattern(pageSize.getWidth(), pageSize.getHeight());
                BaseColor baseColor = new BaseColor(10, 10, 10);
                bodyPainter.setColorStroke(baseColor);
                bodyPainter.setColorFill(baseColor);
                bodyPainter.setLineWidth(0.85f);
                bodyPainter.setLineDash(0.2f, 0.2f, 0.2f);
                PdfGState state = new PdfGState();
                state.setFillOpacity(0.01f);
                overContent.saveState();
                overContent.setGState(state);
                for (float x = 70f; x < pageSize.getWidth(); x += height + 100) {
                    for (float y = 90; y < pageSize.getHeight(); y += height + 100) {
                        bodyPainter.beginText();
                        bodyPainter.setTextRenderingMode(PdfPatternPainter.TEXT_RENDER_MODE_FILL);
                        bodyPainter.setFontAndSize(baseFont, 13);
                        bodyPainter.showTextAlignedKerned(Element.ALIGN_MIDDLE, watermarkText, x, y, 45f);
                        bodyPainter.endText();
                        overContent.setColorFill(new PatternColor(bodyPainter));
                        overContent.rectangle(pageSize.getLeft(), pageSize.getBottom(), pageSize.getWidth(), pageSize.getHeight());
                        overContent.fill();
                    }
                }
                overContent.restoreState();
            }
            stamper.close();
            reader.close();
            byteArray = baos.toByteArray();
            File outputFile = new File(resourcesPath + "output.pdf");
            if (outputFile.exists()) {
                outputFile.delete();
            }
            Files.write(outputFile.toPath(), byteArray);
            System.out.println("########## FINISHED ADDING WATERMARK ###########");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```

Please advise how to make the watermark not cover the text.
|amazon-web-services|elasticsearch|snapshot|opensearch|amazon-opensearch|
null
I'm exporting multiple sheets to PDF with the following code: ThisWorkbook.Sheets(Array("3rd party", "Dashboard", "BS_3rd")).Select ActiveSheet.ExportAsFixedFormat Type:=xlTypePDF, fileName:=PDF_FileName, Quality:=xlQualityStandard, IncludeDocProperties:=True, IgnorePrintAreas:=False, OpenAfterPublish:=True The code works fine with any sheet except "Dashboard". That sheet in any combination (even on its own) causes a > Run-time error '1004': Application-defined or object-defined error on the second line. Selection is fine, export to PDF runs into this bug. Please advise.
As you can see I have two src/components (assets), one is in the root, the other in the /renderer. The root one was created when I added resizable components via "npx shadcn-vue@latest add resizable" (I think). Doesn't feel right. Somebody please explain how the paths should be in such configuration (Electron app using shadcn/vue) so that I can add shadcn components and properly import them. Or if you have some useful guiding links. [![The file folder explorer along with various .config files.][1]][1] [1]: https://i.stack.imgur.com/e2wvg.jpg
Electron with shadcn/vue config (src) confusion
|vue.js|electron|tailwind-css|shadcnui|electron-vue|
When using diffusers like ```from diffusers import AutoencoderKL, DDPMScheduler, UNet2DConditionModel``` getting the following error : ```ImportError: cannot import name 'DIFFUSERS_SLOW_IMPORT' from 'diffusers.utils' (/opt/conda/lib/python3.10/site-packages/diffusers/utils/__init__.py)``` Also tried downgrading the version to 0.26.3 , but didn't help. Diffusers version : **0.27.2** Any help is appreciated.
ImportError: cannot import name 'DIFFUSERS_SLOW_IMPORT' from 'diffusers.utils'
|machine-learning|artificial-intelligence|stable-diffusion|
**Approach A:**

- In this approach, each line creates a new DataFrame, and the previous DataFrame is replaced with the new one.
- This can lead to performance issues, especially with large datasets, due to the overhead of DataFrame creation and management.
- Successive withColumn statements can lead to performance issues in PySpark due to the immutability of DataFrames.

**Approach B:**

- Instead of using successive withColumn statements, you can use a single select statement to achieve the same result.
- It performs the transformations in a single pass through the data.
- Each operation in the chain is applied directly to the DataFrame returned by the preceding operation without creating intermediate DataFrames.

Approach B is typically preferred for its better performance and cleaner, more concise code.
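As an illustration, a sketch with hypothetical column names — both produce the same result, but Approach B builds one projection instead of chaining intermediate DataFrames:

```
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1, 2), (3, 4)], ["a", "b"])

# Approach A: successive withColumn calls, one new DataFrame per line
df_a = df.withColumn("sum", F.col("a") + F.col("b"))
df_a = df_a.withColumn("diff", F.col("a") - F.col("b"))
df_a = df_a.withColumn("prod", F.col("a") * F.col("b"))

# Approach B: a single select performing all transformations in one pass
df_b = df.select(
    "*",
    (F.col("a") + F.col("b")).alias("sum"),
    (F.col("a") - F.col("b")).alias("diff"),
    (F.col("a") * F.col("b")).alias("prod"),
)
```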
Please deploy your React code on Netlify, Render, etc. React code needs a server to run. GitHub Pages only serves static files; you can only host pure HTML, CSS and JS files on GitHub. It works fine on your local machine because your local machine acts as a server after you run "npm run start".
|php|composer-php|version|cpanel|
null
{"Voters":[{"Id":1431720,"DisplayName":"Robert"},{"Id":354577,"DisplayName":"Chris"},{"Id":12632699,"DisplayName":"SwissCodeMen"}]}
Update your **geckodriver** and Firefox versions and try again. I hope this will work.
This is my call:

```
public IPage<Deployment> deploymentPage(Query query, String key, String name, String tenantId) {
    DeploymentQuery deploymentQuery = repositoryService.createDeploymentQuery();
    if (StringUtil.isNotBlank(key)) {
        deploymentQuery.deploymentKeyLike(StringPool.PERCENT + key + StringPool.PERCENT);
    }
    if (StringUtil.isNotBlank(name)) {
        deploymentQuery.deploymentNameLike(StringPool.PERCENT + name + StringPool.PERCENT);
    }
    if (StringUtil.isNotBlank(tenantId)) {
        deploymentQuery.deploymentTenantId(tenantId);
    }
    IPage<Deployment> page = Condition.getPage(query);
    long count = deploymentQuery.count();
    List<Deployment> deployments = deploymentQuery.orderByTenantId().orderByDeploymentTime().desc().listPage((query.getCurrent() - 1) * query.getSize(), query.getSize());
    page.setRecords(deployments);
    page.setTotal(count);
    return page;
}
```

I use JSON to return the `IPage<Deployment>`. Error:

> Could not write JSON: (was java.lang.NullPointerException); nested exception is com.fasterxml.jackson.databind.JsonMappingException: (was java.lang.NullPointerException) (through reference chain: org.springblade.core.tool.api.R["data"]->com.baomidou.mybatisplus.extension.plugins.pagination.Page["records"]->java.util.ArrayList[0]->org.flowable.engine.impl.persistence.entity.DeploymentEntityImpl["resources"])

After debugging, I found that in the line `CommandContextUtil.getResourceEntityManager().findResourcesByDeploymentId(id);`, the call `CommandContextUtil.getResourceEntityManager()` returns null:

```
@Override
public Map<String, EngineResource> getResources() {
    if (resources == null && id != null) {
        List<ResourceEntity> resourcesList = CommandContextUtil.getResourceEntityManager().findResourcesByDeploymentId(id);
        resources = new HashMap<>();
        for (ResourceEntity resource : resourcesList) {
            resources.put(resource.getName(), resource);
        }
    }
    return resources;
}
```
In C#, you can't put statements (that are not variable declarations) in the class scope, so you can only access class members inside methods.
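A minimal sketch of the difference, using hypothetical names:

```
class Counter
{
    private int _count = 0;  // a field declaration (with initializer) is fine at class scope

    // _count++;             // error: a statement cannot appear directly in the class body

    public void Increment()
    {
        _count++;            // statements belong inside methods (or constructors, accessors, etc.)
    }
}
```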
In my case, I set the exception handler to be called before `UseEndpoints`. If it comes after `UseEndpoints`, the handler never gets called.

```
var app = builder.Build();

app.UseCors();
app.UseRouting();
app.UseAuthorization();

// Must be called before UseEndpoints, or the exception handler won't be called
app.UseExceptionHandler();

app.UseEndpoints(endpoints => endpoints.MapControllers());
app.MapControllers();

app.Run();
```
|php|wordpress|authentication|woocommerce|intersection|
null
I'm trying to enable the "Export selected" button in the Django admin for users to download data as an Excel sheet. I'm using django-import-export, but the button isn't appearing. Here's what I've done: Installed django-import-export (pip install django-import-export). Trial 1:

```
class UserAdmin(ImportExportModelAdmin):
    list_display = ('username', 'email'....)

admin.site.unregister(User)
admin.site.register(User, ImportExportModelAdmin)
```

Trial 2:

```
class UserAdmin(ExportMixin, admin.ModelAdmin):
    list_display = ('username', 'email'.....)

admin.site.unregister(User)
admin.site.register(User, UserAdmin)
```

Restarted the development server. django-import-export is in INSTALLED_APPS in settings.py. Expected behavior: The "Export selected" button should appear in the Django admin user list view. Actual behavior: The button is not displayed. **My Question**: Why is the button not showing, and how can I fix it? Any suggestions or insights into why the button might not be showing would be greatly appreciated.
Django Admin "Export selected" button not showing in Django Admin
I bought the pro version of SwiperJS to get the Triple Slider (this thing: https://triple-slider.uiinitiative.com/). But it's vanilla JS and I need it in React with TypeScript, so I need help porting it to React TS. If someone knows how to find this resource, that would be very nice! Thanks in advance!
SwiperJS ReactTS Triple Slider
|reactjs|swiper.js|
As mentioned in the comments: yes, this documentation is imprecise at best. I think it is referring to the behavior between scalars of the same type: ```python3 import numpy a = numpy.uint32(4294967295) print(a.dtype) # uint32 a += np.uint32(1) # WILL wrap to 0 with warning print(a) # 0 print(a.dtype) # uint32 ``` The behavior of your example, however, will change due to [NEP 50][1]. So as frustrating as the old behavior is, there's not much to be done but wait, unless you want to file an issue about backporting a documentation change. As documented in the [Migration Guide][2]. > The largest backwards compatibility change of this is that it means that the precision of scalars is now preserved consistently... > `np.float32(3) + 3.` now returns a `float32` when it previously returned a `float64`. I've confirmed that in your example, the type is preserved as expected. ```python3 import numpy a = numpy.uint32(4294967295) print(a.dtype) # uint32 a += 1 # will wrap to 0 print(a) # 0 print(a.dtype) # uint32 numpy.__version__ # '2.1.0.dev0+git20240318.6059db1' ``` The second NumPy 2.0 release candidate is out, in case you'd like to try it: https://mail.python.org/archives/list/numpy-discussion@python.org/thread/EGXPH26NYW3YSOFHKPIW2WUH5IK2DC6J/ [1]: https://numpy.org/neps/nep-0050-scalar-promotion.html#nep50 [2]: https://numpy.org/devdocs/numpy_2_0_migration_guide.html
Since the *secondary_id_code* is not readily available, it's likely that it is dynamically loaded onto the page, possibly via JavaScript. Websites like OddsPortal often use JavaScript to load data dynamically, which means that simply fetching the page's HTML might not reveal all the data that a browser would show to a user. Here's how to tackle this:

# 1. Analyze Network Traffic

- Use your browser's Developer Tools (usually accessible by pressing *F12* or right-clicking and selecting "Inspect") and go to the "Network" tab.
- Refresh the page and watch for XHR (XMLHttpRequest) or Fetch requests that load after the initial page load. These requests often fetch dynamic content, such as your *secondary_id_code*.

# 2. Use Selenium or a Similar Tool

- Since the *secondary_id_code* might be loaded dynamically, consider using Selenium, a tool that automates web browsers. Selenium can execute JavaScript just like a real browser, allowing you to access data that's loaded dynamically.
- Here's a simplified approach to using Selenium to access the dynamic content:

```
from selenium import webdriver

# Path to your WebDriver (e.g., ChromeDriver)
driver_path = '/path/to/your/chromedriver'

# URL of the live matches page
url = 'https://www.oddsportal.com/inplay-odds/live-now/football/'

# Initialize the WebDriver and open the URL
driver = webdriver.Chrome(executable_path=driver_path)
driver.get(url)

# You may need to wait for the page to load dynamically loaded content
# For this, Selenium provides explicit and implicit waits

# Now, you can search the DOM for the `secondary_id_code` as it would be rendered in a browser
# For example, finding an element that contains the code, or observing AJAX requests that might contain it
# This could involve analyzing the page's JavaScript or observing network requests, as mentioned earlier

# Always remember to close the WebDriver
driver.quit()
```

# 3. Decoding the *secondary_id_code*

- If you find the *secondary_id_code* but it's URL encoded (like *%79%6a%39%64%39*), you can decode it using Python's *urllib.parse.unquote()* function:

```
from urllib.parse import unquote

encoded_str = '%79%6a%39%64%39'
decoded_str = unquote(encoded_str)
print(decoded_str)  # This will print the decoded string
```
|excel|vba|pdf|runtime-error|
{"Voters":[{"Id":7994837,"DisplayName":"Delta_G"},{"Id":3518383,"DisplayName":"Juraj"},{"Id":5468463,"DisplayName":"Vega"}]}
I'm trying to add items to a linked list, however if an node's data repeats instead of returning the new node it should return the original node with the same data. I have a test file along with the methods I'm testing. the trouble comes from my addOnce() method returning the node parsed through instead of the first occurrence. trying to avoid "Error: The initial item was not returned for " + result" line of code here's the test block of code, no editing required ``` import java.util.Iterator; public class OrderedListTest { static private class Courses implements Comparable<Courses>{ String rubric; int number; int occurance; public Courses(String rub, int num, int occ) { rubric = rub; number = num; occurance = occ; } public int compareTo(Courses other) { if (rubric.compareTo(other.rubric) < 0) return -1; else if (rubric.compareTo(other.rubric) > 0) return 1; else return number - other.number; } public String toString() { return rubric + " " + number; } } public static void main(String[] args) { Courses listOfCourses[] = { new Courses("COSC", 2436, 1), new Courses("ITSE", 2409, 1), new Courses("COSC", 1436, 1), new Courses("ITSY", 1300, 1), new Courses("ITSY", 1300, 2), new Courses("COSC", 1436, 2), new Courses("COSC", 2436, 2), new Courses("ITSE", 2417, 1), new Courses("ITNW", 2309, 1), new Courses("CPMT", 1403, 1), new Courses("CPMT", 1403, 2)}; OrderedAddOnce<Courses> orderedList = new OrderedAddOnce<Courses>(); Courses result; for (int i = 0; i < listOfCourses.length; i++){ result = orderedList.addOnce(listOfCourses[i]); if (result == null) System.out.println("Error: findOrAdd returned null for " + listOfCourses[i]); else { if (result.occurance != 1) System.out.println("Error: The initial item was not returned for " + result); if (result.compareTo(listOfCourses[i]) != 0) System.out.println("Error: " + listOfCourses[i] + " was passed to findOrAdd but " + result + " was returned"); } } Iterator<Courses> classIter = orderedList.iterator(); while(classIter.hasNext()) { System.out.println(classIter.next()); } // There should be 7 courses listed in order } } ``` Here's the code in need of debugging, specifically in the addOnce() method ``` import java.util.Iterator; import java.util.NoSuchElementException; /** * * @author User */ // Interface for COSC 2436 Labs 3 and 4 /** * @param <E> The class of the items in the ordered list */ interface AddOnce <E extends Comparable<? super E>> { /** * This method searches the list for a previously added * object, if an object is found that is equal (according to the * compareTo method) to the current object, the object that was * already in the list is returned, if not the new object is * added, in order, to the list. * * @param an object to search for and add if not already present * * @return either item or an equal object that is already on the * list */ public E addOnce(E item); } //generic linked list public class OrderedAddOnce<E extends Comparable<? 
super E>> implements Iterable<E>, AddOnce<E> { private Node<E> firstNode; public OrderedAddOnce() { this.firstNode = null; } @Override public E addOnce(E item) { Node<E> current; if (firstNode == null || item.compareTo(firstNode.data) <= 0) { Node<E> newNode = new Node<>(item); newNode.next = firstNode; firstNode = newNode; return firstNode.data; } current = firstNode; while (current.next != null && item.compareTo(current.next.data) > 0) { current = current.next; } Node<E> newNode = new Node<>(item); newNode.next = current.next; current.next = newNode; return newNode.data; } @Override public Iterator iterator() { return new AddOnceIterator(); } private class AddOnceIterator implements Iterator{ private Node<E> currentNode = firstNode; @Override public boolean hasNext() { return currentNode != null; } @Override public E next() { if (!hasNext()) { throw new NoSuchElementException(); } E data = currentNode.data; currentNode = currentNode.next; return data; } } private class Node<E> { public E data; public Node<E> next; public Node(E intialData){ this.data = intialData; this.next = null; } } } ``` I've tried swapping the return statement values from newNode.data, current.data, item. but it returns the same error messages regardless ``` Error: The initial item was not returned for ITSY 1300 Error: The initial item was not returned for COSC 1436 Error: The initial item was not returned for COSC 2436 Error: The initial item was not returned for CPMT 1403 COSC 1436 COSC 1436 COSC 2436 COSC 2436 CPMT 1403 CPMT 1403 ITNW 2309 ITSE 2409 ITSE 2417 ITSY 1300 ITSY 1300 ``` I'm expecting the list without duplicates
{"Voters":[{"Id":18519921,"DisplayName":"wohlstad"},{"Id":10871073,"DisplayName":"Adrian Mole"},{"Id":4832499,"DisplayName":"Passer By"}]}
I have an app that looks like this when p-values is not selected: [![enter image description here][1]][1] And when p-values is selected, the y-axis is expanded so that the asterisks don't get cut off: [![enter image description here][2]][2] The line of code for this is: {if(input$p_values) expand_limits(y= c(0, new_est_y *1.05))} + **but I want to include the condition that the y-axis only be extended if the tallest bar requires asterisks above it, i.e. if !is.na(Sig) for max(new_est), for that specific category.** Is there a way in Shiny to convey: if(input$p_values) AND if(!is.na(Sig_y)) AND if(new_est_y ==max(new_est_y) then expand_limits? For example. In Number of Enterprises, the tallest bar is Lump Sum, and if Sig were NA, I would not want the y-axis to be extended: [![enter image description here][3]][3] My code is: cbPalette <- c("#E69F00", "#56B4E9", "#009E73") #color-blind friendly palette or "#CC79A7"? fun_select_cat <- function(table, cat) { table %>% filter(variable == cat) } ui <- fluidPage( sidebarLayout( sidebarPanel(selectInput('cat','Select Category', c('Number of Enterprises','Assets','Costs','Net Revenues','Revenues')), checkboxInput("control_mean",label = "Show average for non-recipients", value = FALSE), checkboxInput("p_values",label = "Show p-values", value = FALSE), actionButton("Explain_p_values", "Explain p-values")), mainPanel(plotOutput('plot_overall')) )) server <- function(input, output, session) { observeEvent(input$Explain_p_values, {showModal(modalDialog(p_value_text))}) output$plot_overall <- renderPlot({ control_y = fun_select_cat(table_2, input$cat) %>% pull(Control) Sig_height_y = fun_select_cat(table_2, input$cat) %>% pull(Sig_height) Sig_y = fun_select_cat(table_2, input$cat) %>% pull(Sig) new_est_y = fun_select_cat(table_2, input$cat) %>% pull(new_est) fun_select_cat(table_2, input$cat) %>% ggplot(aes(x = Treatment, y = new_est, fill = Treatment)) + geom_col() + scale_fill_manual(values = cbPalette) + guides(fill = FALSE) + scale_y_continuous(labels = label_comma(), expand = c(0,0)) + theme_classic() + scale_x_discrete(drop=FALSE) + theme(plot.title = element_text(hjust=0.5, size=14,face="bold"), axis.text=element_text(size=12)) + {if(input$p_values) geom_text(aes(label = Sig_y), y = Sig_height_y)} + {if(input$p_values) expand_limits(y= c(0, new_est_y *1.05))} + {if(input$control_mean) annotate("text", x = 3.6, y = 1.078 * control_y, label = "Control\nmean", colour = "#CC79A7", fontface =2, size = 4.5)} + {if(input$control_mean)expand_limits(x= c(1, length(levels(table_2$Treatment)) + 0.75))} + {if(input$control_mean) geom_hline(aes(yintercept = Control), linetype='dashed', col = '#CC79A7', size = 1.5)} + if(input$cat %in% c("Number of Enterprises", "Assets") ) {labs(title= input$cat, x = NULL, y = NULL) } else{labs(title = paste(input$cat, "(USD) for the last 30 days", sep =" "), x = NULL, y = NULL)} }) } shinyApp(ui = ui, server = server) [1]: https://i.stack.imgur.com/cixc5.png [2]: https://i.stack.imgur.com/TetTa.png [3]: https://i.stack.imgur.com/ImgTp.png dput(table_2) structure(list(Treatment = structure(c(1L, 1L, 1L, 1L, 1L, 3L, 3L, 3L, 3L, 3L, 2L, 2L, 2L, 2L, 2L), levels = c("Long Term", "Short Term", "Lump Sum"), class = "factor"), variable = c("Number of Enterprises", "Assets", "Costs", "Net Revenues", "Revenues", "Number of Enterprises", "Assets", "Costs", "Net Revenues", "Revenues", "Number of Enterprises", "Assets", "Costs", "Net Revenues", "Revenues"), Control = c(73.23, 100036.59, 92636.84, 54533.59, 150207.24, 73.23, 100036.59, 
92636.84, 54533.59, 150207.24, 73.23, 100036.59, 92636.84, 54533.59, 150207.24 ), Estimate = c(9.93, 36050.66, 32055.29, 28226.05, 61379.4, 14.67, 29404.54, 71903.23, 35576.39, 107746.75, 3.39, 16441.81, 8497.42, 14824.71, 23177.47), SE = c(3.96, 12589.11, 16478.13, 12334.27, 24346.05, 3.92, 10977.68, 24360.84, 13382.81, 34895.03, 3.57, 10029.27, 10462.44, 8143.69, 16080.92), Sig = c("∗∗", "∗∗∗", "∗", "∗∗", "∗∗", "∗∗∗", "∗∗∗", "∗∗∗", "∗∗∗", "∗∗∗", NA, NA, NA, "∗", NA), new_est = c(83.16, 136087.25, 124692.13, 82759.64, 211586.64, 87.9, 129441.13, 164540.07, 90109.98, 257953.99, 76.62, 116478.4, 101134.26, 69358.3, 173384.71), lower = c(75.3984, 111412.5944, 92394.9952, 58584.4708, 163868.382, 80.2168, 107924.8772, 116792.8236, 63879.6724, 189559.7312, 69.6228, 96821.0308, 80627.8776, 53396.6676, 141866.1068), higher = c(90.9216, 160761.9056, 156989.2648, 106934.8092, 259304.898, 95.5832, 150957.3828, 212287.3164, 116340.2876, 326348.2488, 83.6172, 136135.7692, 121640.6424, 85319.9324, 204903.3132 ), Sig_height = c(87.318, 142891.6125, 130926.7365, 86897.622, 222165.972, 92.295, 135913.1865, 172767.0735, 94615.479, 270851.6895, 80.451, 122302.32, 106190.973, 72826.215, 182053.9455)), class = c("grouped_df", "tbl_df", "tbl", "data.frame"), row.names = c(NA, -15L), groups = structure(list( variable = c("Assets", "Costs", "Net Revenues", "Number of Enterprises", "Revenues"), .rows = structure(list(c(2L, 7L, 12L), c(3L, 8L, 13L), c(4L, 9L, 14L), c(1L, 6L, 11L), c(5L, 10L, 15L)), ptype = integer(0), class = c("vctrs_list_of", "vctrs_vctr", "list"))), class = c("tbl_df", "tbl", "data.frame" ), row.names = c(NA, -5L), .drop = TRUE))
Increase y-axis height to fit in geom_text labels according to multiple criteria
I am trying to figure out how a server handles multiple concurrent requests so that each client receives the response to their own request. I did read a few answers online, but those confused me even more. From what I understand, a client establishes a TCP connection with the server using a `3-way handshake`, then the client sends a request; for each request the server gets a thread from the `threadpool`, the `ethernet frame is deencapsulated` to get the request data, each request gets independently processed within its thread, and the response is sent back to the client. So, is this process correct? If yes, how does the server know which IP to send the response back to, since that info gets lost during `deencapsulation`? And what if the server is `single threaded like nodejs` - does each request get its response sequentially? If not, why don't the responses get mixed up? It would be really helpful if someone could resolve these doubts. Thanks!
How does a server handle multiple requests, and how does it know where to send which response?
|node.js|networking|server|tcp|backend|
I am making a copy of a website using HTTrack, but after some processing it shows an error log. Does anybody know how I can solve this? I am not a good coder, so please suggest an easy method.

[![This is the screenshot of HTTrack][1]][1]

I searched on Google but could not find a better method.

[1]: https://i.stack.imgur.com/6iiUA.png
How to solve the HTTrack error log problem?
|mongodb|alfresco-webscripts|best-fit|
null
I'm (unfortunately) suspecting that what I'd like to do is impossible, so I'm very open to best-practice workarounds. In short, I've got a setup like the following: ```rust pub struct OffsetMod<'a> { kind: OffsetKind, composition: ChemicalComposition<'a>, } ``` Most of the time, that struct should indeed be the owner of that `ChemicalComposition`, but sometimes this data is stored differently, and the struct is peeled apart into something approximating `HashMap<ChemicalComposition<'a>, OffsetKind>`, then I have a method that iterates over this `HashMap` and returns a series of reconstituted `OffsetMod<'a>`s. Well, sorta `OffsetMod<'a>`s... Obviously the only way I could get those owned `ChemicalComposition`s back (without some very expensive `.clone()`ing) is to drain / move the `HashMap`, but that's not something I want to do. Downstream, I actually only need a reference to those `ChemicalCompositions`, so really what I want to return is a sort of borrowed `OffsetMod<'a>` — something like: ```rust pub struct BorrowedOffsetMod<'a> { kind: OffsetKind, composition: &'a ChemicalComposition<'a>, } ``` The issue, however, with creating that second struct, is that I'd now have to duplicate all of the methods / trait impls I have for `OffsetMod` for `BorrowedOffsetMod`! Elsewhere in my code, when encountering a similar case of needing a struct to come in a form that borrows its contents, and another that doesn't, I wrote something like this: ```rust pub struct Target<S = String> { pub group: S, pub location: Option<S>, pub residue: Option<S>, } ``` By default, it owns its data (`String`s, in this case), but my impl blocks are written as: ```rust impl<S: Borrow<str>> Target<S> { // — snip — } ``` In this way, I can typically assume that `Target` owns its data, but I can also "borrow" `Target` as `Target<&'a str>`, and all of the same methods will work fine because of the `Borrow<str>` bound. **Now here comes my very troubling issue:** If I try the same with my `OffsetMod` struct, the pain begins: ```rust pub struct OffsetMod<'a, C = ChemicalComposition<'a>> { kind: OffsetKind, composition: C, } ``` Which hits me with the ever-awful ``` error[E0392]: parameter `'a` is never used help: consider removing `'a`, referring to it in a field, or using a marker such as `PhantomData` ``` Now, to open with the "best" solution I've found so far, I'm a bit perplexed at why this is so easy and works fine: ```rust pub type OffsetMod<'a> = OffsetModInner<ChemicalComposition<'a>>; struct OffsetModInner<C> { kind: OffsetKind, composition: C, } ``` When that feels quite a lot like what I'd expect `OffsetMod<'a, C = ChemicalComposition<'a>>` to do behind the scenes... The reason this solution isn't quite cutting it for me, is that it now "splits" my type in two: my parameters may be able to use `OffsetMod<'a>` just fine, but for the borrowed version, I can't do `OffsetMod<&ChemicalComposition<'a>>`, but I need to use that second "hidden" `OffsetModInner<&ChemicalComposition<'a>>` or make another alias like `BorrowedOffsetMod<'a>`. Whilst this is the best I can land on, it still feels a bit messy. Now, I understand that Rust wants this `'a` bound to show up in a field somewhere, so that if you're _not_ using the default type, it still knows how to populate that lifetime. Perhaps we could claim that `C` always lives as long as `'a`, since `C` always shows up in the `Composition` field. 
```rust pub struct OffsetMod<'a, C: 'a = ChemicalComposition<'a>> { kind: OffsetKind, composition: C, } ``` But no dice there either — same error as before. Though I've tried a multitude of other things, I've not found one I'm pleased with: 1. `PhantomData` looks and feels hacky, and I'm not certain if it's semantically correct either? I don't know, perhaps I'm more open to this one than I thought... 2. Something like `Cow<'a, ChemicalComposition<'a>>` feels a bit nasty and adds runtime overhead where there really doesn't need to be. I'll only ever need references (no need to copy), it just that these `OffsetMod` structs are sometimes the only place the underlying data has to live (e.g. for `ChemicalComposition`s that _aren't_ stored in that `HashMap`)! 3. I tried some unholy GAT stuff with a `Borrow`esque trait that looked like this: ```rust pub trait NestedBorrow<T> { type Content<'a> where Self: 'a; fn borrow(&self) -> &T; } impl<T> NestedBorrow<T> for T { type Content<'a> = T where Self: 'a; fn borrow(&self) -> &T { self } } impl<T> NestedBorrow<T> for &T { type Content<'a> = T where Self: 'a; fn borrow(&self) -> &T { &**self } } #[derive(Clone, PartialEq, Eq, Hash, Debug, Serialize)] pub struct OffsetMod<'a, C: NestedBorrow<ChemicalComposition<'a>> + 'a = ChemicalComposition<'a>> { kind: OffsetKind, composition: C::Content<'a>, } ``` But that leaks `NestedBorrow` into the public API for `OffsetMod` (see https://stackoverflow.com/a/66369912), caused problems with `&ChemicalComposition<'_>` not being accepted by `OffsetKind` — it was always looking for the owned `ChemicalComposition<'_>` version, something I never figured out — and it's generally revolting. What do you think, is it possible to do better than the `type` aliases? Is there some Rust pattern that's better suited for structs that sometimes own, and sometimes borrow data — whilst keeping my `impl` blocks unified with `impl<T: Borrow<ChemicalComposition<'a>>> OffsetMod<'a, T>`?
Default type parameters on Rust structs: is it possible to provide a default type containing a lifetime?
|generics|rust|borrow-checker|higher-kinded-types|generic-associated-types|
null
Error generating libspec: Importing library 'AppiumLibrary' failed: ModuleNotFoundError: No module named 'appium.webdriver.common.touch_action' Consider adding the needed paths to the "robot.pythonpath" setting and calling the "Robot Framework: Clear caches and restart" action.robotframework

I have tried all the solutions I could find. My setup:

- Appium version: 2.0.1
- Python version: 3.9.6
- Robot Framework version: 7.0
- robotframework-appiumlibrary version: 2.0.0
This is the fix I used for my project. The code was more than this, but the idea was that I wanted to draw on an office floor plan as a navigation app for workers. I finished the project and it ran perfectly on Android, but it did not run on iOS, so I used the `navigator.platform` method and then set the canvas size based on the platform:

```js
if (/iPad|iPhone|iPod|MacIntel/.test(navigator.platform)) {
    const maxWidth = 5400; // maximum width
    canvas.width = Math.min(maxWidth, document.body.getBoundingClientRect().width || window.innerWidth || screen.width);

    // preserve the image's aspect ratio when sizing the canvas
    const aspectRatio = image.width / image.height;
    canvas.height = canvas.width / aspectRatio;

    // debug output showing the detected platform and the computed sizes
    document.querySelector('.devicename').innerHTML = `${navigator.platform} width ${canvas.width} height: ${canvas.height} image height: ${image.height} image width: ${image.width} device height: ${window.innerHeight || screen.height} width: ${window.innerWidth || screen.width}`;
}

ctx.drawImage(image, 0, 0, canvas.width, canvas.height);
```
Found the problem. The `CreatePowerPointXDDFChart.pptx` created by the code above opens properly using up to Microsoft Office 2021. And it opens properly using Microsoft Office 365 when created using Apache POI up to version 4.1.0.

The problem is that Apache POI, from version 4.1.2 on, decided to set a number format on the category axis by default. But it sets an empty string as the number format code. Microsoft Office up to version 2021 is tolerant enough to ignore that empty-string number format; Microsoft Office 365 is not as tolerant. It is not clear why, of all things, the category axis gets that setting by default, while the value axis does not.

To make it work again, one must repair that incorrect number format setting. In the code above, `bottomAxis` is the category axis. That is text, not numbers, so the number format should be `@`, which means text. The `leftAxis` is a value axis, which could have the number format `#,##0.00` if set. So the changes to the code above:

```java
...
// repair the axes' number format settings
if (bottomAxis.hasNumberFormat()) bottomAxis.setNumberFormat("@");
if (leftAxis.hasNumberFormat()) leftAxis.setNumberFormat("#,##0.00");
...
```

_______________________

Edit Feb 2024: Microsoft Office 365 Version 2402 now ignores that empty-string number format again without throwing errors while opening the Office file. And the current Apache POI version, 5.2.5, no longer uses empty strings as the number format code. Thus the workaround is not needed anymore. But of course explicitly setting the number format codes is not wrong at all, so there is no need to remove those code lines.
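For readers without the question's full listing, here is a minimal sketch of the assumed context: how the two axes referenced above are typically created with POI's XDDF API. The `chart` parameter is an assumption standing in for the chart object built in the question's code, not something from the original answer.

```java
import org.apache.poi.xddf.usermodel.chart.AxisPosition;
import org.apache.poi.xddf.usermodel.chart.XDDFCategoryAxis;
import org.apache.poi.xddf.usermodel.chart.XDDFChart;
import org.apache.poi.xddf.usermodel.chart.XDDFValueAxis;

class AxisNumberFormatRepair {

    // Creates both axes and applies the number format repair described above.
    static void createAxesWithRepair(XDDFChart chart) {
        XDDFCategoryAxis bottomAxis = chart.createCategoryAxis(AxisPosition.BOTTOM);
        XDDFValueAxis leftAxis = chart.createValueAxis(AxisPosition.LEFT);

        // Replace the empty default format that POI 4.1.2+ writes with explicit codes.
        if (bottomAxis.hasNumberFormat()) bottomAxis.setNumberFormat("@");    // category axis: text
        if (leftAxis.hasNumberFormat()) leftAxis.setNumberFormat("#,##0.00"); // value axis: numbers
    }
}
```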
I was also facing the same error when trying to install numpy, tensorflow, and other libraries for my training task. I wasn't able to solve the problem directly, so I just ran the `pip install` commands with `subprocess` inside a task in my DAG. Here is the code:

```python
import subprocess
import sys

from airflow.decorators import task


@task
def installing_dependencies_using_subprocess():
    # Install each dependency on the worker, using the same
    # interpreter that runs the task.
    for package in [
        "numpy",
        "pandas",
        "seaborn",
        "scikit-learn",
        "tensorflow",
        "joblib",
        "matplotlib",
        "s3fs",
    ]:
        subprocess.check_call([sys.executable, "-m", "pip", "install", package])
    return {"message": "Dependencies installed successfully"}
```
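For context, a minimal sketch of how the task defined above could be wired into a DAG so the installs complete before the step that needs the libraries. The DAG id and the `train_model` task are placeholders, not from the original answer, and this assumes Airflow 2.4+ with the TaskFlow API:

```python
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule=None, start_date=datetime(2024, 1, 1), catchup=False)
def training_pipeline():
    @task
    def train_model():
        # Placeholder for the actual training step that imports
        # numpy / tensorflow / etc. at run time.
        pass

    # Run the installs first, then the training task.
    installing_dependencies_using_subprocess() >> train_model()


training_pipeline()
```

Bear in mind this reinstalls the packages on the worker every run; baking them into the worker image is the more durable fix.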
I'm trying to access datasets in Power BI, but I can't get the correct permissions. I created an application in Microsoft Entra ID.

To get a token, I make this request:

```http
POST /{{tenantId}}/oauth2/v2.0/token HTTP/1.1
Host: login.microsoftonline.com
Content-Type: application/x-www-form-urlencoded
Content-Length: ...

client_id={{clientId}}
&grant_type=client_credentials
&scope=https://analysis.windows.net/powerbi/api/.default openid profile offline_access
&client_secret={{clientSecret}}
```

I receive a response with a token:

```json
{
    "token_type": "Bearer",
    "expires_in": 3599,
    "ext_expires_in": 3599,
    "access_token": "ey...Cw"
}
```

After this, I make a request to get the dataset:

```http
GET /v1.0/myorg/groups/{{workspaceId}}/datasets/{{datasetId}} HTTP/1.1
Host: api.powerbi.com
Authorization: Bearer ey...Cw
```

And I receive a 401 response:

```json
{
    "error": {
        "code": "PowerBINotAuthorizedException",
        "pbi.error": {
            "code": "PowerBINotAuthorizedException",
            "parameters": {},
            "details": [],
            "exceptionCulprit": 1
        }
    }
}
```

The following permissions are specified for the app in Microsoft Entra ID:

[Microsoft Entra ID App API Permissions](https://i.stack.imgur.com/xLi3w.png)
```ts
export const writePost = async (post: Post) => {
  const { content, title, writer, password } = post;

  const res = await fetch('http://localhost:3001/posts', {
    headers: { 'Content-Type': 'application/json' },
    method: 'POST',
    body: JSON.stringify({
      title,
      content,
      writer,
      password,
      createdDt: new Date().toISOString(),
      hits: 0,
    } as Post),
  });

  const data = (await res.json()) as Post;

  if (data.id) {
    revalidatePath('/'); // Doesn't work.
    return { success: true };
  }

  return { success: false };
};
```

Clicking a button in the client component sends a POST request to the server, as shown above. What I want is to invalidate the cache when that request succeeds (as `revalidatePath` is supposed to do) and fetch the new posts from the server when navigating to the "/" path via `useRouter.push`, but it fetches the old values. How can I get the new values?
How to use revalidatePath in the nextjs client component
|next.js|
[![enter image description here][1]][1]

I am trying to make a measurement using the CAEN DT5742 16-channel digitizer with the library [CAENPy](https://github.com/SengerM/CAENpy/blob/main/CAENpy/CAENDigitizer.py), which is basically just a wrapper around the actual [CAENDigitizer](https://www.caen.it/products/caendigitizer-library/) library. My program scans an area with a laser using stepper motors and reads out the data coming from an analog readout board via the digitizer. It's been working more or less well, but I noticed that my program randomly becomes unresponsive, and two processes (my program is multiprocessed) each draw 100% CPU (a single core).

The code I use for acquiring data with the digitizer:

```python
def read_and_save_events(self, max_num_events: int = 1):
    """Reads a specified number of events from the digitizer.

    Arguments
    ---------
    max_num_events: int, default 1
        Number of events to read.

    Returns
    -------
    nevts: int
        Number of events read.
    """
    nevts: int = 0
    data = []
    retries = 0
    while retries < MAX_RETRIES:
        retries += 1
        try:
            with self.device:
                self.log.info("Reading %d events...", max_num_events)
                while nevts < max_num_events:
                    time.sleep(0.05)
                    waveforms = self.get_waveforms()
                    current_nevts = len(waveforms)
                    nevts += current_nevts
                    data += waveforms
                    self.log.info(
                        "Read %d out of %d events...", nevts, max_num_events
                    )
            break
        except RuntimeError:
            self.log.error("Encountered error during read. Retrying...")
            self.hard_reset(self._device_id)
            self.close()
            self.device = CAEN_DT5742_Digitizer(self._device_id)
            self.init()
            time.sleep(RETRY_TIMEOUT)
    else:
        self.log.error("Too many retries, aborting read...")

    if self._save_path is None:
        self.log.warning("No save path specified, waveforms not saved!")
        return 0

    # Disentangle data and save to file
    df = pd.DataFrame(data)
    timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
    data_file = os.path.join(self._save_path, f"waveforms_{timestamp}.h5")
    self.curr_savefile = data_file
    with pd.HDFStore(data_file, "w") as store:
        for channel in df.columns:
            channel_df = []
            for eventid, event in enumerate(df[channel]):
                event_df = pd.DataFrame(event)
                for column in event_df.columns:
                    col = pd.Series(
                        event_df[column].values,
                        name=f"{eventid}_{column.split()[0]}",
                    )
                    channel_df.append(col)
            channel_df = pd.concat(channel_df, axis=1)
            store.put(channel, channel_df)
```

I profiled the program using `py-spy` and got the attached call stack (image above) for one of the heavy-duty processes. So apparently the problem is in the `_GetNumEvents` method from the library.

My question: can I even solve this bug? If not, how could I monitor my program so it can get out of this error state?

[1]: https://i.stack.imgur.com/jS7YF.jpg
Unresolved library: AppiumLibrary
|appium|robotframework|python-appium|
In all honesty, you can use whatever method works best for collision detection. In the past (in my case, for enemies dealing melee damage in VR), I've handled collision with triggers and colliders of all different sizes.

On the performance side of things, at least in my experience, collision and trigger callbacks don't fire every frame, so they don't cause any hiccups on that front. If you are doing a one-off check of whether an enemy or object is close to a unit, it shouldn't be too bad. You can also mitigate the condition checks by playing around with layers, restricting which objects collide or trigger with which, as in the sketch below.
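A minimal sketch of the trigger approach (assuming a standard Unity setup; the `ProximitySensor` name and the `"Enemy"` tag are placeholders, not from the original answer):

```csharp
using UnityEngine;

// Attach to a GameObject with a Collider marked "Is Trigger".
// Combine with the layer collision matrix (Edit > Project Settings > Physics)
// so only the layers you care about ever reach these callbacks.
public class ProximitySensor : MonoBehaviour
{
    // Fires once when something enters the trigger volume, not every frame.
    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Enemy"))
        {
            Debug.Log($"{other.name} entered range");
        }
    }

    // Fires once when it leaves again.
    void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Enemy"))
        {
            Debug.Log($"{other.name} left range");
        }
    }
}
```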
For .NET 8, and building on Jimminybob's answer: if you do want to use `Logger<SomeClass>`, add the two lines in Program.cs as explained above,

```csharp
var builder = WebApplication.CreateBuilder(args);
...
builder.Services.AddSingleton<ILoggerFactory, LoggerFactory>();
builder.Services.AddSingleton(typeof(ILogger<>), typeof(Logger<>));
...
var app = builder.Build();
```

and update the constructors of your controllers/services/repositories:

```csharp
using Microsoft.Extensions.Logging;

public AuthController(
    IConfiguration configuration,
    ILogger<AuthController> logger,
    ILoggerFactory loggerFactory,
    IActionContextAccessor actionContext)
{
    _logger = logger;
    someRepository = new SomeRepository(arg1, new Logger<SomeRepository>(loggerFactory));
    someService = new SomeService(arg1, someRepository, new Logger<SomeService>(loggerFactory));
    // ...
}
```

Then in the declarations of SomeRepository and SomeService:

```csharp
public class SomeRepository : ISomeRepository
{
    private readonly ILogger _logger;

    public SomeRepository(string arg1, ILogger<SomeRepository> logger)
    {
        _logger = logger;
        // ...
    }
}
```

**The exact error message that was being generated**:

*System.AggregateException: 'Some services are not able to be constructed (Error while validating the service descriptor 'ServiceType: SomeNamespace.Server.Services.IAuthService Lifetime: Scoped ImplementationType: SomeNamespace.Server.Services.AuthService': Unable to resolve service for type 'Microsoft.Extensions.Logging.ILogger' while attempting to activate 'SomeNamespace.Server.Services.AuthService'.)'*
> Is there a way to view traffic logs for Azure Storage for connections that got blocked by Firewall settings from Networking pane?

To check the traffic logs for Azure Storage and see the connections blocked by the **Firewall** settings under **Networking**, you can follow the steps below. For testing, I disabled **Public network access** on the storage account; now, when I try to access a blob, the firewall blocks the connection.

![enter image description here](https://i.imgur.com/GcA2yCr.png)

1. Enable diagnostic settings; skip this step if they are already set up.
2. Go to **Insights > Failures.**

![enter image description here](https://i.imgur.com/QXDNCQ8.png)

3. Here you can filter the log to see whether any storage transactions were blocked by networking.

![enter image description here](https://i.imgur.com/nP8A0b7.png)

4. **KQL** query to view the traffic logs for Azure Storage connections that were blocked by the firewall:

```kql
let serviceValues = dynamic(['blob']);
let operationValues = dynamic(['*']);
let statusValues = dynamic(['AuthorizationFailure']);
StorageBlobLogs
| union StorageQueueLogs
| union StorageTableLogs
| union StorageFileLogs
| where StatusText != "Success"
| where "*" in ('blob') or ServiceType in ('blob')
| where "*" in ('*') or OperationName in ('*')
| where "*" in ('AuthorizationFailure') or StatusText in ('AuthorizationFailure')
| extend Service = ServiceType
| extend AuthType = AuthenticationType
| extend CallerIpAddress = split(CallerIpAddress, ":")[0]
| summarize ErrorCount = count() by Service, OperationName, StatusText, StatusCode, AuthType, tostring(CallerIpAddress), Uri
| sort by ErrorCount desc
```

**Output:**

![enter image description here](https://i.imgur.com/LCiB14A.png)
The issue has also been raised in the GitHub repo; the full conversation, including the solution, can be found there: https://github.com/recharts/recharts/discussions/4268
I'm on a school Windows computer where I've downloaded Anaconda and am using Python. I'm trying to read in a .csv file. The file definitely opens/reads in on my Mac, and it's sitting right on the desktop. It's 167 MB, so not huge. The encoding is UTF-8-sig. I'm trying to load it using this path:

```python
df = pd.read_csv('\storage-universityname\student$\studentusername1\Desktop\filename.csv', encoding='utf-8-sig', dtype=str)
```

Every time, I get this error message:

> FileNotFoundError: [Errno 2] No such file or directory: '\storage-universityname\student$\studentusername1\Desktop\filename.csv'

I've tried editing the path in a number of ways, like changing all the backslashes to forward slashes, and combinations of paths like:

```
\student$\studentusername1\Desktop\filename.csv
\studentusername1\Desktop\filename.csv
\Desktop\filename.csv
C:\\storage-universityname\student$\studentusername1\Desktop\filename.csv
```

I've even tried using a script (can't remember it now) that tells me what the path to this file is, and used that path, but it still gives me the same error message. The file is in the right encoding, and I've read in files this way plenty of times in Python on my Mac, so I'm familiar with how to do it. Could it be that my university computer is a weird web of student users, so reading the file in works differently? Does anyone know how I can read in this file?
I'm a bit confused about when to use revalidatePath and redirect in combination. Some actions seem to require redirection while others don't. Are there any criteria that can help me determine when to use both, and when to just stick with revalidatePath? In the server actions I have written so far, if I don't call redirect after revalidating the path, the page still displays outdated data.
Nextjs14, redirect and revalidatePath in server action
|reactjs|next.js|