I currently have a Power Automate flow that inserts a record into the Contacts table of our Dynamics CRM when a Microsoft Form is filled out. The fields in this form consist mainly of first name, last name, organization, and company. It was created a couple of years ago, but started failing when the owner of the flow left our company. I've been tasked to look into it.
When I did, I found that it was using a deprecated Dynamics 365 connector, and it was recommended I use a Dataverse connector to insert the record. I did this, but noticed there's an additional field named "Party Number" in the Contacts table, and it's required.
[![enter image description here][1]][1]
I had a look at the dynamics CRM and initially I didn't see a Party Number in the field, but when I edited the columns in the Contacts list view, I found there is a column named "Party Number":
[![enter image description here][2]][2]
I added this column to the view to see what values are stored. They appear to be auto-generated data:
[![enter image description here][3]][3]
I was wondering if you know where this field comes from? I searched for information about this and found this link, which mentions the dual-write party model:
Dataverse customers, vendors and contacts with dual-write party model and global address book app (P...
I'm not sure if this is where it comes from, or whether there's a way to check if this is installed in our environment. Because this field is required and the Microsoft Form doesn't have it, I can't update the flow, and I'm not sure if there's a way to make Party Number not required or to enter some dummy value.
Jason
[1]: https://i.stack.imgur.com/JPOwE.jpg
[2]: https://i.stack.imgur.com/rGZyD.jpg
[3]: https://i.stack.imgur.com/gZ0Xm.jpg |
Dynamics 365: Where does the "Party Number" field in the Contacts table come from? |
|dynamics-crm|microsoft-dynamics|power-automate| |
I've been using wp-now for two weeks and I'm loving it! It's quite handy! Buuuuuttttt today I got this weird crash. I've already tried reinstalling and rolling back, but the issue persists.
Starting the server......
directory: /home/abc/sandbox/wp-content/plugins/team-members-block
mode: plugin
php: 8.0
wp: latest
WordPress latest folder already exists. Skipping download.
SQLite folder already exists. Skipping download.
Error: Could not mount /home/abc/.wp-now/sqlite-database-integration-main/db.copy: There is no such file or directory OR the parent directory does not exist.
at descriptor.value (/home/abc/.nvm/versions/node/v20.9.0/lib/node_modules/@wp-now/wp-now/node_modules/@php-wasm/node/index.cjs:68800:17)
at mountSqlitePlugin (file:///home/abc/.nvm/versions/node/v20.9.0/lib/node_modules/@wp-now/wp-now/main.js:768:9)
at runPluginOrThemeMode (file:///home/abc/.nvm/versions/node/v20.9.0/lib/node_modules/@wp-now/wp-now/main.js:698:3)
at async file:///home/zagaz/.nvm/versions/node/v20.9.0/lib/node_modules/@wp-now/wp-now/main.js:580:9
at async applyToInstances (file:///home/abc/.nvm/versions/node/v20.9.0/lib/node_modules/@wp-now/wp-now/main.js:520:5)
at async startWPNow (file:///home/abc/.nvm/versions/node/v20.9.0/lib/node_modules/@wp-now/wp-now/main.js:568:3)
at async startServer (file:///home/abc/.nvm/versions/node/v20.9.0/lib/node_modules/@wp-now/wp-now/main.js:866:42)
at async Object.handler (file:///home/abc/.nvm/versions/node/v20.9.0/lib/node_modules/@wp-now/wp-now/main.js:1058:25) {
[cause]: Error
at Object.ensureErrnoError (/home/abc/.nvm/versions/node/v20.9.0/lib/node_modules/@wp-now/wp-now/node_modules/@php-wasm/node/index.cjs:27717:33)
at Object.staticInit (/home/abc/.nvm/versions/node/v20.9.0/lib/node_modules/@wp-now/wp-now/node_modules/@php-wasm/node/index.cjs:27725:10)
at Object.init4 (/home/abc/.nvm/versions/node/v20.9.0/lib/node_modules/@wp-now/wp-now/node_modules/@php-wasm/node/index.cjs:31666:6)
at loadPHPRuntime (/home/abc/.nvm/versions/node/v20.9.0/lib/node_modules/@wp-now/wp-now/node_modules/@php-wasm/node/index.cjs:68816:38)
at doLoad (/home/abc/.nvm/versions/node/v20.9.0/lib/node_modules/@wp-now/wp-now/node_modules/@php-wasm/node/index.cjs:69879:31)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async _NodePHP.load (/home/abc/.nvm/versions/node/v20.9.0/lib/node_modules/@wp-now/wp-now/node_modules/@php-wasm/node/index.cjs:69846:12)
at async startWPNow (file:///home/abc/.nvm/versions/node/v20.9.0/lib/node_modules/@wp-now/wp-now/main.js:534:7)
at async startServer (file:///home/abc/.nvm/versions/node/v20.9.0/lib/node_modules/@wp-now/wp-now/main.js:866:42)
at async Object.handler (file:///home//.nvm/versions/node/v20.9.0/lib/node_modules/@wp-now/wp-now/main.js:1058:25) {
node: undefined,
setErrno: [Function (anonymous)],
errno: 44,
message: 'FS error'
}
}
Failed to start the server: Could not mount /home/abc/.wp-now/sqlite-database-integration-main/db.copy: There is no such file or directory OR the parent directory does not exist.
|
After updating to version 5.10.2 as suggested by Filip, I was getting a `PackageReferenceId` error, which I later addressed by creating new protected containers first and then adding the data, instead of just passing arrays to them. The code below is a little more detailed than what is available on GitHub:
```php
require_once(__DIR__ . '/vendor/autoload.php');
use SellingPartnerApi\Api\OrdersV0Api;
use SellingPartnerApi\Model\OrdersV0\ConfirmShipmentRequest;
use SellingPartnerApi\Model\OrdersV0\PackageDetail;
use SellingPartnerApi\Model\OrdersV0\ConfirmShipmentOrderItem;
$order_id = 'your_order_id_here';
$order_items_list = $this->fn_GetOrderItems($order_id);
$order_item = $order_items_list->getPayload()->getOrderItems();
$shipment_items = [];
foreach ($order_item as $item) {
    if ($item['quantity_shipped'] > 0) {
        echo "Items in order " . $order_id . " already marked shipped";
        return;
    }
    $shipment_item = new ConfirmShipmentOrderItem();
    $shipment_item->setOrderItemId($item->getOrderItemId());
    $shipment_item->setQuantity($item->getQuantityOrdered());
    $shipment_items[] = $shipment_item; // one entry per order item
}
$package_details = new PackageDetail();
$package_details->setCarrierCode('your_carrier_code');
$package_details->setCarrierName('your_shipping_carrier');
$package_details->setTrackingNumber('your_tracking_number');
$package_details->setShippingMethod('your_shipping_method');
$package_details->setPackageReferenceId('1');
$package_details->setShipDate(gmdate('Y-m-d\TH:i:s\Z', strtotime('your_ship_date')));
$package_details->setOrderItems($shipment_items); // must be an array of ConfirmShipmentOrderItem
$payload = new ConfirmShipmentRequest();
$payload->setPackageDetail($package_details);
$payload->setMarketplaceId('your_market_placeid');
$apiInstance = new OrdersV0Api($this->config); //your config settings
try {
    $apiInstance->confirmShipment($order_id, $payload);
    echo "Order " . $order_id . " Fulfilled";
    return;
} catch (Exception $e) {
    echo $order_id . ' Exception when calling OrdersV0Api->confirmShipment: ', $e->getMessage(), PHP_EOL;
    return;
}
``` |
null |
With a plugin, on the order preview (by clicking on the little eye ^^), I would like to add "the line number" or "order item" for each of the products in the invoice.
[order-preview](https://i.stack.imgur.com/dlkA1.png)
After several hook attempts, I couldn't get anything :(
I tried without result.
I would like to display the information above the product name in the modal.
Here are several unsuccessful tests that I did:
<?php
function custom_wc_order_item_display($product, $item, $item_id) {
    $product_id = $product->get_product_id();
    $product = wc_get_product($product_id);
    echo '<td class="product-custom-info">';
    echo '<strong>' . __('Numéro de série', 'woocommerce') . ':</strong> ' . $product->get_meta('serial_number');
    echo '</td>';
}
add_action('woocommerce_admin_order_item_values', 'custom_wc_order_item_display', 10, 3);
or
<?php
function custom_wc_order_item_display($product, $item, $item_id) {
    $product_id = $product->get_product_id();
    $product = wc_get_product($product_id);
    echo '<div class="custom-order-item-info"><strong>' . esc_html__('Numéro de série', 'woocommerce') . ':</strong> ' . $product->get_meta('serial_number') . '</div>';
}
add_action('woocommerce_admin_order_item_values', 'custom_wc_order_item_display', 10, 3);
or
<?php
function custom_wc_order_item_values($product, $item, $item_id) {
    $product_id = $product->get_product_id();
    $product = wc_get_product($product_id);
    echo '<td class="product-custom-info" data-title="' . esc_attr__('Custom Info', 'woocommerce') . '">';
    echo '<strong>' . esc_html__('Numéro de série', 'woocommerce') . ':</strong> ' . $product->get_meta('serial_number');
    echo '</td>';
}
add_action('woocommerce_admin_order_item_values', 'custom_wc_order_item_values', 10, 3);
or
<?php
function custom_wc_order_item_values($product, $item, $order) {
    $product_id = $product->get_product_id();
    $product = wc_get_product($product_id);
    echo '<td class="custom-order-item-info">';
    echo '<strong>Numéro de série :</strong> ' . $product->get_meta('serial_number');
    echo '</td>';
}
add_action('woocommerce_admin_order_item_values', 'custom_wc_order_item_values', 10, 3);
or
<?php
function custom_wc_order_preview_info($order) {
    $product_id = $order->get_product_id();
    $product = wc_get_product($product_id);
    echo '<div class="custom-order-preview-info">';
    echo '<h4>Informations supplémentaires :</h4>';
    echo '<p>Numéro de série : ' . $product->get_meta('serial_number') . '</p>';
    echo '</div>';
}
add_action('woocommerce_admin_order_preview_end', 'custom_wc_order_preview_info');
I don't know if it's clear but thank you in advance for your help. |
This can be handled in Angular on the component side instead of in the HTML by using this routine.
- The reference to the collection is made
- A subscription to that reference is created.
- We then check each document to see if the field we are looking for is empty
- Once the empty field is found, we store that document id in a class variable.
First, create a service (e.g. `firestore.service.ts`) to handle the backend work; this service can be useful to many other components.
This service contains the exported routines (see this post for that service, `firestore.service.ts`):
https://stackoverflow.com/questions/51336131/how-to-retrieve-document-ids-for-collections-in-angularfire2s-firestore
In your component, import this service and add a
private firestore: Firestoreservice;
parameter to your constructor.
Done!
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->

    this.firestoreData = this.firestore.getCollection(this.dataPath); // where dataPath = the collection name
    this.firestoreData.subscribe(firestoreData => {
      for (let dID of firestoreData) {
        if (dID.field1 == this.ClassVariable1) {
          if (dID.field2 == '') {
            this.ClassVariable2 = dID.id; // assign the Firestore documentID to a class variable for use later
          }
        }
      }
    });
<!-- end snippet -->
|
I'm trying to make a web browser but I can't find a good tutorial for bookmarks and tabs in my web browser, called **RadonNet**. I'm using PySide6 because I want my software to be open source.
I also use QWebEngineView, which makes it harder to find tutorials for my browser.
Sure, there are bookmarks and tabs tutorials, but they will not work with QWebEngineView.
Anything to solve this? The code is down there.
And I think Stack Overflow messed up the formatting of my code (my program works).
```python
import sys
from PySide6.QtCore import QUrl
from PySide6.QtWidgets import QApplication, QMainWindow, QLineEdit, QToolBar
from PySide6.QtGui import QAction  # Corrected import for QAction
from PySide6.QtWebEngineWidgets import QWebEngineView

class Browser(QMainWindow):
    def __init__(self):
        super().__init__()
        self.setWindowTitle('RadonNet')
        self.browser = QWebEngineView()
        self.browser.setUrl(QUrl('http://www.google.com'))
        # Navigation toolbar
        self.nav_bar = QToolBar("Navigation")
        self.addToolBar(self.nav_bar)
        # Back button
        back_btn = QAction('Back', self)
        back_btn.triggered.connect(self.browser.back)
        self.nav_bar.addAction(back_btn)
        # Forward button
        forward_btn = QAction('Forward', self)
        forward_btn.triggered.connect(self.browser.forward)
        self.nav_bar.addAction(forward_btn)
        # Refresh button
        refresh_btn = QAction('Refresh', self)
        refresh_btn.triggered.connect(self.browser.reload)
        self.nav_bar.addAction(refresh_btn)
        # Home button
        home_btn = QAction('Home', self)
        home_btn.triggered.connect(self.navigate_home)
        self.nav_bar.addAction(home_btn)
        # URL bar
        self.url_bar = QLineEdit()
        self.url_bar.returnPressed.connect(self.navigate_to_url)
        self.nav_bar.addWidget(self.url_bar)
        self.setCentralWidget(self.browser)

    def navigate_home(self):
        self.browser.setUrl(QUrl('http://www.google.com'))

    def navigate_to_url(self):
        url = self.url_bar.text()
        self.browser.setUrl(QUrl(url))

if __name__ == '__main__':
    app = QApplication(sys.argv)
    QApplication.setApplicationName('RadonNet')
    window = Browser()
    window.show()
    sys.exit(app.exec_())
```
I tried tutorials but none of them are compatible with QWebEngineView.
There are very few posts about bookmarks and tabs for PySide/Qt WebEngineView,
and none of them work.
So I'm forced to ask Stack Overflow |
Populate a multidimensional array while looping |
|php|loops|multidimensional-array| |
null |
So, I wanted to make a website to change the font of the text that you enter, basically like lingojam.com.
I did make it, but for some reason only some fonts are working and I have no idea why. Can anyone help? Sorry for the bad code, I am new to JavaScript, Svelte, HTML, etc.
```javascript
let fonts = {
  1: {
    abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ:
      "ᴀʙᴄᴅᴇғɢʜɪᴊᴋʟᴍɴᴏᴘǫʀsᴛᴜᴠᴡxʏᴢᴀʙᴄᴅᴇғɢʜɪᴊᴋʟᴍɴᴏᴘǫʀsᴛᴜᴠᴡxʏᴢ",
  },
  2: {
    abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ:
      "ᵃᵇᶜᵈᵉᶠᵍʰᶦʲᵏˡᵐⁿᵒᵖᵠʳˢᵗᵘᵛʷˣʸᶻᵃᵇᶜᵈᵉᶠᵍʰᶦʲᵏˡᵐⁿᵒᵖᵠʳˢᵗᵘᵛʷˣʸᶻ",
  },
  3: {
    abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ:
      "ℂℍℕℙℚℝℤ",
  },
  4: {
    abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ:
      "",
  },
  5: {
    abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ:
      "",
  },
  6: {
    abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ:
      "",
  },
  7: {
    abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ:
      "",
  },
};

function changeFont(text) {
  let output = "";
  for (const font in fonts) {
    const alphabet = Object.keys(fonts[font])[0];
    const transformedText = text
      .split("\n")
      .map((line) =>
        line
          .split("")
          .map((char) =>
            alphabet.includes(char)
              ? fonts[font][alphabet][alphabet.indexOf(char)]
              : char
          )
          .join("")
      )
      .join("\n");
    output += transformedText + "\n\n";
  }
  return output;
}

let newText = "Feliz, Navidad.";
let orignalText = "Feliz, Navidad.";

function changeInputText() {
  newText = changeFont(orignalText);
  console.log(newText);
}
```
The output it gives is this:
```
ғᴇʟɪᴢ, ɴᴀᴠɪᴅᴀᴅ.
ᶠᵉˡᶦᶻ, ⁿᵃᵛᶦᵈᵃᵈ.
�, �.
�, �.
�, �.
�, �.
�, �.
```
As you can see, the first two lines are correct but the others are weird and look corrupted.
It's self-explanatory: I expected the converter to convert the text into different fonts correctly, but instead only 2 fonts were correct and the others looked weird and corrupted. |
I got it to work!
Go to https://build.docker.com/accounts/YourAccountName/builders
Define a new builder there.
Then try the create again with the name of the builder you just defined!
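That `create` step can be sketched roughly like this (`mycompany/mybuilder` is a placeholder for whatever name you gave the builder in the Build Cloud UI):

```shell
# Create a local buildx builder backed by the cloud builder you defined.
docker buildx create --driver cloud mycompany/mybuilder

# The resulting local builder is named "cloud-mycompany-mybuilder"; use it to build.
docker buildx build --builder cloud-mycompany-mybuilder -t myimage:latest .
```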
If you have problems, https://build.docker.com might be a better starting point.
You just need to define a builder on the cloud before you can docker buildx create it |
We are looking at building a solution where multiple users are viewing rows in a database in different ways and some of those users are updating columns in those rows so that another group of users see their updates.
The solution we used to use is Excel on a MS Sharepoint website. Each row in the Excel document represented rows in the database, and whenever a user wanted to update a column in a row, they would change a cell in the Excel document and that would be almost immediately broadcast to any other user also viewing that Sharepoint Excel document.
The problem arose when too many people were viewing the document, and the security level on an Excel document is practically non-existent.
What other solutions are out there besides:
a) SqlServer database with website front end
b) SqlServer database with fat pc client front end
c) Cloud db (like Snowflake) with some other front end?
|
How could I add a bookmark and tabs function in PySide6? |
|python|browser|pyside6|qwebengineview| |
null |
I was looking into this today and the answer from @gajam is correct.
You can double-check the database you are using and verify that the authentication type (i.e. on the MySQL server) for <user/root> is not set to "sha256_password" or "caching_sha2_password". You can check via MySQL Workbench > Users and Privileges.
If this is set and you are using pyMySQL to connect then you will have to install the RSA option (as commented by @gajam)
Note that I was also using SQLAlchemy when I stumbled upon this issue.
|
    xy = Score(list(cox_ph), formula = Surv(time, status) ~ 1, data = cox.train,
               metrics = c("brier", "auc"), null.model = FALSE,
               times = time.int, debug = TRUE)
Extracted test set and prepared output object
Trained the model(s) and extracted the predictions
merged the weights with input for performance metrics
added weights to predictions
Error in sindex(ind.controls[controls.index1 | controls.index2], 0) :
missing data
I have tried debugging but it's not giving the exact reason |
I am trying to find the Brier score of survival data using the riskRegression package, but it's giving an error |
|r|auc|cox-regression|survival| |
null |
Here's one approach:
* Use [`pd.json_normalize`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.json_normalize.html) with 'type' as metadata (include the same levels except 'prices').
```python
df = pd.json_normalize(red,
record_path=['data', 'events', 'markets', 'outcomes', 'prices'],
meta=[['data', 'events', 'markets', 'outcomes', 'type']]
)
df.head()
numerator denominator decimal displayOrder priceType handicapLow \
0 13 4 4.25 1 LP None
1 63 100 1.63 1 LP None
2 4 1 5.00 1 LP None
3 11 10 2.10 1 LP None
4 13 20 1.65 1 LP None
handicapHigh data.events.markets.outcomes.type
0 None MR
1 None MR
2 None MR
3 None --
4 None --
```
* Now, use [`df.loc`](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.loc.html) with [`Series.eq`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.eq.html) to get back all the values from column 'decimal' where our meta column equals 'MR'.
```python
meta_col = 'data.events.markets.outcomes.type'
decimals = df.loc[df[meta_col].eq('MR'), 'decimal']
decimals.head(5)
0 4.25
1 1.63
2 5.00
20 3.40
21 2.40
Name: decimal, dtype: float64
```
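The two steps can be reproduced offline against a tiny stand-in payload (the nested dict below is a hypothetical abbreviation of the real API response, keeping only the keys used here):

```python
import pandas as pd

# Hypothetical, minimal stand-in for the real API response structure.
red = {"data": {"events": [{"markets": [{"outcomes": [
    {"type": "MR", "prices": [{"decimal": 4.25}, {"decimal": 1.63}]},
    {"type": "--", "prices": [{"decimal": 2.10}]},
]}]}]}}

# Flatten the 'prices' records and carry 'type' along as metadata.
df = pd.json_normalize(
    red,
    record_path=["data", "events", "markets", "outcomes", "prices"],
    meta=[["data", "events", "markets", "outcomes", "type"]],
)

# Keep only the decimals whose outcome type is 'MR'.
decimals = df.loc[df["data.events.markets.outcomes.type"].eq("MR"), "decimal"]
print(decimals.tolist())
```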
Here the index values (`0, 1, 2, 20, 21`) refer to the rows where 'type' equals 'MR'. |
At startup it gives error 500:
Error: Element type is invalid: expected a string (for built-in components) or a class/function (for composite components) but got: object. You probably forgot to export your component from the file it's defined in, or you might have mixed up default and named imports.
Git of the project: https://github.com/MelnikVladyslav/ProxySeller
Tried to see if Webpack is configured correctly |
Error: Element type is invalid: expected a string or a class/function but got: object. Error 500 in React |
|reactjs|webpack|server-side-rendering| |
null |
Any recommendation on the best way to change the partition schema location in PostgreSQL? The statement to change it is:
    ALTER TABLE tablename SET SCHEMA schemaname;
Is there any pre-work to do before running the schema change script?
Thanks |
Update partition schema name |
|postgresql| |
Using the idea in [What do the flags in /proc/cpuinfo mean?](https://unix.stackexchange.com/q/43539/44425). Unfortunately there's no source code in Linux commenting on the ARM64 feature names like there is for x86, so I had to use the version in [golang][1]'s [cpu][2] [package][3]:
```bash
ARM64_FEATURES="$(wget -qO- https://github.com/golang/sys/raw/master/cpu/cpu.go \
| awk '/ARM64/,/}/')"
for feature in $(grep "^Features" /proc/cpuinfo | sort -u | cut -d":" -f2); do
printf "${feature}\t"
echo "$ARM64_FEATURES" | grep -i "Has${feature}\s" | sed 's#.* // ##'
done | column -t -s $'\t'
```
Please also check [the above question][4] for details, because as mentioned there, not all flags are printed to the `Features` line on ARM as each manufacturer may have their own extensions
Sample output for your case
```
fp Floating-point instruction set (always available)
asimd Advanced SIMD (always available)
evtstrm Event stream support
aes AES hardware implementation
pmull Polynomial multiplication instruction set
sha1 SHA1 hardware implementation
sha2 SHA2 hardware implementation
crc32 CRC32 hardware implementation
atomics Atomic memory operation instruction set
fphp Half precision floating-point instruction set
asimdhp Advanced SIMD half precision instruction set
cpuid CPUID identification scheme registers
asimdrdm Rounding double multiply add/subtract instruction set
jscvt Javascript conversion from floating-point to integer
fcma Floating-point multiplication and addition of complex numbers
dcpop Persistent memory support
asimddp Advanced SIMD double precision instruction set
asimdfhm Advanced SIMD multiplication FP16 to FP32
```
[1]: https://pkg.go.dev/golang.org/x/sys/cpu
[2]: https://cs.opensource.google/go/x/sys/+/master:cpu/cpu.go
[3]: https://github.com/golang/sys/blob/master/cpu/cpu.go
[4]: https://unix.stackexchange.com/a/43563/44425 |
After installing `dotnet-ef` and adding it to VS Code, I entered this command in the terminal.
> dotnet ef migrations add Initial
Then I got this output:
> Could not execute because the specified command or file was not found. Possible reasons for this include: \* You misspelled a built-in dotnet command. \* You intended to execute a .NET program, but dotnet-ef does not exist. \* You intended to run a global tool, but a dotnet-prefixed executable with this name could not be found on the PATH.
I've verified that dotnet ef has been added to the executable path.
I've tried re-installing, and everything else.
It simply will not recognize dotnet ef. |
wp-now can't find SQLite |
|wordpress| |
I had a similar issue and am now using this Python package to solve it. https://github.com/emileindik/slosh
```bash
$ pip install slosh
$ slosh ubuntu@1.1.1.1 --save-as myserver
```
This will perform the desired ssh connection and also create an entry in your SSH config file that looks like
```
Host=myserver
HostName=1.1.1.1
User=ubuntu
```
The same entry can be updated by using the same alias name. For example, adding a .pem file to the connection:
```bash
$ slosh -i ~/.ssh/mykey.pem ubuntu@1.1.1.1 --save-as myserver
```
It currently supports a number of `ssh` options, but let me know if there are additional options that should be added!
[disclaimer] I created `slosh` |
I'm deleting your post because this seems like a programming-specific question, rather than a conversation starter. With more detail added, this may be better as a Question rather than a Discussions post. Please see [this page](https://stackoverflow.com/help/how-to-ask) for help on asking a question on Stack Overflow. If you are interested in starting a more general conversation about how to approach a technical issue or concept, feel free to make another Discussion post. |
null |
I'm trying to create a Homebrew formula for a Python project.
Here's the Homebrew formula:
```ruby
class Scanman < Formula
include Language::Python::Virtualenv
desc "Using LLMs to interact with man pages"
url "https://github.com/nikhilkmr300/scanman/archive/refs/tags/1.0.1.tar.gz"
sha256 "93658e02082e9045b8a49628e7eec2e9463cb72b0e0e9f5040ff5d69f0ba06c8"
depends_on "python@3.11"
def install
virtualenv_install_with_resources
bin.install "scanman"
end
test do
# Simply run the program
system "#{bin}/scanman"
end
end
```
Upon running the application with the installed scanman version, it fails to locate my custom modules housed within the src directory.
```bash
ModuleNotFoundError: No module named 'src'
```
Any insights into why this is happening?
Here's my directory structure if that helps:
```bash
.
├── requirements.txt
├── scanman
├── scanman.rb
├── setup.py
└── src
├── __init__.py
├── cli.py
├── commands.py
├── manpage.py
├── rag.py
└── state.py
```
The main executable is `scanman`.
It's worth noting the following:
* When I run the local version of `scanman` from my repository, it works absolutely fine.
* Other 3rd party packages installed from PyPI don't throw any error. I can't find them in `/usr/local/Cellar/scanman/1.0.1/libexec/lib/python3.11/site-packages/`, however. |
Homebrew formula for Python project |
|python|homebrew| |
null |
I have a react native app with an android native java module that accesses my local Google Fit healthstore using the Java Google Fit API:
    DataReadRequest readRequest = new DataReadRequest.Builder()
            .enableServerQueries()
            .aggregate(DataType.AGGREGATE_STEP_COUNT_DELTA)
            .bucketByTime(interval, TimeUnit.SECONDS)
            .setTimeRange(start, end, TimeUnit.MILLISECONDS)
            .build();

    Fitness.getHistoryClient(getReactContext(), getGoogleAccount())
            .readData(readRequest)
            .addOnSuccessListener(response -> {
                for (Bucket bucket : response.getBuckets()) {
                    for (DataSet dataSet : bucket.getDataSets()) {
                        readDataSet(dataSet);
                    }
                }
                try {
                    getCallback().onComplete(getReport().toMap());
                } catch (JSONException e) {
                    getCallback().onFailure(e);
                }
            })
            .addOnFailureListener(e -> getCallback().onFailure(e));
My problem is that for some `start` and `end` intervals for a particular user, the code gets stuck in the `HistoryClient`'s `.readData(readRequest)`, never resolving to the `onSuccessListener` or `onFailureListener` callbacks. In one particular case, to correct this, I can vary the `start` or `end` date to reduce the range, and suddenly the history client returns a data response. There doesn't seem to be any pattern of this bug relative to the `start` and `end` of the `readRequest`. In this case, the range was only over a week or so.
I initially thought that some data samples in Google Fit may be corrupt, thus reducing the range of the request would miss these samples, hence explaining why it may suddenly work by tinkering with `start` and `end`. However, by repositioning the `start` and `end` to explicitly cover these suspected samples, Google Fit works normally and a response is returned. I can timeout the async call using a `CompletableFuture`, therefore I know there is a `.readData` thread spinning in there somewhere! No exception is thrown.
I have set up all relevant read permissions in my google account's oAuth credentials - I can verify in my user account settings that the connected app indeed has these health data read permissions. The scope I request in the native code is
DataType.AGGREGATE_STEP_COUNT_DELTA, FitnessOptions.ACCESS_READ
and I am using
'com.google.android.gms:play-services-fitness:21.1.0'
'com.google.android.gms:play-services-auth:21.0.0'
in my android build file. I have noticed the problem for both `react-native 0.65.3` (android `targetSdkVersion 31`, `compileSdkVersion 31`) and `react-native 0.73.2` (android `targetSdkVersion 34`, `compileSdkVersion 34`).
Are there any further steps I can take to diagnose the bug? When viewing the date range in Google Fit app, I see no problem and the step counts are there. |
{"OriginalQuestionIds":[30720665],"Voters":[{"Id":5577765,"DisplayName":"Rabbid76","BindingReason":{"GoldTagBadge":"pygame"}}]} |
I've been working on a project where I need to access data from api's. In these api's are values named 'decimal' which I need to access. I have no problem accessing and displaying the decimal value itself, but when I try to only display decimal value based on another value within the json I seem to struggle. My code currently looks like this:
```python
import pandas as pd
import requests as r
api = 'https://content.toto.nl/content-service/api/v1/q/event-list?startTimeFrom=2024-03-31T22%3A00%3A00Z&startTimeTo=2024-04-01T21%3A59%3A59Z&started=false&maxMarkets=10&orderMarketsBy=displayOrder&marketSortsIncluded=--%2CCS%2CDC%2CDN%2CHH%2CHL%2CMH%2CMR%2CWH&marketGroupTypesIncluded=CUSTOM_GROUP%2CDOUBLE_CHANCE%2CDRAW_NO_BET%2CMATCH_RESULT%2CMATCH_WINNER%2CMONEYLINE%2CROLLING_SPREAD%2CROLLING_TOTAL%2CSTATIC_SPREAD%2CSTATIC_TOTAL&eventSortsIncluded=MTCH&includeChildMarkets=true&prioritisePrimaryMarkets=true&includeCommentary=true&includeMedia=true&drilldownTagIds=691&excludeDrilldownTagIds=7291%2C7294%2C7300%2C7303%2C7306'
re = r.get(api)
red = re.json()
#Finding the 'type' variable within the Json file
match_result = pd.json_normalize(red, record_path=['data', 'events', 'markets', 'outcomes'])
match_result = match_result['type']
#For every type variable within the file we check if type == 'MR'
for type in match_result:
    if type == 'MR':
        # If the type == 'MR' I want to print the decimal belonging to that
        # type value but this is where I'm doing something wrong
        decimal = pd.json_normalize(match_result, record_path=['prices'])
        decimal = decimal['decimal']
        print(decimal)
    else:
        pass
```
I've been looking all over YouTube and Stack Overflow to find what I'm doing wrong but can't seem to figure it out. Also important to note is that in the variable 'match_result' I access the type by using the record_path, but for the 'decimal' variable I need to go further into the record path by using 'prices' in the for loop. This is where I think I'm doing it wrong, but I still have no idea what it is.
The Json file from which I want to get the data looks something like this:
"type": "MR",
"subType": "D",
#Some more data I wont need....
"lateVoid": false,
"outcomeScore": null,
"prices": [
{
"numerator": 13,
"denominator": 4,
"decimal": 4.25,
"displayOrder": 1,
"priceType": "LP",
"handicapLow": null,
"handicapHigh": null
|
```ts
import { A } from "https://example.com/type.d.ts";
type B = A | A
```
[Playground](https://www.typescriptlang.org/play?#code/JYWwDg9gTgLgBAbzgQTgXzgMyhEcBEAFjDGAM4BcA9FQKYAeAhuADa0B0AxrlTAJ5gOAE3Ywy+ANwAofoLgAhOAF4UcAD4ogA)
I understand that `A` may not be cached already, but shouldn't `B` have type `A` instead of `any`? |
Why does type union return any? |
|typescript| |
It shows the available space where the element can be expanded.
**Example:**
This is an example with just one character. You can see the dashed area after the text: that is the area into which the text can still expand.
[![enter image description here][1]][1]
Now let's add more characters in our example. It can be seen that the length of *the purple dashed line area is decreased*:
[![enter image description here][2]][2]
It is possible to run the following code snippet and open dev tools to see that purple dashed area:
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-css -->
    *, html, body {
      box-sizing: border-box;
      margin: 0;
    }

    div {
      position: relative;
      background-color: lightgreen;
    }

    button {
      display: flex;
      width: 100px;
    }
<!-- language: lang-html -->
    <div>
      <button>1</button>
    </div>
<!-- end snippet -->
[1]: https://i.stack.imgur.com/VtH2S.png
[2]: https://i.stack.imgur.com/9TtN1.png |
Is there a solution to accommodate multiple users making updates to the same database? |
|database|concurrency| |
```
class A;
  function int foo();
    int a;
    return ++a;
  endfunction
endclass

program tb;
  A a = new;
  int b, c;

  initial begin
    for (int i = 0; i < 10; i++) begin
      b = a.foo();
      c = foo(); // Calls a different foo() outside of the class A
      $display("B = %0d", b);
      $display("C = %0d", c);
    end
  end

  function int foo();
    int a;
    return ++a;
  endfunction
endprogram
```
I thought that since `a` gets initialised on every function call, both B and C would be 1 for the entire for loop. But I ran this on EDA Playground and it printed this result.
```
B = 1
C = 1
B = 1
C = 2
B = 1
C = 3
B = 1
C = 4
B = 1
C = 5
B = 1
C = 6
B = 1
C = 7
B = 1
C = 8
B = 1
C = 9
B = 1
C = 10
$finish at simulation time 0
V C S S i m u l a t i o n R e p o r t
```
Can someone explain why the value of C keeps incrementing here? |
What would be the correct values of B and C from this code? |
|verilog|system-verilog| |
## Dependency unavailable for Python 3.11 or later
Although the `pyocd` package is technically `noarch` (i.e., independent of Python version), it depends on `cmsis-pack-manager`, which has [failed to be rebuilt for Python 3.11+](https://github.com/conda-forge/cmsis-pack-manager-feedstock/pull/3). It would encourage the maintainers if you commented on that Pull Request to voice your desire that this be built. Or, if you have insight into why it fails, contribute a PR.
Otherwise, you'll need to install the package in a Python 3.10 or earlier environment instead. For example,
```bash
conda create -n py310 -c conda-forge python=3.10 pyocd
```
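To check whether a given interpreter can host `pyocd` at all, here is a quick sketch (the helper name is mine, and the 3.10 ceiling reflects the build status at the time of writing):

```python
import sys

def pyocd_installable(version_info=sys.version_info):
    """True when the interpreter is old enough for the cmsis-pack-manager builds."""
    # conda-forge currently only ships cmsis-pack-manager for Python <= 3.10
    return tuple(version_info[:2]) <= (3, 10)

print(pyocd_installable())
```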
|
I have the following code exported as CommonService.tsx and called as **COMMON.redirect('/');**
import { useNavigate} from "react-router-dom";
export function Redirect(route: string) {
const navigate = useNavigate();
return navigate(route);
};
const COMMON = {
redirect: Redirect,
};
export default COMMON;
But after calling **COMMON.redirect('/');** I get the following exception:
Invalid hook call. Hooks can only be called inside of the body of a function component. This could happen for one of the following reasons:
1. You might have mismatching versions of React and the renderer (such as React DOM)
2. You might be breaking the Rules of Hooks
What is the right way to use **useNavigate** in a shared service? (I would like to call one method instead of importing useNavigate into several components.) |
React Invalid hook call on the useNavigate in exported Service |
|reactjs|react-hooks|react-router| |
This is an alternative version that I prefer, which uses fewer window functions; the downside is slightly more complicated code:
```sql
DECLARE @YourTable TABLE ([Split_Name_In_Heading] varchar(50),[ID] int)
INSERT INTO @YourTable
VALUES ('Central Scheduling (cont''d)',171081)
,(NULL,171083)
,(NULL,171088)
,(NULL,171091)
,(NULL,171094)
,(NULL,171097)
,('Other Item (cont''d)',181081)
,(NULL,181083);
SELECT *
, STUFF(
MAX(
RIGHT(REPLICATE('0', 20) + CAST(id AS VARCHAR(20)), 20) + Split_Name_In_Heading
) OVER(ORDER BY ID ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
, 1, 20, '') AS GroupedHeading
FROM @YourTable
```
Explanation:
We want to take the maximum value, which lets us remove the NULLs, but it has to be ordered by ID. For this reason, I create a padded string which consists of ID and header, something like:
| Split_Name_In_Heading | Padded string |
| ----- | ----- |
| Central Scheduling (cont'd) | 00000000000000171081Central Scheduling (cont'd) |
| Other Item (cont'd) | 00000000000000181081Other Item (cont'd) |
The `RIGHT(REPLICATE('0', 20) + CAST(id AS VARCHAR(20)), 20)` padding takes care of integers with different lengths.
Finally, we're ready to get the maximum value by using the `MAX(...) OVER()` window function. But this gets us the padded string, so `STUFF(..., 1, 20, '')` removes the padding.
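For anyone who wants to poke at the mechanics outside SQL, here is a tiny Python sketch of the same trick (purely illustrative, not part of the query): pad the ID, prepend it to the heading, keep a running maximum, then strip the 20-character prefix.

```python
rows = [
    (171081, "Central Scheduling (cont'd)"),
    (171083, None),
    (181081, "Other Item (cont'd)"),
    (181083, None),
]

running_max = ""  # plays the role of MAX(...) OVER (ORDER BY ID ... CURRENT ROW)
result = []
for id_, heading in rows:
    if heading is not None:  # MAX ignores NULLs, so only non-NULL rows update it
        running_max = max(running_max, str(id_).rjust(20, "0") + heading)
    result.append((id_, running_max[20:]))  # STUFF(..., 1, 20, '') strips the pad

print(result)
```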
Final output:
| Split_Name_In_Heading | ID | GroupedHeading |
| ----- | ----- | ----- |
| Central Scheduling (cont'd) | 171081 | Central Scheduling (cont'd) |
| NULL | 171083 | Central Scheduling (cont'd) |
| NULL | 171088 | Central Scheduling (cont'd) |
| NULL | 171091 | Central Scheduling (cont'd) |
| NULL | 171094 | Central Scheduling (cont'd) |
| NULL | 171097 | Central Scheduling (cont'd) |
| Other Item (cont'd) | 181081 | Other Item (cont'd) |
| NULL | 181083 | Other Item (cont'd) |
|
This should work. Your `FIELDS`, `SIZE`, `TYPE` and `COUNT` entries do not match up.
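Before writing the header out, it can help to sanity-check that those four lines agree in length; a minimal Python sketch (the helper name is mine, not part of any PCD library):

```python
def pcd_counts_match(header: str) -> bool:
    """True when FIELDS, SIZE, TYPE and COUNT list the same number of entries."""
    lengths = set()
    for line in header.splitlines():
        parts = line.split()
        if parts and parts[0] in ("FIELDS", "SIZE", "TYPE", "COUNT"):
            lengths.add(len(parts) - 1)
    return len(lengths) == 1

good = "FIELDS x y z data\nSIZE 4 4 4 4\nTYPE F F F U\nCOUNT 1 1 1 1"
bad = "FIELDS x y z data\nSIZE 4 4 4\nTYPE F F F\nCOUNT 1 1 1"
print(pcd_counts_match(good), pcd_counts_match(bad))
```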
header = """# .PCD v.7 - Point Cloud Data file format
VERSION .7
FIELDS x y z data
SIZE 4 4 4 4
TYPE F F F U
COUNT 1 1 1 1
WIDTH 0
HEIGHT 1
VIEWPOINT 0 0 0 1 0 0 0
POINTS 0
DATA binary
""" |
In EF terms it seems like it could just be:
var d = db.Users
.OrderByDescending(u => u.Xp)
.AsEnumerable()
.Select((u, i) => new { Rank=i+1, User = u })
.ToDictionary(at => at.Rank, at=>at.User)
You list all your users out of the DB in order of XP, and use the overload of `Select((user, index) => ...)` to assign the rank (the index + 1); for convenience, let's put them in a dictionary indexed by rank.
Now you have a dictionary where, for any given user you can get the surrounding users by the rank of the user you already know:
var user = d.Values.FirstOrDefault(u => u.Name == "user6");
var prev = d[user.Rank - 1];
var next = d[user.Rank + 1];
You just need some code to prevent a crash when you hit the top- or bottom-ranked user (look at `TryGetValue`). |
I am using Java Spring Boot for my project and I have the following controller:
@AllArgsConstructor
@RestController
@RequestMapping("/api/subject")
public class SubjectController {
private SubjectService subjectService;
@PostMapping
public void createSubject(@RequestBody SubjectCreationDTO subjectCreationDTO) {
LoggingController.getLogger().info(subjectCreationDTO.getTitle());
// subjectService.createSubject(subjectCreationDTO);
}
}
And SubjectCreationDTO:
@AllArgsConstructor
@Getter
@Setter
public class SubjectCreationDTO {
private String title;
}
So I get this error when making a POST request:
> JSON parse error: Cannot construct instance of
> `pweb.examhelper.dto.subject.SubjectCreationDTO` (although at least
> one Creator exists): cannot deserialize from Object value (no
> delegate- or property-based Creator)"
I can solve this error by adding @NoArgsConstructor to SubjectCreationDTO, but why is this necessary, when in other cases I have almost exactly the same setup?
@PostMapping
public ResponseEntity<StudentDTO> createStudent(@RequestBody StudentCreationDTO studentCreationDTO) {
StudentDTO savedStudent = studentService.createStudent(studentCreationDTO);
return new ResponseEntity<>(savedStudent, HttpStatus.CREATED);
}
and this is the StudentCreationDTO class:
@AllArgsConstructor
@Getter
@Setter
public class StudentCreationDTO {
private String username;
private String firstName;
private String lastName;
private String email;
}
I have figured out that when the class has more than one field, you do not have to specify @NoArgsConstructor and the Jackson library can parse the input JSON from the body just fine. My question is why it has this behavior: why can it not parse when the class has only one field and no default constructor, but it can when there are multiple fields? |
Jackson Library for controllers in Java Spring |
|java|spring|spring-mvc|controller|jackson| |
|algorithm| |
Why is Entity Framework Core not being recognised when it has been added to the executable path? |
|.net|visual-studio|entity-framework| |
null |
I am trying to retrieve data from Oracle Sequence using JPA as below:
```java
@Repository
public interface SequenceRepository extends JpaRepository<MyEntity, Long> {
@Query(value = "SELECT %s.NEXTVAL FROM dual", nativeQuery = true)
Long getSequenceValue(String sequenceName);
}
```
Calling the above method from the service class as:
Long nextVal = sequenceRepository.getSequenceValue("MASTER.TestSequence");
While doing so, I am getting the error:
>could not extract Resultset; nested exception is oracle.hibernate.exception.SQLGrammarException
>
>oracle.jdbc.OracleDatabaseException: ORA-00911: invalid character
How can I pass the sequence name as an argument to @Query, and would it cause SQL injection?
|
null |
I use Vue 3, Nuxt 3 and Bootstrap 5. I need to render an iframe dynamically, but the iframe is not shown; there is no src attribute at all.
//template
```
<div>
<iframe
:src="currentDoc.src"
allowfullscreen="true"
frameborder="0"
scrolling="no"
width="560"
height="400"
:title="currentDoc.title">
</iframe>
</div>
```
/// <script>
```
import Docs from '@/.../docs';
export default {
data() {
return {
docs: Docs,
currentDoc: {},
}
},
mounted() {
this.id = this.$route.params.id;
this.currentDoc = this.docs.find(doc => doc.id == this.id);
}
}
``` |
How to implement dynamic src for iFrame in Nuxt 3 |
|iframe|vuejs3|nuxt3| |
You can install almost any application/package with brew on a Mac. If you want to know the exact command, just search for your package on [https://brewinstall.org][1] and you will get the set of commands needed to install that package.
First, open a terminal and install brew:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
Now Install jq
brew install jq
[1]: https://brewinstall.org |
```ts
const MyComponent = (): JSX.Element => {
const handleRef = (ref: HTMLDivElement | null) => {
console.log('success');
};
return (
<div ref={handleRef} />
);
};
```
I guess it should work like this; the `ref` prop also accepts a plain callback function.
For the hook, try it like this:
```ts
import { MutableRefObject, useEffect } from 'react';

export default function useIntersection<U extends HTMLElement>(
  ref: MutableRefObject<HTMLDivElement | null>,
  nodeCallback: () => U | null
) {
  useEffect(() => {
    function callback([entry]: IntersectionObserverEntry[]) {
      if (ref.current) {
        ref.current.style.opacity = entry.isIntersecting ? '0' : '';
      }
    }
    // Observe the element relative to the root node returned by the callback
    const observer = new IntersectionObserver(callback, {
      root: nodeCallback()
    });
    if (ref.current) observer.observe(ref.current);
    return () => observer.disconnect();
  }, [ref]);
}
```
And this is the component, with the ref available both inside and outside:
```
import { forwardRef, useImperativeHandle, useRef } from 'react';
const DownArrow = require('@site/static/img/down-arrow.svg').default;
import styles from './index.module.css';
import useIntersection from '@site/src/hooks/useIntersection';

export default forwardRef<HTMLDivElement, {}>(function MoreIcon({}, ref): JSX.Element {
  const innerRef = useRef<HTMLDivElement | null>(null);
  useImperativeHandle(ref, () => innerRef.current);
  useIntersection(innerRef, () => document.querySelector('footer'));
  return (
    <div className={styles.arrow} ref={innerRef}>
      <DownArrow />
    </div>
  );
});
```
|
I am attempting to make a depth-distribution chart of abundance and biomass. The chart is centered around zero with abundance essentially acting as the 'negative' values and biomass essentially acting as the 'positive' values. Similar to what is happening [here](https://i.stack.imgur.com/98NYE.png). I have this for two seasons and used a facet wrap to have the two charts all on one page. However, I am running into issues when I want to center the graph around zero while **still having the bottom x-axis match up**.
Essentially, my main issue is that this all happens in a for loop, as I am doing this for over 200 different species, so I cannot just individually specify the x-axis. In order to have the graphs centered around zero, I scaled each line item so the biomass or abundance values scale up to the maximum value. That worked great, and I was even able to get the original values placed next to each geom_rect so it shows the right values. However, since I was scaling up, the x-axis at the bottom has now changed.
I was wondering if there is a way to get the x-axis to show the original (and therefore actual) values while maintaining the scaling I did?
Below is my code. One thing to note is that I have it set as one individual species right now just so I could view the ggplot but the for loop is working fine otherwise!
```
value <- subset(df_combined, LTU == c("Genus sp."))
xmin <- abs(min(value$value))
xmax <- abs(max(value$value))
scale <- ifelse(xmax>xmin,
xmax/xmin,
xmin/xmax)
scale2=xmax/xmin
value <- value %>%
mutate(value2 = case_when(
scale2 > 1 & strip == "Abundance" ~ value * scale,
scale2 < 1 & strip == "Biomass" ~ value * scale,
TRUE ~ value
))
p<- ggplot(data = value) +
geom_rect(mapping = aes(xmin = 0, xmax = value2,
ymin = Min.Depth, ymax = Max.Depth,
fill = factor(side))) +
geom_text(mapping = aes(x = value2,
y = (Min.Depth + Max.Depth)/2,
hjust = 1 - (side > 0),
label = paste0('(', round(abs(value), digits=2), ')'))) +
geom_vline(xintercept = 0, linewidth = 2) +
scale_y_continuous(breaks = c(0, 70, 450, 700, 1000, 1500),
labels = c('0', '70', '450', '700', '1000', '1500'),
expand = expansion(mult = c(0, 0)),
trans = 'reverse') +
scale_x_continuous(labels=abs, position="bottom", expand = c(0.2,0))+
coord_cartesian(ylim=c(1500,0)) +
facet_wrap(strip ~ ., scales = "free_x",
labeller = label_parsed) +
facet_grid(cols = vars(Season)) +
labs(y = 'Depth (m)') +
scale_fill_manual(values = c('black', 'grey'),
breaks=c(-1, 1),
labels=c("Abundance (n/10,000 m^3)", "Biomass (g/10,000 m^3)")) +
theme(panel.grid.minor = element_blank(),
panel.grid.major.x = element_blank(),
panel.grid.major.y= element_line(colour = "#AAAAAA", linewidth = 1,
linetype = "dashed"),
panel.background = element_blank(),
panel.border=element_rect(color="black", fill=NA),
# panel.spacing = unit(0, "lines"),
plot.tag = element_text(hjust = 0, vjust = 0, face = 'bold'),
plot.tag.position = c(0, 0),
axis.title.x.bottom = element_blank(),
axis.ticks.y = element_blank(),
axis.ticks.length.x = unit(10, 'pt'),
axis.line.x = element_line(),
#strip.placement = 'outside',
#strip.background = element_blank(),
legend.position = 'right',
text = element_text(size = 10)) +
guides(fill= guide_legend(title = "Legend")) +
labs(y = "Depth (m)")
```
And [here](https://i.stack.imgur.com/xfM5u.jpg) is the output I am getting! Note that the values are stated on the chart but the x-axis at the bottom doesn't match up for the 'Biomass' side of the graph. Values are noted so you can remake the dataframe but just let me know if you need anything else!
Thank you for any and all help! |
How to keep original x-axis after utilizing a custom scaling for the geom_rect in ggplot? |
|r|ggplot2|rstudio|scale|x-axis| |
null |
I’m deleting this post because Discussions should not be used to draw more attention to existing Questions on Stack Overflow. This Discussions space is intended for more general conversations about technical concepts, including subjective opinions (see the [Discussions guidelines](https://stackoverflow.com/help/discussions-guidelines)). If you have an idea for something that would be interesting to discuss feel free to make a new Discussion post. |
null |
{"OriginalQuestionIds":[13606075],"Voters":[{"Id":3440745,"DisplayName":"Tsyvarev"},{"Id":2511795,"DisplayName":"0andriy"},{"Id":839601,"DisplayName":"gnat"}]} |
```
$('.select2-select').select2({
maximumSelectionSize: 1
}).on('select2-opening', function(e) {
if ($(this).select2('val').length > 0) {
e.preventDefault();
}
});
```
The above code works well with old versions of select2. It only allows users to select at most one item, and once an item is selected, the dropdown won't be shown, thanks to the customized 'select2-opening' handler.
However, with newer versions of select2, this code doesn't work: 'select2-opening' has been renamed to 'select2:opening', and the code should be
```
$('.select2-select').select2({
maximumSelectionSize: 1
}).on('select2:opening', function(e) {
if ($(this).select2('val').length > 0) {
e.preventDefault();
}
});
``` |
I went back to an old project, a Windows Forms app on .NET Framework whose compiled version works fine. I've been trying to run this project, but I got the error:
`System.MissingMethodException: No parameterless constructor defined for this object.`
I'm using Microsoft.Extensions.DependencyInjection. Obviously all needed services are registered and all of them have an appropriate constructor (all with parameters), and as I mentioned before, this project worked without any problem a few months ago.
Unfortunately I can't insert the original code but these are normal type classes like:
```
public class SomethingService : ISomethingService
{
private readonly IRepository repository;
private readonly ILogger<ISomethingService> logger;
public SomethingService(IRepository repository,
ILogger<ISomethingService> logger)
{
this.repository = repository;
this.logger = logger;
}
}
```
Registered like:
`services.AddTransient<ISomethingService, SomethingService>();`
I've reinstalled all NuGet packages, tried re-defining constructors, changed the order in which certain dependencies are resolved, and moved the project to other environments; it's always the same story. |
I’m deleting this post because Discussions should not be used to draw more attention to existing Questions on Stack Overflow. This Discussions space is intended for more general conversations about technical concepts, including subjective opinions (see the [Discussions guidelines](https://stackoverflow.com/help/discussions-guidelines)). If you have an idea for something that would be interesting to discuss feel free to make a new Discussion post. |
***The problem is that the key "4255" always refreshes when I type it in the console and it never expires. Also, how can I set the expiration date for many keys and not only one?
I want the key to expire in 1 minute and store that information somewhere so it doesn't refresh every time I reuse the key.***
```
import random
from datetime import datetime, timedelta

def generate_key():
    key = str("4255")
    expiration_time = datetime.now() + timedelta(days=0, minutes=1)  # Key expires in 1 minute
    return key, expiration_time

def is_expired(expiration_time):
    return datetime.now() > expiration_time

correct_key, expiration_time = generate_key()
print("# Generated key", correct_key)
user_input = input("Enter your license Key:")

if expiration_time == timedelta:
    print("your425 Key expired", (expiration_time))

if user_input == correct_key:
    print("Successfully Loaded")

if user_input == correct_key:
    print("Expiration:", (expiration_time))
else:
    print("Key Expired")

input("Press any Key to close")
```
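For illustration only, here is a minimal sketch of persisting each key's expiration to a JSON file so a restart does not refresh it (the file name and helper names are invented, not from the post):

```python
import json
import os
from datetime import datetime, timedelta

STORE = "key_store.json"  # hypothetical file name

def _load():
    # Read the stored expirations, or start fresh if the file does not exist yet
    if os.path.exists(STORE):
        with open(STORE) as f:
            return json.load(f)
    return {}

def get_expiration(key, lifetime_minutes=1):
    """Return the key's expiration, creating and saving it on first use only."""
    store = _load()
    if key not in store:
        store[key] = (datetime.now() + timedelta(minutes=lifetime_minutes)).isoformat()
        with open(STORE, "w") as f:
            json.dump(store, f)
    return datetime.fromisoformat(store[key])

def is_expired(key):
    return datetime.now() > get_expiration(key)
```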
The key should start its expiration countdown at the first use and store that information somewhere so it doesn't refresh. |
Set Expiration Date in python for specific keys |
|python|arrays|store| |
null |
This can be handled in Angular on the component side instead of the HTML by using this routine:
- A reference to the collection is made.
- A subscription to that reference is created.
- We then check each document to see if the field we are looking for is empty.
- Once the empty field is found, we store that document id in a class variable.
First, create a service .ts component to handle the backend work (this component can be useful to many other components): Firestoreservice.
This component will contain these exported routines (see this post for that component, firestore.service.ts):
https://stackoverflow.com/questions/51336131/how-to-retrieve-document-ids-for-collections-in-angularfire2s-firestore
In your component, import this service and inject it in your constructor:
    private firestore: Firestoreservice;
Done!
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
this.firestoreData = this.firestore.getCollection(this.dataPath); //where dataPath = the collection name
this.firestoreData.subscribe(firestoreData => {
for (let dID of firestoreData ) {
if (dID.field1 == this.ClassVariable1) {
if (dID.field2 == '') {
this.ClassVariable2 = dID.id; //assign the Firestore documentID to a class variable for use later
}
}
}
} );
<!-- end snippet -->
|
{"Voters":[{"Id":1631193,"DisplayName":"Gabe Sechan"},{"Id":7867822,"DisplayName":"Anurag Srivastava"},{"Id":2851937,"DisplayName":"Jay"}]} |
Use `CROSS JOIN` and a subquery to get your desired result:
```
Select AgeGroup,
       NumberOfFans / CONVERT(decimal(18,7), NumberOfFans2) as Share
From
    (Select AgeGroup,
            SUM(NumberOfFans) as NumberOfFans
     From FansPerGenderAge
     WHERE date = (SELECT max(date) from FansPerGenderAge)
     GROUP BY AgeGroup) a
cross join
    (Select SUM(NumberOfFans) as NumberOfFans2
     From FansPerGenderAge
     WHERE date = (SELECT max(date) from FansPerGenderAge)) b
``` |
You need to move the tooltip initialization into the [ngAfterViewInit()](https://angular.io/api/core/AfterViewInit) hook instead, which runs once the view is initialized:
ngAfterViewInit() {
var tooltipTriggerList = [].slice.call(document.querySelectorAll('[data-bs-toggle="tooltip"]'))
var tooltipList = tooltipTriggerList.map(function (tooltipTriggerEl) {
return new Tooltip(tooltipTriggerEl)
});
}
__[stackblitz demo](https://stackblitz.com/edit/angular-17-starter-project-wkbz4y?file=src%2Fmy-component%2Fmy-component.component.ts)__ |
I installed:
npm install @visurel/iconify-angular
npm i iconify
then in the shared folder I created an icon component and pasted into icon.component.ts from the docs:
https://www.npmjs.com/package/@visurel/iconify-angular
import { Component } from '@angular/core';
import home from '@iconify/icons-mdi/home';
import groupAdd from '@iconify/icons-mdi/group-add';
import bellSlash from '@iconify/icons-fa-solid/bell-slash';
@Component({
selector: 'ic-root',
templateUrl: './app.component.html',
styleUrls: ['./app.component.css']
})
export class AppComponent {
homeIcon = home;
groupAddIcon = groupAdd;
bellSlashIcon = bellSlash;
}
but I got an error:
TS2307: Cannot find module '@iconify/icons-mdi/home' or its corresponding type declarations.
How can I fix it and install Iconify in my Angular project? |
Can't install Iconify in Angular 17 |
|javascript|angular| |