I have set up firebase analytics for my app including some custom events. Events are correctly displayed in the firebase console, and I can see for instance how many times an event has occurred in the specified time frame in total.
However, it would also be interesting to see in what order these events are called, to better understand how the user interacts with the app. Is there a way to display the occurred events per user session, or maybe even the occurred events in chronological order?
You can see this information in the realtime/DebugView view and in Crashlytics, but these are special use cases...
Any advice would be welcome! |
Google Firebase event logging: way to see sequence of events |
|flutter|firebase|google-analytics| |
I am creating a website for the sale of goods, and for each product I create a card based on data from the database. When a new card appears, a dynamic page based on the same database data should be automatically connected to it. The page is created, but the data from the model is not loaded. I watched the Django guide, and it showed this method using DetailView, from where I took the code. Please help me, I don't understand what the problem is.
My code:

views.py:
```
class car(DetailView):
    model = inventory
    template_name = 'main/cars.html'
    context_object_name = 'inventory'
```
urls.py:
```
urlpatterns = [
    path('inventory/<int:pk>', views.car.as_view(), name='cars'),
]
```
cars.html:
```
{% extends 'main/layout.html' %}
{% block main %}
{% for el in invent %}
<div class="main-card ">
    <img src='{{ el.img_1 }}' style="">
    <h3 style="">{{ el.name }}</h3>
    <h4 style="">{{ el.rent }}</h4>
    <button><a href="{% url 'cars' el.id %}">Details</a></button>
</div>
{% endfor %}
{% endblock %}
```
models.py:
```
class inventory(models.Model):
    name = models.CharField('Name', max_length=100)
    type = models.CharField('Type of car', max_length=6)
    img_1 = models.ImageField(upload_to='./sql_imgs')
    img_2 = models.ImageField(upload_to='./sql_imgs')
    img_3 = models.ImageField(upload_to='./sql_imgs')
    img_4 = models.ImageField(upload_to='./sql_imgs')
    img_5 = models.ImageField(upload_to='./sql_imgs')
    img_6 = models.ImageField(upload_to='./sql_imgs')
    img_7 = models.ImageField(upload_to='./sql_imgs')
    img_8 = models.ImageField(upload_to='./sql_imgs')
    MSRP = models.CharField('msrp', max_length=40)
    Purchase = models.CharField('purchase', max_length=40)
    rent = models.CharField('rent', max_length=40)
    specs = models.TextField('specs')
    text = models.TextField('About car')

    def __str__(self):
        return self.name

    class Meta:
        verbose_name = 'Inventory'
        verbose_name_plural = 'Inventory'
```
I've read a lot of articles, and I haven't found anything worthwhile. I hope I can find the answer here. |
YUV is a raw **pixel** format, meaning there is absolutely no information in it other than the pixels' intensity values.
If you want to keep your VUI data:
- Pre-save the FPS elsewhere and/or use a custom delay (FPS) during the decoding process.
- Write your YUV data into a format that supports timing info (_eg:_ the `.y4m` format). |
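For the second option, the Y4M layout is simple enough to sketch in a few lines of Python (a minimal sketch of the container idea, not production code; the file name and frame bytes are placeholders):

```python
def write_y4m(path, frames, width, height, fps_num, fps_den=1):
    """Wrap raw planar 4:2:0 YUV frames in a minimal Y4M container.

    `frames` is an iterable of bytes objects, each one frame of
    width*height*3//2 bytes (Y plane, then subsampled U and V planes).
    The F tag stores the frame rate, which a plain .yuv file cannot.
    """
    header = f"YUV4MPEG2 W{width} H{height} F{fps_num}:{fps_den} Ip A1:1 C420\n"
    with open(path, "wb") as f:
        f.write(header.encode("ascii"))
        for frame in frames:
            f.write(b"FRAME\n")  # each frame is prefixed by a FRAME marker
            f.write(frame)
```

A decoder such as ffmpeg or mpv can then recover the frame rate from the `F` tag in the header instead of you having to supply it separately.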
In this line of your code, `video_clip = VideoClip(image_clip)`, you are treating `image_clip` as a function. If you check the documentation for the `VideoClip` class, the first item in the parameter list is `make_frame`, a function that generates a frame at a given time, but you are passing the `ImageClip` instance directly. One solution is to pass a lambda as the first parameter of the `VideoClip` class: the lambda takes a time parameter and calls the `.get_frame()` method of the `ImageClip` instance with it.
Something like this:

    video_clip = VideoClip(make_frame=lambda t: image_clip.get_frame(t)) |
I recently came across a similar project.
[The project is finally deployed here.][3]
I am using the latest version of Next.js and Docusaurus.
The most important step is to read the Next.js documentation.
I found the rewrites method in the Next.js documentation.
[Rewrites allow you to map an incoming request path to a different destination path.][1]
```js
// next.config.mjs
/** @type {import('next').NextConfig} */
const nextConfig = {
  reactStrictMode: true,
  rewrites: async () => [
    {
      source: "/doc",
      destination: "/doc/index.html",
    },
  ],
};

export default nextConfig;
```
I have recorded the detailed solution process on my blog.
[How to use Docusaurus in Next.js projects?][2]
[1]: https://nextjs.org/docs/app/api-reference/next-config-js/rewrites
[2]: https://martinadamsdev.medium.com/how-to-use-docusaurus-in-next-js-projects-0292003fd3c8
[3]: https://next-docusaurus-martinadamsdev.vercel.app/doc |
HTML
```html
<span id="spn"></span>
```
JS :
```js
// A <span> does not fire a "change" event; observe DOM mutations instead
const span = document.getElementById("spn")
const observer = new MutationObserver(function () {
  let value = span.innerText
  // Your code
})
observer.observe(span, { childList: true, characterData: true, subtree: true })
``` |
I've written the following code to compare the theoretical alpha = 0.05 with the empirical one from the built-in t.test in RStudio:
```
set.seed(1)
N=1000
n = 20
k = 500
poblacion <- rnorm(N, 10, 10) #Sample
mu.pob <- mean(poblacion)
sd.pob <- sd(poblacion)
p <- vector(length=k)
for (i in 1:k){
muestra <- poblacion[sample(1:N, n)]
p[i] <- t.test(muestra, mu = mu.pob)$p.value
}
a_teo = 0.05
a_emp = length(p[p<a_teo])/k
sprintf("alpha_teo = %.3f <-> alpha_emp = %.3f", a_teo, a_emp)
```
And it works, printing both the theoretical and empirical values. Now I want to make it more general, for different values of 'n', so I wrote this:
```
set.seed(1)
N=1000
n = c(2, 10, 15, 20)
k = 500
for (i in n){
poblacion <- rnorm(N, 10, 10)
mu.pob <- mean(poblacion)
sd.pob <- sd(poblacion)
p <- vector(length=k)
for (i in 1:k){
muestra <- poblacion[sample(1:N, length(n))]
p[i] <- t.test(muestra, mu = mu.pob)$p.value
}
a_teo = 0.05
a_emp = length(p[p<a_teo])/k
sprintf("alpha_teo = %.3f <-> alpha_emp = %.3f", a_teo, a_emp)
}
```
But I don't get the printed output. Any ideas about what is wrong? |
You may be missing some intents, such as 'GuildVoiceStates':
```js
const client = new Client({
  intents: [
    GatewayIntentBits.Guilds,
    GatewayIntentBits.GuildMessages,
    GatewayIntentBits.MessageContent,
    GatewayIntentBits.GuildMembers,
    GatewayIntentBits.GuildVoiceStates,
  ],
});
```
Also, it's better to set an absolute path to the **resource**:
```js
const path = require('path');
// ...
resource = createAudioResource(path.join(__dirname, 'sounds', 'alert.mp3'));
```
|
I'm no Linux expert, but I'm trying to learn.
I wrote a simple bash script to run my VPN connection automatically on startup:
```
#!/bin/bash
cyberghostvpn --country-code PL --connect
```
It works fine when I run it through my CLI:
```
sudo ./cyberghostvpn-launcher.sh
```
I tried to wrap it in a service like this:
```
[Unit]
Description=VPN start
After=network-online.target

[Service]
ExecStart=/home/xxxx/Documents/cyberghostvpn-launcher.sh

[Install]
WantedBy=multi-user.target
```
I ran the enable command (OK):
`$ sudo systemctl enable vpn-launch.service`
But I get an error when trying to start the service:
```
Mar 30 12:18:07 uproxy cyberghostvpn-launcher.sh[59135]: TypeError: can only concatenate str (not "NoneType") to str
```
I can't understand what's wrong.
Any ideas, folks?
Thanks for any insights!
|
bash script works in CLI but fails in systemctl service |
|linux|systemctl| |
This can be handled in Angular on the component side instead of the HTML by using this routine:

- A reference to the collection is made.
- A subscription to that reference is created.
- We then check each document to see if the field we are looking for is empty.
- Once the empty field is found, we store that document ID in a class variable.

First, create a service to handle the backend work (this service can be useful to many other components): Firestoreservice.
This service will contain these exported routines (see this post for that component, firestore.service.ts):
https://stackoverflow.com/questions/51336131/how-to-retrieve-document-ids-for-collections-in-angularfire2s-firestore
In your component, import this service and make a
'private firestore: Firestoreservice;'
reference in your constructor.
Done!
```
this.firestoreData = this.firestore.getCollection(this.dataPath); //where dataPath = the collection name
this.firestoreData.subscribe(firestoreData => {
  for (let dID of firestoreData) {
    if (dID.field1 == this.ClassVariable1) {
      if (dID.field2 == '') {
        this.ClassVariable2 = dID.id; //assign the Firestore documentID to a class variable for use later
      }
    }
  }
});
```
|
This solution creates groups for the "D"s and, for each one, identifies the first "2A" position. With this information, a unique id is created. Look:

```r
library(tidyverse) # (Edited)

df <- mutate(df, id = row_number())

aux <- df %>%
  mutate(d_group = cumsum(if_else(activity == "D", 1, 0))) %>%
  distinct(d_group, location, activity, .keep_all = TRUE) %>%
  filter(location == 2, activity == "A", d_group > 0) %>%
  pull(id)

df <- mutate(df, id = cumsum(if_else(dplyr::lag(id) %in% aux, 1, 0)) + 1)
rm(aux)
```
```
> df
   location activity id
1         2        A  1
2         3        B  1
3         3        C  1
4         2        D  1
5         1        D  1
6         2        B  1
7         2        A  1
8         2        A  2
9         1        B  2
10        3        A  2
11        3        C  2
12        1        D  2
13        2        A  2
14        3        B  3
15        2        B  3
16        2        D  3
17        1        A  3
18        2        D  3
19        3        D  3
20        2        A  3
21        1        C  4
```
|
[![cmd palette][1]][1]
Open menu *File* → *Preferences* → *Settings* and add the following setting:
```json
"workbench.colorCustomizations": {
"titleBar.activeBackground": "#553955" // Change this color!
},
"window.titleBarStyle": "custom"
```
[![result][2]][2]
From the following source tutorial:
*[Colorful Visual Studio Code titlebars for better productivity][3]*
[1]: https://i.stack.imgur.com/qjbQJ.png
[2]: https://i.stack.imgur.com/EtjrM.gif
[3]: https://medium.com/@camdenb/colorful-vscode-titlebars-for-better-productivity-b05a619defed |
Starting from @Clemens' answer, the TextBlock's Margin can be bound to the ActualWidth of the TextBlock. Setting the left margin to ActualWidth / -2 in a converter then removes the need to hardcode the Width and Margin of the TextBlocks:
```
<Grid>
<Grid.Resources>
<converters:ActualWidthToNegativeHalfLeftMarginConverter x:Key="ActualWidthToNegativeHalfLeftMarginConverter" />
</Grid.Resources>
<Grid.ColumnDefinitions>
<ColumnDefinition Width="2*" />
<ColumnDefinition Width="1*" />
<ColumnDefinition Width="3*" />
</Grid.ColumnDefinitions>
<TextBlock Grid.Column="1"
Text="textOfTextBlock1"
HorizontalAlignment="Left">
<TextBlock.Margin>
<Binding Path="ActualWidth" RelativeSource="{RelativeSource Self}" Converter="{StaticResource ActualWidthToNegativeHalfLeftMarginConverter}" />
</TextBlock.Margin>
</TextBlock>
<TextBlock Grid.Column="2"
Text="textOfTextBlock2"
HorizontalAlignment="Left">
<TextBlock.Margin>
<Binding Path="ActualWidth" RelativeSource="{RelativeSource Self}" Converter="{StaticResource ActualWidthToNegativeHalfLeftMarginConverter}" />
</TextBlock.Margin>
</TextBlock>
</Grid>
```
```
public class ActualWidthToNegativeHalfLeftMarginConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        return new Thickness((double)value / -2, 0, 0, 0);
    }

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
        throw new NotImplementedException();
    }
}
``` |
You are wrapping your code with a `try`/`catch` - good.
However, you are catching a specific exception, `EmailException`, which is probably not what is being thrown, so the exception bubbles up somewhere else.
Even if you changed `EmailException` to `Exception`, you likely *still* won't catch the problem: when you run the code outside of the NetBeans environment, some dependency is missing, and that kind of error is thrown as a subclass of `java.lang.Error` - probably `LinkageError`. If you study the [class hierarchy][1] carefully, you will find that these errors are not subclasses of `Exception`.
My recommendation is to change the `try`/`catch` to catch `Throwable`... the highest point in the error/exception tree. Then you will find out what is really going on.
You'll likely discover that some dependency is missing which NetBeans was implicitly adding to the classpath for you.
[1]: https://docs.oracle.com/javase/8/docs/api/java/lang/LinkageError.html |
According to the VsCode documentation: [User Defined snippets variables](https://code.visualstudio.com/docs/editor/userdefinedsnippets#_variables) , one can use certain variables to properly customize the snippets. They also work with [Transformations](https://code.visualstudio.com/docs/editor/userdefinedsnippets#_variable-transforms), which make the custom snippets even more powerful.
I wonder if there is any way to create custom variables that can be used in snippet generation.
The variables defined in the documentation work just fine in snippet generation. |
Can you create custom VsCode snippets variables? |
|visual-studio-code|variables|transformation|code-snippets| |
I'd be really grateful if someone could help me with this!
I've been working with an old open source C++ project from github that has not had an official update since Python 3.4: https://github.com/Mindwerks/plate-tectonics
Using a fresh install of Python 3.10.11 I've been able to compile this project in a PyCharm venv using the included setup.py file from this folder: https://github.com/Mindwerks/plate-tectonics/blob/master/pybindings/setup.py
When I compile this, I am able to manually copy+paste the resulting *.pyd file into Pycharm's venv\Lib\site-packages folder. This package runs exactly as it should.
However, for practical reasons, I want to switch to using a conda environment. I've created this environment (python 3.10.13, possibly Cython?) and followed the same steps that worked for PyCharm: open the environment in the conda terminal, run "python setup.py build" and copy the *.pyd file it compiles into the environment's \Lib\site-packages folder.
However, now if I try and run functions from the package, python immediately crashes.
I am completely stumped on what could be the problem and how to resolve it. I don't have much experience with C++ or compiling packages, and I don't get any error messages suggesting what I might be doing wrong. I am assuming conda environments have a different behavior from Pycharm venv's, but I have been unable to find any suggestion as to what I should be doing differently.
If anyone has experience with this and knows what I can do, you'd be a huge help!!! |
Need help compiling C++ Python project from Github for Conda environment |
Compare theoretical and empirical alpha in R |
|r|rstudio|alpha| |
In Jupyter Notebook, run:
!echo y | jupyter kernelspec uninstall unwanted-kernel
In the [Anaconda][1] prompt, run:
jupyter kernelspec uninstall unwanted-kernel
[1]: https://en.wikipedia.org/wiki/Anaconda_(Python_distribution)
|
There are two ways that I found. Either go to the directory where the kernels reside and delete them from there, or use the commands below.
List all kernels and grab the name of the kernel you want to remove:
jupyter kernelspec list
to get the paths of all your kernels.
Then simply uninstall your unwanted-kernel
jupyter kernelspec remove kernel_name
|
I am trying to make a leaderboard, but it says "Interaction has already been acknowledged", and I am very confused about how to fix this. If someone could help, that would be awesome!
```
const { Client, Intents, MessageEmbed, MessageActionRow, MessageButton } = require('discord.js');
const fs = require('fs');
const client = new Client({
intents: [
Intents.FLAGS.GUILDS,
Intents.FLAGS.GUILD_MESSAGES,
Intents.FLAGS.GUILD_MESSAGE_REACTIONS,
Intents.FLAGS.GUILD_MESSAGE_TYPING,
Intents.FLAGS.DIRECT_MESSAGES,
],
});
const { token, prefix } = require('./config.json');
// Check if leaderboard file exists
let leaderboard = [];
if (fs.existsSync('leaderboard.json')) {
// If file exists, read the file and parse it to JSON
leaderboard = JSON.parse(fs.readFileSync('leaderboard.json', 'utf8'));
} else {
// If file doesn't exist, create a new file with an empty array
fs.writeFileSync('leaderboard.json', JSON.stringify([]));
}
let currentPage = 0; // Define currentPage at a higher scope
let leaderboardMessage = null;
// Function to update leaderboard
function updateLeaderboard() {
// Sort leaderboard by points in descending order
leaderboard.sort((a, b) => b.points - a.points);
// Create embed for leaderboard
const embed = new MessageEmbed()
.setTitle(`Leaderboard - Page ${currentPage + 1}`)
.setColor('#0099ff')
.setDescription('Tournament Leaderboard');
// Add players to the embed
leaderboard.slice(currentPage * 10, (currentPage + 1) * 10).forEach((player, index) => {
embed.addField(`#${currentPage * 10 + index + 1} ${player.name}`, `Points: ${player.points}\nWins: ${player.wins}\nLosses: ${player.losses}`);
});
// Create buttons for navigation
const row = new MessageActionRow()
.addComponents(
new MessageButton()
.setCustomId('previous')
.setLabel('Previous')
.setStyle('PRIMARY'),
new MessageButton()
.setCustomId('next')
.setLabel('Next')
.setStyle('PRIMARY'),
);
// Find and update existing leaderboard message
const channel = client.channels.cache.get('1211897926276751398');
if (leaderboardMessage) {
leaderboardMessage.edit({ embeds: [embed], components: [row] });
} else {
channel.send({ embeds: [embed], components: [row] }).then(message => {
leaderboardMessage = message;
});
}
// Write the updated leaderboard back to the file
fs.writeFileSync('leaderboard.json', JSON.stringify(leaderboard));
}
// Function to add player to leaderboard
function addPlayerToLeaderboard(name) {
const playerExists = leaderboard.find(player => player.name === name);
if (!playerExists) {
leaderboard.push({ name, points: 0, wins: 0, losses: 0 });
}
}
// Bot ready event
client.once('ready', () => {
console.log('Bot is ready!');
updateLeaderboard();
setInterval(updateLeaderboard, 10 * 60 * 1000); // Update leaderboard every 10 minutes
});
// Bot message event
client.on('message', message => {
if (!message.content.startsWith(prefix) || message.author.bot) return;
const args = message.content.slice(prefix.length).trim().split(/ +/);
const command = args.shift().toLowerCase();
if (command === 'enterlb') {
addPlayerToLeaderboard(message.author.username);
updateLeaderboard();
}
});
// Bot interaction event
client.on('interaction', interaction => {
if (!interaction.isButton()) return;
if (interaction.customId === 'previous') {
if (currentPage > 0) {
currentPage--;
}
} else if (interaction.customId === 'next') {
if (currentPage < Math.ceil(leaderboard.length / 10) - 1) {
currentPage++;
}
}
updateLeaderboard();
interaction.reply({ content: 'Page updated!', ephemeral: true });
});
client.login(token);
```
I have tried many things but still can't figure it out. I tried making it use a JSON file, leaderboard.json, but that did not seem to work. |
Interaction has already been acknowledged when trying to make leaderboard script |
|discord.js|leaderboard| |
The following code does not execute any test
```
let testCases = [];

beforeAll(async () => {
  input = {description: "x", queryStatement: "", expectedStatusCode: 200, assertStatement: ""}
  testCases.push(input);
});

testCases.forEach(({ description, queryStatement, expectedStatusCode, assertStatement }) => {
  describe(`GraphQL > query: ${description} with query ${queryStatement}`, () => {
    it(`should return status code ${expectedStatusCode}`, async () => {
      const token = await fetchData(username, password);
      // Encode credentials
      const response = await axios.post('https://fadev.fasolutions.com/graphql', {query: queryStatement}, {
        headers: {'Authorization': `Bearer ${token.access_token}`}
      });
      expect(response.status).toBe(expectedStatusCode);
      //console.log(response.data);
      eval(assertStatement);
    });
  });
});
```
I expected the `beforeAll` to run and fill the list of `testCases`, but nothing happened. |
I have researched back and forth on this issue, and so far nothing has worked. I'm unable to open Sublime while using Git with a command such as `sublime .` or `subl .`.
I have used other commands in Git; it shows in `git config --list`, but it is not opening.
Is there a way to make sublime as core.editor (git) on Mac |
|git|sublimetext3|editor|git-bash|text-editor| |
Your fairly "verbose" code can be written a lot shorter if you leave out all the repetitions:
<!-- begin snippet:js console:true -->
<!-- language:lang-html -->
A<button id="na">1</button>
B<button id="nb">1</button>
X<button id="nx">1</button>
Y<button id="ny">1</button>
Z<button id="nz">1</button>
<div id="highest"></div>
<!-- language:lang-js -->
const btns=[...document.querySelectorAll("button")],
highest=document.getElementById("highest");
btns.forEach(b=>b.onclick=()=>{
b.textContent=+b.textContent+1;
highest.textContent="The most popular button ID is "+btns.reduce((a,c)=>(+c.textContent>+a.textContent?c:a)).id
});
setInterval(()=>btns[Math.floor(btns.length*Math.random())].click(), 1000);
<!-- end snippet -->
The above snippet does not use local storage as this is not supported on Stackoverflow. If you require this feature, then the local storage command should be included within the `onclick` function definition.
|
I'm having trouble with special characters.
It worked:
```
x <- 1:10
y <- rnorm(10)
plot(x, y, main = "Main Title")
```
It doesn't work:
```
x <- 1:10
y <- rnorm(10)
plot(x, y, main = "Main Tütle")
```
`Error: unexpected symbol`
Also, I cannot knit, I cannot run basic Shiny code, and so on. I can't even save a basic plot to my desktop (the file path also has special characters).
- R version: 4.3.2
- Windows language: Turkish |
Special Characters Issue in R 4.3.2 |
|r|utf-8| |
I'm trying to build ffmpeg with Rust using the ffmpeg-next library and statically link ffmpeg on Windows inside the MSYS2 `ucrt64` environment.
When I compile with `cargo build --release`, it throws many undefined reference errors:
```
undefined reference to `ENGINE_load_builtin_engines'
undefined reference to `GdipBitmapSetPixel'
undefined reference to `EVP_PKEY_CTX_free'
```
The setup:
```shell
pacman -S --needed $MINGW_PACKAGE_PREFIX-{rust,ffmpeg}
mkdir project
cd project
cargo init --bin
cargo add ffmpeg-next -F static
```
The entrypoint `main.rs`
```rust
use ffmpeg_next;
fn main() {
ffmpeg_next::init().unwrap();
}
```
Build command
```console
cargo build --release
``` |
Static link ffmpeg with Rust in msys2 |
|rust|ffmpeg|compilation| |
I'm following the Next.js 14 Learning Tutorial ([https://nextjs.org/learn/dashboard-app/fetching-data][1]) and I'm able to do everything they show.
To show data in the frontend they first create a file with the definitions of our data:
**definitions.tsx**
```ts
// This file contains type definitions for your data.
// It describes the shape of the data, and what data type each property should accept.
// For simplicity of teaching, we're manually defining these types.
// However, these types are generated automatically if you're using an ORM such as Prisma.
export type User = {
  id: string;
  name: string;
  email: string;
  password: string;
};
```
then in another file, we have the query:
**data.tsx**
```ts
import { sql } from '@vercel/postgres';
import { unstable_noStore as noStore } from 'next/cache';
import { User } from './definitions';

export async function fetchUsers() {
  noStore();
  try {
    const data = await sql<User>`SELECT * FROM users LIMIT 5`;
    const result = data.rows;
    return result;
  } catch (error) {
    console.error('Database Error:', error);
    throw new Error('Failed to fetch data.');
  }
}
```
the ui with the fetched data is created in another file.
I will write the example they give first:
**latest-invoices.tsx**
```ts
import { ArrowPathIcon } from '@heroicons/react/24/outline';
import clsx from 'clsx';
import Image from 'next/image';
import { lusitana } from '@/app/ui/fonts';
import { LatestInvoice } from '@/app/lib/definitions';

export default async function LatestInvoices({
  latestInvoices,
}: {
  latestInvoices: LatestInvoice[];
}) {
  return (
    <div className="flex w-full flex-col md:col-span-4 lg:col-span-4">
      <h2 className={`${lusitana.className} mb-4 text-xl md:text-2xl`}>
        Latest Invoices
      </h2>
      <div className="flex grow flex-col justify-between rounded-xl bg-gray-50 p-4">
        {<div className="bg-white px-6">
          {latestInvoices.map((invoice, i) => {
            return (
              <div
                key={invoice.id}
                className={clsx(
                  'flex flex-row items-center justify-between py-4',
                  {
                    'border-t': i !== 0,
                  },
                )}
              >
                ...
```
it's way more complex than what I want to do. I want to display the user data in rows in the page. For that I created a simple file like that:
**user-list.tsx**
```ts
import { User } from '@/app/lib/definitions';

export default async function UserList({
  userList,
}: {
  userList: User[];
}) {
  userList.map((variable, i) => {
    return (
      <p>
        {variable.id} <br />
        {variable.name} <br />
        {variable.email} <br />
        {variable.password}
      </p>
    )});
}
```
Using that, I receive the following error message:
```
Error: Cannot read properties of undefined (reading 'map')
   6 | userList: User[];
   7 | }) {
>  8 | userList.map((variable, i) => {
     |          ^
   9 | return (
  10 | <p>
  11 | {variable.id} <br />
```
As a matter of fact, I'm not very aware of what this chunk of the code (latest-invoices.tsx) does, and it would be great if someone could explain it to me. Is it specific to TypeScript?
```ts
export default async function LatestInvoices({
  latestInvoices,
}: {
  latestInvoices: LatestInvoice[];
}) {
```
and in my file (user-list.tsx), I tried to copy this logic without knowing well what I was doing, and for sure the problem is here. It's mapping the array from the users query somehow, but I'm not sure the data fits an array...
```ts
export default async function UserList({
  userList,
}: {
  userList: User[];
}) {
  userList.map((variable, i) => {
```
So I'd appreciate it if you could show me the simplest way to display the user data like this:
id01 - UserName01 - email01 - password01
id02 - UserNmae02 - email02 - password02
etc.
[1]: https://nextjs.org/learn/dashboard-app/fetching-data
|
I have many JSON files (example below), and I am interested in converting them to a tabular CSV version (example below) with the help of a Python script in a Jupyter notebook provided by one of my collaborators (example below). The notebook works for extracting one gene_name along with the summary and scores, but I am looking to extract two gene_name columns. Primarily, from the JSON file, I am interested in extracting the "gene_name" field and making it two separate columns, gene_name_A and gene_name_B, plus Summary_Gene_A and Summary_Gene_B from the "brief_summary" field, and the scores for all 6 statements in the JSON file (well_characterized, biology, cancer, tumor, drug_targets, clinical_marker).
## Example json file:
```
{
"KAT2A": {
"official_gene_symbol": "KAT2A",
"brief_summary": "KAT2A, also known as GCN5, is a histone acetyltransferase that plays a key role."
},
"E2F1": {
"official_gene_symbol": "E2F1",
"brief_summary": "E2F1 is a member of the E2F family of transcription factors."
},
"The genes encode molecules that are well characterized protein-protein pairs": 7,
"The protein-protein pair is relevant to biology": 8,
"The protein-protein pair is relevant to cancer": 3,
"The protein-protein pair is relevant to interactions between tumor": 4,
"One or both of the genes are known drug targets": 5,
"One or both of the genes are known clinical marker": 6
}
```
## Example expected csv file:
``` r
#> gene_name_A gene_name_B Summary_Gene_A
#> 1 KAT2A E2F1 KAT2A, also known as GCN5.
#> 2 KRT30 KRT31 Keratin 30 is a cytokeratin.
#> Summary_Gene_B well_characterized
#> 1 E2F1 is a member of the E2F family of transcription. 7
#> 2 Keratin 31 is a cytokeratin. 7
#> Biology Cancer Tumor drug_targets clinical_biomarkers
#> 1 8 3 4 5 6
#> 2 8 2 3 1 5
```
## Jupyter notebook
```
# test azure
import sys, time, json
# from openai import OpenAI
import pandas as pd
import re
from glob import glob
# define key dictionary for each question for concrete formatting
mainkey_qFLU = {'summary':'Summary',
'well characterized':'well_characterized',
'biology':'Biology',
'cancer':'Cancer',
'tumor':'tumor',
'drug targets':'drug_targets',
'clinical markers':'clinical_markers'
}
def find_keyword(sline, keyLib):
for mk in keyLib.keys():
# Regular expression pattern to find all combinations of the letters in 'gene'
pattern = r'{}'.format(mk)
# Finding all matches in the sample text
matches = re.findall(pattern, sline, re.IGNORECASE)
if matches:
return keyLib[mk]
else:
next
return False
def convert_stringtodict(lines, keylib):
dict_line = {}
for k in lines:
ksplit = k.split(":")
if len(ksplit) ==2:
key_tmp = find_keyword(ksplit[0].strip("\'|\"|', |").strip(), keylib)
val_tmp = ksplit[1].strip("\'|\"|',|{|} ").strip()
if key_tmp and val_tmp:
if key_tmp == "Summary":
dict_line[key_tmp] = val_tmp
else:
try:
dict_line[key_tmp] = float(val_tmp)
except:
dict_line[key_tmp] = 0
else:
next
# print ("error in ", ksplit)
return dict_line
def get_qScore(q, question_dict, subkey):
q_scores = []
for gname in q.keys():
for model in q[gname].keys():
kx = convert_stringtodict(q[gname][model],question_dict)
kx.update({'gene_name':gname,
'runID':model,
"model_version":model.lstrip("datasvc-openai-testinglab-poc-").split("_")[0],
"subjectKey":subkey,})
q_scores.append(kx)
print (len(q_scores))
return pd.DataFrame(q_scores)
q1_DF = pd.DataFrame()
for k in glob("/Users/Documents/Projects/json_files/*.json"):
subkey_name = "-".join(k.split("/")[-1].split("_")[1:3])
print (k)
q1 = json.load(open(k,'r'))
tmpDF = get_qScore(q1,mainkey_qFLU, subkey=subkey_name)
q1_DF = pd.concat([q1_DF,tmpDF],axis=0)
print (q1_DF.shape)
q1_DF.to_csv("./Score_parse_1_2.csv")
```
I tried using the Jupyter notebook code mentioned above, which works for extracting one gene_name along with the summary and scores, but I am looking to extract two gene_name columns. |
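Not a fix to the notebook itself, but a minimal stdlib sketch of the two-gene extraction being asked for (column names are taken from the expected CSV above; the glob path and output file name are placeholders):

```python
import csv
import json
from glob import glob

def pair_json_to_row(doc):
    """Flatten one gene-pair JSON document into a flat dict (one CSV row).

    Assumes, as in the example, that gene entries are the dict-valued items
    and the six statement keys carry the numeric scores.
    """
    genes = {k: v for k, v in doc.items() if isinstance(v, dict)}
    scores = {k: v for k, v in doc.items() if not isinstance(v, dict)}
    (name_a, info_a), (name_b, info_b) = list(genes.items())[:2]
    row = {
        "gene_name_A": name_a,
        "gene_name_B": name_b,
        "Summary_Gene_A": info_a.get("brief_summary", ""),
        "Summary_Gene_B": info_b.get("brief_summary", ""),
    }
    # Map each long statement key onto a short column name by keyword.
    keywords = [
        ("well characterized", "well_characterized"),
        ("biology", "Biology"),
        ("cancer", "Cancer"),
        ("tumor", "Tumor"),
        ("drug targets", "drug_targets"),
        ("clinical marker", "clinical_biomarkers"),
    ]
    for statement, score in scores.items():
        for kw, col in keywords:
            if kw in statement.lower():
                row[col] = score
                break
    return row

if __name__ == "__main__":
    # Hypothetical paths: adjust to wherever your JSON files live.
    rows = [pair_json_to_row(json.load(open(p))) for p in glob("json_files/*.json")]
    cols = ["gene_name_A", "gene_name_B", "Summary_Gene_A", "Summary_Gene_B",
            "well_characterized", "Biology", "Cancer", "Tumor",
            "drug_targets", "clinical_biomarkers"]
    with open("Score_parse_pairs.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=cols)
        writer.writeheader()
        writer.writerows(rows)
```

The keyword list plays the same role as the notebook's `mainkey_qFLU` dictionary, just applied per statement key instead of per line.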
Error while converting json file to a csv file |
|python|json|csv|jupyter-notebook| |
Creating a script to ease the way of reading Apache Airflow data, I was able to retrieve failed tasks and other data, but could not find a way to get errors logs for the task that failed.
I looked through the docs for any possible endpoints that could help. Looking to get something back like this:
```
raise AirflowException(
airflow.exceptions.AirflowException: Bash command failed. The command returned a non-zero exit code 1.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
```
When I request logs I usually get an overview of what was done to the DAG.
https://airflow.apache.org/docs/apache-airflow/stable/stable-rest-api-ref.html#tag/Role |
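In case it is useful: the stable REST API also exposes per-try task-instance logs (worth checking the endpoint against your Airflow version's reference). A small sketch that only builds that URL — the host and IDs below are placeholders, and you would fetch the result with any HTTP client:

```python
# Build the stable REST API URL for a task instance's log of a given try.
# The base URL, dag_id, dag_run_id and task_id are placeholder values;
# the path shape follows the Airflow stable REST API reference.
def task_log_url(base_url, dag_id, dag_run_id, task_id, try_number):
    return (f"{base_url}/api/v1/dags/{dag_id}/dagRuns/{dag_run_id}"
            f"/taskInstances/{task_id}/logs/{try_number}")

url = task_log_url("http://localhost:8080", "my_dag", "run_1", "my_task", 1)
print(url)
```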
Retrieve Error Logs for Task Failure within Apache Airflow with STABLE REST API |
|python|api|rest|apache|airflow| |
null |
Would anyone suspect why my React Native app on the emulator just suddenly stop working and show a "No connection to the Metro server error" out of the blue?
The screenshots refer to the same error. The longer message one is what comes up after I refresh the emulator.
[First error](https://i.stack.imgur.com/YvkNP.png)
[Second error after refresh](https://i.stack.imgur.com/PG4fT.png)
System: MacOS 14.2.1
Android Studio: Flamingo 2022.2.1 Patch 1
Node: 18.15.0
I can give the settings of the emulator too, but I don't know how relevant they are since the emulator has worked 75% of the time.
I have still to find a pattern on when it decides to go awol on me but here's what has worked in the past.
- First time it happened: Just restarting the emulator and the Metro server worked
- Second time it happened: Just restarting the emulator and the Metro server DID NOT work. I had to run adb reverse tcp:8081 tcp:8081 which worked.
- Third time it happened(now): None of the above works in any capacity or order
Disclaimer: I have tried restarting everything, restarting by clearing cache on npx react-native, closing the emulator, closing the Metro server, closing the terminals, killing the node process manually with kill -9, deleting node modules and installing again.
I didn't do anything to mess with the server or app; I reverted some of the changes I made to see if that was what was bothering it (not so likely, because the change was simply to one of the components, nothing that would affect networking in any way).
This is my very first time working with React Native and emulators so of course there is a million things I could have done wrong, but this thing bamboozles me because it works fine one time and then suddenly not. |
|python|c++|python-3.x|anaconda|conda| |
I'm trying to implement a K-means algorithm with semi-random choosing of the initial centroids.
I'm using Python to process the data, with numpy to choose the initial centers, and the stable API to implement the iterative part of K-means in C.
However, when I enter relatively large datasets, I get a Segmentation Error (core dumped). So far I have tried to manage memory better and free all the global arrays before going back to Python; I also tried to free all local arrays before the end of the function.
This is the Python code:
```
def Kmeans(K, iter, eps, file_name_1, file_name_2):
    compound_df = get_compound_df(file_name_1, file_name_2)
    N, d = int(compound_df.shape[0]), int(compound_df.shape[1])
    data = np.array(pd.DataFrame.to_numpy(compound_df), dtype=float)
    assert int(iter) < 1000 and int(iter) > 1 and iter.isdigit(), "Invalid maximum iteration!"
    assert 1 < int(K) and int(K) < N, "Invalid number of clusters!"
    PP_centers = k_means_PP(compound_df, int(K))
    actual_centroids = []
    for center_ind in PP_centers:
        actual_centroids.append(data[center_ind])
    actual_centroids = np.array(actual_centroids, dtype=float)
    data = (data.ravel()).tolist()
    actual_centroids = (actual_centroids.ravel()).tolist()
    print(PP_centers)
    print(f.fit(int(K), int(N), int(d), int(iter), float(eps), actual_centroids, data))
```
This is the code in C that manages the PyObject creation; this is the Python object being returned to the Kmeans function:
```
PyObject* convertCArrayToDoubleList(double* arr){
    int i, j;
    PyObject* K_centroid_list = PyList_New(K);
    if(!K_centroid_list)
        return NULL;
    for(i=0;i<K;++i){
        PyObject* current_center = PyList_New(d);
        if(!K_centroid_list){
            Py_DECREF(K_centroid_list);
            return NULL;
        }
        for(j=0;j<d;++j){
            PyObject* num = PyFloat_FromDouble(arr[i*d+j]);
            if(!num){
                Py_DECREF(K_centroid_list);
                Py_DECREF(current_center);
                return NULL;
            }
            PyList_SET_ITEM(current_center,j,num);
        }
        PyList_SET_ITEM(K_centroid_list,i,current_center);
    }
    return K_centroid_list;
}
```
I ran valgrind on some samples; there were some memory leaks, but I couldn't identify the leak...
I also tried various freeing and Py_DECREF combinations and attempted to reduce the leakage, but to no avail...
Thanks for helping!
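For context, the semi-random (k-means++-style) selection of initial centers can be sketched in pure Python like this (an illustration of the idea only, not the asker's actual `k_means_PP`):

```python
import random

# k-means++-style center selection: each new center is drawn with
# probability proportional to its squared distance from the nearest
# already-chosen center. Pure-Python sketch with hypothetical 2D points.
def kmeans_pp_init(data, k, seed=0):
    rng = random.Random(seed)
    centers = [rng.choice(data)]
    while len(centers) < k:
        # squared distance of every point to its nearest chosen center
        d2 = [min(sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers)
              for p in data]
        r = rng.uniform(0, sum(d2))
        acc = 0.0
        for p, w in zip(data, d2):
            acc += w
            if acc >= r:
                centers.append(p)
                break
    return centers

points = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
print(kmeans_pp_init(points, 2))
```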
Python and stable API Segmentation Error (core dumped) |
|python|c|memory-management|cpython| |
null |
I gave up trying to use class based tasks. No idea how to get that to work.
I ended up using task_success, to perform operations after a task completes.
```
@task_success.connect
def task_success_handler(sender=None, result=None, **kwargs):
    if sender.name == 'customer':
        # Extract result dictionary
        notification_url = result['notification_url']
        job_id = result['job_id']
        job_status = result['status']
        task_id = sender.request.id
        task_state = celery.AsyncResult(task_id).state
        logger.info(f"Task {task_id}: {task_state}, job_id: {job_id}: {job_status}")

        # send async response
        send_async_response(db, notification_url, task_id, job_id, "200", job_status, task_state, f"Task succeeded for customer.")


@celery.task(name='customer')
def backend(jobs, job_id, backend_config, method, auth_user, notification_url, sleep_time=0):
    logger.info(f"sleeping for {str(sleep_time)} sec")
    time.sleep(int(sleep_time))
    outcome = customer_work(jobs, job_id, backend_config, method, auth_user, notification_url)
    return outcome
``` |
You cannot directly use `localStorage` to init the state value because as you said `localStorage` is not available on the server side.
You can init the state with a default value and use an `effect` to update its value as soon as the app is loaded on client side.
```
const [color, setColor] = useState<...>('blue');

useEffect(() => {
  setColor(localStorage?.getItem('preferred_color') ?? 'blue');
}, []);

useEffect(() => {
  localStorage?.setItem('preferred_color', color);
}, [color]);

return (
  <ColorsContext.Provider value={{ color, setColor }}>
    {children}
  </ColorsContext.Provider>
);
``` |
for me:

```
@EnableWebSecurity
public class WebSecurityConfig {

    @Bean
    public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        // ...
        http.cors(Customizer.withDefaults()); // disable this line to reproduce the CORS 401
        return http.build();
    }
}
```
I have a multipolygon shapefile that I want to rotate. I can do the rotation, but the problem is that the rotation changes the inner vertices. This creates overlap of polygons, which I don't want.
This is what I have tried:
```
import geopandas as gpd

input_poly_path = "poly3.shp"
gdf = gpd.read_file(input_poly_path)
explode = gdf.explode(ignore_index=True)
ex = explode.rotate(-90, origin="centroid")
g = gpd.GeoDataFrame(columns=['geometry'], geometry='geometry')
g["geometry"] = ex
```
[![Original polygon][1]][1]
[![Rotated polygon][2]][2]
[1]: https://i.stack.imgur.com/ndiPy.jpg
[2]: https://i.stack.imgur.com/VZWtR.jpg
https://drive.google.com/drive/folders/1HJpnNL-iXU_rReQzVcDGuyWZ8IjciKP8?usp=drive_link
Link to the polygon |
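For what it's worth, the difference between rotating about a shared origin and rotating each part about its own centroid can be seen with plain point math (a generic pure-Python illustration with hypothetical points, not geopandas-specific): rotating every part about one common origin is a single rigid motion, so the distances between parts are preserved.

```python
import math

# Rotate a 2D point about a chosen origin. Rotating ALL points about the
# SAME origin preserves their relative positions; rotating each part about
# its own centroid does not.
def rotate_point(point, angle_deg, origin=(0.0, 0.0)):
    ox, oy = origin
    rad = math.radians(angle_deg)
    x, y = point[0] - ox, point[1] - oy
    return (ox + x * math.cos(rad) - y * math.sin(rad),
            oy + x * math.sin(rad) + y * math.cos(rad))

p1 = rotate_point((1.0, 0.0), -90)  # ~ (0, -1)
p2 = rotate_point((2.0, 0.0), -90)  # ~ (0, -2)
# The distance between the two points is still 1 after rotation.
print(p1, p2)
```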
Rotate a multipolygon without changing the inner spatial relation |
|python|polygon|geopandas|multipolygons| |
I've been working on an application for a couple of years and received a simple design request: round the corners on a UIView and add a drop shadow, as shown below.
I want a custom `UIView`: just a blank white view with rounded corners and a light drop shadow (with no lighting effect). I can do each of those one by one, but the usual `clipToBounds`/`maskToBounds` conflicts occur.
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/zp17N.png |
null |
The [`Serializer`](https://github.com/symfony/serializer/blob/7.0/Serializer.php) class is not final, so you can extend it and override the `denormalize()` method:
``` lang-php
require('vendor/autoload.php');
use Symfony\Component\Serializer\Serializer;
use Symfony\Component\Serializer\Encoder\JsonEncoder;
use Symfony\Component\Serializer\Exception\RuntimeException;
use Symfony\Component\Serializer\Normalizer\ObjectNormalizer;
class CheckingSerializer extends Serializer
{
    public function denormalize(mixed $data, string $type, ?string $format = null, array $context = []): mixed
    {
        $obj = parent::denormalize($data, $type, $format, $context);
        $ref = new \ReflectionObject($obj);

        foreach($ref->getProperties() as $property)
        {
            if(!$property->isInitialized($obj))
                throw new RuntimeException(sprintf('Missing attribute when denormalizing to %s type: %s', $type, $property->getName()));
        }

        return $obj;
    }
}

class SimpleDto
{
    public string $name;
    public int $value;
}

$json = '{"name":"Jane"}';
$serializer = new CheckingSerializer([new ObjectNormalizer()], [new JsonEncoder()]);

try
{
    $dto = $serializer->deserialize($json, SimpleDto::class, 'json');
}
catch(\Exception $ex)
{
    echo $ex->getMessage();
}
```
Output:
``` lang-none
Missing attribute when denormalizing to SimpleDto type: value
```
|
Django dynamic pages. Why is my code not working? |
|django|django-models|django-views| |
null |
A property is not an instance attribute, it is a [descriptor][1] attribute of the class which is bound to the object which accessed it. Here is a simplified example:
```
class A:
    @property
    def x(self):
        return True

print(A.x)    # <property object at 0x0000021AE2944548>
print(A().x)  # True
```
When getting `A.x`, you obtain the unbound property. When getting `A().x`, the property is bound to an instance and the `A.x.__get__` returns a value.
## Back to your code
What this means is that a property *must be a class attribute*, not an instance attribute. Or, in your case, the property must be a *metaclass* attribute.
```
class Meta(type):
    @property
    def abstract(cls):
        return getattr(cls, '_abstract', False)

    @abstract.setter
    def abstract(cls, value):
        if value in (True, False):
            setattr(cls, '_abstract', value)
        else:
            raise TypeError(f'invalid abstract assignment {value}')


class Foo(metaclass=Meta):
    pass


print(Foo.abstract)          # False
Foo.abstract = 'Not a bool'  # TypeError: invalid abstract assignment Not a bool
```
Although, this only partially solves your issue as you want every method of you classes to have an `abstract` property. To do so, you will need *their class* to have that `property`.
Before we go any deeper, let me recall you one key concept in Python: *we're all consenting adults here*. Maybe you should just trust your users not to set `abstract` to anything else than a `bool` and let it be a non-property attribute. In particular, any user can change the `_abstract` attribute itself anyway.
If that does not suit you and you want abstract to be a `property`, then one way to do that is to define a wrapper class for methods that hold that `property`.
```
class Abstract:
    @property
    def abstract(cls):
        return getattr(cls, '_abstract', False)

    @abstract.setter
    def abstract(cls, value):
        if value in (True, False):
            setattr(cls, '_abstract', value)
        else:
            raise TypeError(f'invalid abstract assignment {value}')


class AbstractCallable(Abstract):
    def __init__(self, f):
        self.callable = f

    def __call__(self, *args, **kwargs):
        return self.callable(*args, **kwargs)


class Meta(type, Abstract):
    def __new__(mcls, name, bases, dct, **kwargs):
        for name, value in dct.items():
            if callable(value):
                dct[name] = AbstractCallable(value)
        return super().__new__(mcls, name, bases, dct)


class Foo(metaclass=Meta):
    def bar(self):
        pass


print(Foo.abstract)       # False
print(Foo.bar.abstract)   # False
Foo.bar.abstract = 'baz'  # TypeError: invalid abstract assignment baz
```
[1]: https://docs.python.org/3.6/howto/descriptor.html#definition-and-introduction |
In the Android app "Lua Interpreter" I want to extract wiki text on demand.
I'm going to use the additional system command "curl".
curl -s "https://uk.wikipedia.org/w/api.php?action=query&format=json&prop=extracts&exintro&explaintext&titles=%D0%91%D0%B0%D1%80%D0%B0%D1%85%D1%82%D0%B8" > "/storage/emulated/0/iGO_Pal/save/wiki_data.lua"
It works in "Termux" for Android successfully. Only one thing: the target file must exist!
Request
https://uk.wikipedia.org/w/api.php?action=query&format=json&prop=extracts&exintro&explaintext&titles=%D0%91%D0%B0%D1%80%D0%B0%D1%85%D1%82%D0%B8
successful in the browser.
If I try to use this request
os.execute('curl -s "https://uk.wikipedia.org/w/api.php?action=query&format=json&prop=extracts&exintro&explaintext&titles=%D0%91%D0%B0%D1%80%D0%B0%D1%85%D1%82%D0%B8" > "/storage/emulated/0/iGO_Pal/save/wiki_data.lua"')
in the Lua interpreter - nothing happens.
The file is just empty.
All necessary permissions are in the Lua interpreter!
What could be the problem? |
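As a sanity check of the encoding (shown in Python for convenience, even though the script itself is Lua): the title must percent-encode with no stray spaces inside the %-sequences, and a quick way to generate the correct encoding is:

```python
from urllib.parse import quote

# Percent-encode the Cyrillic page title exactly as the API URL expects.
title = "Барахти"
encoded = quote(title)
print(encoded)  # %D0%91%D0%B0%D1%80%D0%B0%D1%85%D1%82%D0%B8
```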
One alternative is doing the following using `ifelse`:
> data.frame(
+ 't' = c(1, 2, 3, 4),
+ 'v' = c(2, 3, 4, 3)
+ ) |>
+ mutate(status = ifelse(t < v, "Increasing", "Decreasing"))
t v status
1 1 2 Increasing
2 2 3 Increasing
3 3 4 Increasing
4 4 3 Decreasing
Other option using `case_when`:
> data.frame(
+ 't' = c(1, 2, 3, 4),
+ 'v' = c(2, 3, 4, 3)
+ ) |>
+ mutate(status = case_when(t < v ~ "Increasing",
+ TRUE ~ "Decreasing"))
t v status
1 1 2 Increasing
2 2 3 Increasing
3 3 4 Increasing
4 4 3 Decreasing
|
Making a column nullable **is** making it optional.
In PostgreSQL, a NULL value uses no extra storage space. |
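To illustrate the "nullable is optional" point (using SQLite via Python's standard library rather than PostgreSQL itself — the optionality behaves the same way): an insert may simply omit the nullable column.

```python
import sqlite3

# A nullable column is an optional one: inserts may omit it, and the
# stored value is NULL. (SQLite used here purely for illustration.)
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER NOT NULL, note TEXT)")  # note is nullable
con.execute("INSERT INTO t (id) VALUES (1)")             # note omitted -> NULL
con.execute("INSERT INTO t (id, note) VALUES (2, 'hi')")
print(con.execute("SELECT id, note FROM t ORDER BY id").fetchall())
# [(1, None), (2, 'hi')]
```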
I am in the process of importing some data where the descriptions are essentially the same but may have some small variations. I started by adding SUBSTITUTE and REGEXREPLACE to deal with these. The formula is getting a little long, so I wanted to know if I could replace the specific SUBSTITUTE and REGEXREPLACE calls in the formula bar with a reference to a range which has all the replacements in it.
For example
```
Received Text:
`Apple red
Apple green
Apple yellow`
Output required:
`Apple`
```
If it was the same pattern, e.g. apple and then colour, and all I wanted was apple, it would be easy enough, but as I get more data, new patterns emerge that I need to deal with, so each time I need to add to the formula.
The second Example with 2 patterns
```
Received Text
Apple red
Apple green
car-001-aa
bike-001-aa
Output required:
Apple
Car
bike
```
so now I need to deal with removing colour if it starts with apple or if the string contains -001-aa, remove the -001-aa
here is the formula I have at the moment:
```
=ARRAYFORMULA(if(F2:F ="",,if(J2:J&" "&G2:G=$M$2,$N$2,Trim ( REGEXREPLACE (SUBSTITUTE (SUBSTITUTE(REGEXREPLACE(lower(F2:F)," on [0-9]{2} [a-zA-Z]{3}",),"clp",""),"bcc",""), "monzo-[A-Za-z]{5} ft", " savings pot")))))
```
as you can see I am currently dealing with 4 different patterns so what I wanted to know is if could I replace:
```
( REGEXREPLACE (SUBSTITUTE (SUBSTITUTE(REGEXREPLACE(lower(F2:F)," on [0-9]{2} [a-zA-Z]{3}",),"clp",""),"bcc",""), "monzo-[A-Za-z]{5} ft", " savings pot"))
```
with a reference to a range which would contain the patterns and function to use e.g.
```
Cell A1: REGEXREPLACE(" on [0-9]{2} [a-zA-Z]{3}",)
Cell A2: SUBSTITUTE("clp","")
Cell A3: SUBSTITUTE("bcc","")
Cell A4: REGEXREPLACE("monzo-[A-Za-z]{5} ft", " savings pot")
Cell A5: new pattern
```
and so on and so on.
so if I used the Apple example, I would have a separate range which would include all the patterns to test for
e.g.
1. Replace (Apple).*, Apple"
2. Remove string = -001-aa
Later on if I see a new pattern I could just add it to the list of patterns and the formula would include the new pattern
so in my main sheet, I would have a formula that be something like B1 = patterns(A1)
Thanks in advance for any help or pointers.
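The table-driven idea can be illustrated outside Sheets as well: keep the rules as (pattern, replacement) pairs in one place and apply them in order, so a new pattern is just a new row. A Python sketch with hypothetical rules mirroring the examples above:

```python
import re

# Table-driven cleanup: each rule is a (regex pattern, replacement) pair,
# applied in order. Adding a new pattern means adding one row to RULES.
RULES = [
    (r"^(apple)\s+\w+$", r"\1"),   # "apple <colour>" -> "apple"
    (r"-001-aa", ""),              # strip the "-001-aa" suffix
]

def clean(text):
    out = text.lower()  # mirrors the lower(F2:F) in the sheet formula
    for pattern, repl in RULES:
        out = re.sub(pattern, repl, out)
    return out.strip()

print([clean(s) for s in ["Apple red", "car-001-aa", "bike-001-aa"]])
# ['apple', 'car', 'bike']
```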
|
I'm trying to record a video from the Camera using `AVAssetWriter`.
The video recording works perfectly fine, but I cannot figure out how to add GPS location EXIF tags to the video files (mp4/mov).
I tried this:
```swift
func initMetadataWriter() throws {
    let locationSpec = [
        kCMMetadataFormatDescriptionMetadataSpecificationKey_Identifier: AVMetadataIdentifier.quickTimeMetadataLocationISO6709,
        kCMMetadataFormatDescriptionMetadataSpecificationKey_DataType: kCMMetadataDataType_QuickTimeMetadataLocation_ISO6709,
    ] as [String: Any]
    let metadataSpecifications: NSArray = [locationSpec]
    var metadataFormatDescription: CMFormatDescription?
    CMMetadataFormatDescriptionCreateWithMetadataSpecifications(allocator: kCFAllocatorDefault,
                                                                metadataType: kCMMetadataFormatType_Boxed,
                                                                metadataSpecifications: metadataSpecifications,
                                                                formatDescriptionOut: &metadataFormatDescription)
    let metadataInput = AVAssetWriterInput(mediaType: .metadata, outputSettings: nil, sourceFormatHint: metadataFormatDescription)
    guard assetWriter.canAdd(metadataInput) else {
        throw CameraError.location(.cannotWriteLocationToVideo)
    }
    assetWriter.add(metadataInput)
    metadataWriter = AVAssetWriterInputMetadataAdaptor(assetWriterInput: metadataInput)
}

private func createLocationMetadataItem(location: CLLocation) -> AVMetadataItem {
    let metadataItem = AVMutableMetadataItem()
    metadataItem.key = AVMetadataKey.commonKeyLocation as (NSCopying & NSObjectProtocol)?
    metadataItem.keySpace = AVMetadataKeySpace.common
    metadataItem.value = String(format: "%+.6f%+.6f/", location.coordinate.latitude, location.coordinate.longitude) as (NSCopying & NSObjectProtocol)?
    metadataItem.identifier = AVMetadataIdentifier.quickTimeMetadataLocationISO6709
    return metadataItem
}

/**
 Writes a Location tag to the video
 */
func writeLocationTag(location: CLLocation) throws {
    guard let metadataWriter else {
        throw CameraError.location(.cannotWriteLocationToVideo)
    }
    let metadataItem = createLocationMetadataItem(location: location)
    let metadataGroup = AVTimedMetadataGroup(items: [metadataItem],
                                             timeRange: CMTimeRange(start: CMTime.zero, end: CMTime.positiveInfinity))
    metadataWriter.append(metadataGroup)
}
```
Then this will be called like this:
```swift
// 1. Initialize all AVAssetWriters and adapters
let recordingSession = RecordingSession(...)
try recordingSession.initVideoWriter()
try recordingSession.initAudioWriter()
try recordingSession.initMetadataWriter()
// 2. Start writing
recordingSession.start()
// 3. While video/audio frames are being written, also write the GPS Location metadata:
let location = locationManager.location!
try recordingSession.writeLocationTag(location: location)
```
But this crashes with the following error:
```
-[AVAssetWriterInputMetadataAdaptor appendTimedMetadataGroup:] Cannot write to file timed metadata group 0x283ebd150: No entry in format description 0x281f3ff70 for metadata item 0x283cece80 with identifier mdta/com.apple.quicktime.location.ISO6709, data type com.apple.metadata.datatype.UTF-8 and extended language tag (null). If you created this format description using -[AVTimedMetadataGroup copyFormatDescription], make sure the instance of AVTimedMetadataGroup used to create the format description contains a representative sample of metadata items which includes an item with the same combination of identifier, dataType, and extended language tag as this one
```
Has anyone successfully written GPS location tags to video files with `AVAssetWriter` before?
My AVAssetWriters support both `.mov` and `.mp4` (h264 and h265/hevc) videos.
Thanks! |
## [Go 1.22 has now introduced this functionality natively in `net/http`][1]
This can now be achieved by calling [`req.PathValue()`][2]
> PathValue returns the value for the named path wildcard in the [ServeMux][3] pattern that matched the request. It returns the empty string if the request was not matched against a pattern or there is no such wildcard in the pattern.
---
A basic example of how to use `req.PathValue()` is:
```go
package main
import (
	"fmt"
	"log"
	"net/http"
)
func main() {
mux := http.NewServeMux()
mux.HandleFunc("/provisions/{id}", func(w http.ResponseWriter, req *http.Request) {
idString := req.PathValue("id")
fmt.Printf("ID: %v", idString)
})
log.Fatal(http.ListenAndServe(":8080", mux))
}
```
[1]: https://go.dev/blog/routing-enhancements
[2]: https://pkg.go.dev/net/http#Request.PathValue
[3]: https://pkg.go.dev/net/http#ServeMux |
If you don't know the timezone, use a zone-less timestamp, which postgres supports:
```
CREATE TABLE MYTABLE (
...
LOGGED_AT TIMESTAMP WITHOUT TIME ZONE,
...
)
``` |
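For illustration, a zone-less timestamp corresponds to a "naive" datetime — one that simply carries no timezone information — as opposed to an "aware" one (Python shown as an analogy to `TIMESTAMP WITHOUT TIME ZONE`):

```python
from datetime import datetime, timezone

# A naive datetime has no tzinfo, matching zone-less timestamp semantics;
# an aware one carries an explicit zone.
naive = datetime.fromisoformat("2024-03-12 10:30:00")
aware = naive.replace(tzinfo=timezone.utc)
print(naive.tzinfo, aware.tzinfo)
```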
There are currently two articles proposals in the [Actions for You tab](https://stackoverflow.com/collectives/google-cloud?tab=dashboard) that are waiting to be reviewed by RMs.
[This one](https://stackoverflow.com/collectives/google-cloud/articles/review/78120800) is an outline for an article about security practices. It was submitted by a very new user so it would be good to make sure they are not copying this content from elsewhere on the internet. Users are allowed to republish their own content, but they would need to link to the original source and disclose their affiliation.
On [this one](https://stackoverflow.com/collectives/google-cloud/articles/review/78135043) the author did some minor revisions we suggested, but it still needs a review from someone with domain knowledge.
Is anyone able to take a look at either of these and provide some feedback to the authors? |
New article proposals to be reviewed [UPDATED March 12] |
As far as I know, and from what I have read in the official documentation, updates will only be triggered if the value is different.
> The set function that lets you update the state to a different value and trigger a re-render.
However, after my testing in version 18.2, there is a situation that triggers repeated rendering.

demo: https://playcode.io/1796380
I want to know why this happens and whether I need to be concerned about it.
I ran the code on my local device, MacOS Ventura 13.6.4 with R version 4.2.1, and had the same issues/improper joining of UUID data type. A many to many relationship occurred. If I adjust ```id1 <- UUIDgenerate(n = 999, output = "uuid")``` the join works properly. |
I have a JavaScript function which recursively calls the Spotify API in order to get data for a user's saved songs, such as track name, album, date added, etc. (the call limit is 50, so I need to repeat the call many times). It then manipulates the data into .csv format for the user to download. It is shown below (it works completely fine).
```
async function repeatGet(url, tracksArray = [["Track", "Artist", "Album", "Date-Time Added", "Spotify Link"]]) {
    tracksObject = await apiCall(url);
    for (track of tracksObject.items) {
        tracksArray.push([track.track.name, track.track.artists[0].name, track.track.album.name, track.added_at, track.track.external_urls.spotify]); // tracksTotal.push(track)
    }
    if (tracksObject.next != undefined) { await repeatGet(tracksObject.next, tracksArray) }
    return tracksArray;
}

tracksArray = await repeatGet("https://api.spotify.com/v1/me/tracks?limit=50");
console.log(tracksArray);

let csvData = "sep=\t \n";
tracksArray.forEach(row => csvData += row.join("\t") + "\n");
console.log(csvData);
```
I tried to increase the efficiency by having the called data be turned into .csv format straight away and concatenating to a string (code below):
```
async function repeatGet(url, csvData = "sep=\t \nTrack\tArtist\tAlbum\tDate-Time Added\tSpotify\tLink") {
    tracksObject = await apiCall(url);
    for (track of tracksObject.items) {
        csvData += "\n" + track.track.name + "\t" + track.track.artists[0].name + "\t" + track.track.album.name
            + "\t" + track.added_at + "\t" + track.track.external_urls.spotify;
    }
    // console.log(csvData)
    if (tracksObject.next != undefined) { await repeatGet(tracksObject.next, csvData) }
    console.log(csvData, csvData.length);
    return csvData;
}
csvData = await repeatGet("https://api.spotify.com/v1/me/tracks?limit=50");
```
For a reason unknown to me, this code only returns the .csv string from the first API call (not the total string combined with all of the subsequent ones).
I've tried bugfixing by using console.log on certain parts and the code correctly appends to the csv as each new call is passed. However, after the final call all hell breaks loose and the console instantly returns all of the logs that happened as each subsequent call happened (with the csv string growing in size), but somehow in REVERSE order. Does anyone have any ideas? Thanks. |
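The behaviour described resembles value-versus-reference semantics, which many languages share; the same pattern can be reproduced in Python, where rebinding a string parameter inside a recursive call never reaches the caller, while mutating a shared list does (an illustration of the general idea, not Spotify-specific):

```python
# Strings: `csv += ...` rebinds the LOCAL name only, so additions made
# inside deeper recursive calls never reach the caller's copy.
def build_str(n, csv=""):
    csv += f"row{n}\n"
    if n > 1:
        build_str(n - 1, csv)  # return value discarded; caller's csv unchanged
    return csv

# Lists: append() mutates the ONE shared object, so every recursion level
# sees and contributes to the same data.
def build_list(n, rows=None):
    rows = [] if rows is None else rows
    rows.append(f"row{n}")
    if n > 1:
        build_list(n - 1, rows)
    return rows

print(build_str(3))   # only "row3\n" survives at the top level
print(build_list(3))  # ['row3', 'row2', 'row1']
```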
In the Android app "Lua Interpreter" I want to extract wiki text on demand |
|curl|lua| |
I created an iframe component with a ref and wanted to write a unit test for it, so I created a mock iframe so that the component could be run in the unit test. Unfortunately, the unit test failed. What is wrong with the following code? I'm a beginner with Jest.
So how do you make the unit test succeed?
***component***
```
import { forwardRef, useRef, useImperativeHandle } from 'react';

const Iframe = forwardRef(function Iframe(props, ref) {
    const iframeRef = useRef(null);

    useImperativeHandle(ref, () => {
        return {
            postMessage() {
                iframeRef.current.contentWindow.postMessage(JSON.stringify({data: 'hello !'}), '*')
            },
        };
    }, []);

    return <iframe {...props} ref={iframeRef} />;
});

export default Iframe;
```
***mock***
```
// __mock__/iframe.js
import { useRef } from 'react';
import Iframe from './Iframe';

export default function App() {
    const ref = useRef(null);

    function handleClick() {
        ref.current.postMessage({data: 'hello !'});
    }

    return (
        <div>
            <Iframe src={'https://www.google.com/'} ref={ref} />
            <button onClick={handleClick}>
                Send Message
            </button>
        </div>
    );
}
```
***jest***
```
import * as React from 'react'
import { render, fireEvent, screen } from '@testing-library/react'
import Iframe from '../__mock__/iframe'

describe('iframe', () => {
    it('call postMessage', () => {
        render(<Iframe />)
        const handleClick = jest.fn()
        const button = screen.getByText(/Send Message/i)
        fireEvent.click(button)
        expect(handleClick).toHaveBeenCalled()
    })
})
```
***test failed***
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/uXnM2.png |
Jest click event not firing a ref |
|jestjs|ts-jest|babel-jest| |
I was playing with computed columns and I noticed some weird behaviour with persisted computed columns.
The query:
```
DROP TABLE IF EXISTS #tmp

CREATE TABLE #tmp
(
    ExternalId UNIQUEIDENTIFIER NULL,
    UniqueId UNIQUEIDENTIFIER NOT NULL DEFAULT(NEWID()),
    --Id AS ISNULL(ExternalId, UniqueId),
    PersistedId AS ISNULL(ExternalId, UniqueId) PERSISTED
)

INSERT INTO #tmp (ExternalId)
VALUES (null), (NEWID())

SELECT * FROM #tmp

UPDATE #tmp
SET externalid = CASE WHEN ExternalId IS NULL THEN newid() ELSE null END

SELECT * FROM #tmp
```
If you run this, you get the following output:
**Just Persisted**
Select 1:
| ExternalId | UniqueId | PersistedId |
| ---------- | -------- | ----------- |
| NULL | fb544e9e-7d9b-47ec-b2ca-45484e02e343 | fb544e9e-7d9b-47ec-b2ca-45484e02e343 |
| c6b3b82a-db68-46e4-8cfe-a4a0eedad2cf | 56d0091f-0f08-49f3-a020-f73fd2074d7a | 3fc603ce-dcac-449e-8f0a-203cfda0a634 |
Select 2:
| ExternalId | UniqueId | PersistedId |
| ---------- | -------- | ----------- |
| 0ecdff52-a59c-421f-bf87-8938488b83ea | fb544e9e-7d9b-47ec-b2ca-45484e02e343 | 0ecdff52-a59c-421f-bf87-8938488b83ea |
| NULL | 56d0091f-0f08-49f3-a020-f73fd2074d7a | 56d0091f-0f08-49f3-a020-f73fd2074d7a |
You can see that the persisted column for select 1 has a different guid to that of the external id.
Now if I uncomment the `Id` line from the `CREATE TABLE` statement and run it again I get the following:
**Both**
Select 1:
| ExternalId | UniqueId | Id | PersistedId |
| ---------- | -------- | -- | ----------- |
| NULL | 1275aff9-0c59-4406-8bd5-ae694d228a6d | 1275aff9-0c59-4406-8bd5-ae694d228a6d | 1275aff9-0c59-4406-8bd5-ae694d228a6d |
| 4b7ac3d8-ad3e-4e94-b8df-c464b99e630c | e7980647-fe4f-45a2-9d41-53da0b8d780f | 4b7ac3d8-ad3e-4e94-b8df-c464b99e630c | 4b7ac3d8-ad3e-4e94-b8df-c464b99e630c |
Select 2:
| ExternalId | UniqueId | Id | PersistedId |
| ---------- | -------- | -- | ----------- |
| d606ea7b-f17b-48d8-8581-82c8736bf61f | 1275aff9-0c59-4406-8bd5-ae694d228a6d | d606ea7b-f17b-48d8-8581-82c8736bf61f | d606ea7b-f17b-48d8-8581-82c8736bf61f |
| NULL | e7980647-fe4f-45a2-9d41-53da0b8d780f | e7980647-fe4f-45a2-9d41-53da0b8d780f | e7980647-fe4f-45a2-9d41-53da0b8d780f |
As you can see, now that the Id is also getting computed, the PersistedId gets the correct id. I've also tried, in the "Just Persisted" case, inserting a static guid, and it looks fine, so I'm assuming the persisted column is calling newid() again from the insert statement. Does anyone know a workaround for this?
I can't imagine the ExternalId in the main project _(that this example code is for)_ would have a newid() used in an insert for a new record, but I can't say it'll never happen.
**UPDATE**
This does seem to be a bug and not a PICNIC problem, so I've opened a Microsoft Bug: https://feedback.azure.com/d365community/idea/4611e2d2-1fd3-ee11-92bc-6045bd7aea25
|