Dataset columns: qid (int64, 1 to 74.7M), question (string, 10 to 43.1k chars), date (string, 10 chars), metadata (sequence), response_j (string, 0 to 33.7k chars), response_k (string, 0 to 40.5k chars)
62,675,437
I have the Free Orchestrator version of UiPath, and I want to know if there is a way to start a job in Orchestrator while another job is still running. For example, at 08:00 P.M. I have a second job scheduled that needs to run. Is it possible to stop the first job, run the second one, and when the second job is finished, start the first job again? Thanks
2020/07/01
[ "https://Stackoverflow.com/questions/62675437", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13846695/" ]
Try `console.log(req.file)` instead of `req.files`, since you're using `uploads.single('file')`. The upload is stored on `files` (with an "s") only when you're handling more than one file; for a single upload it's on `file`. I think that might be what's happening here.
OK, I found the issue. After removing the headers (except the auth header) it works! `Content-Type` can't be set manually, because multipart requests use a boundary, and the server side looks for the auto-generated one. Looks like it's a Vaadin-upload bug.
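The auto-generated boundary point can be illustrated with a small Python sketch (the boundary value and field names here are illustrative, not from the original post): the boundary announced in the `Content-Type` header must be the same one that delimits the body, which is why HTTP clients generate both together and a manually hard-coded header breaks the match.

```python
# in practice clients generate a fresh boundary per request, e.g. uuid4().hex
boundary = "b7a9c3e1f0d24a5e"
header = f"multipart/form-data; boundary={boundary}"
body = (
    f"--{boundary}\r\n"
    'Content-Disposition: form-data; name="file"; filename="a.txt"\r\n'
    "\r\n"
    "hello\r\n"
    f"--{boundary}--\r\n"
)

# a manually hard-coded Content-Type carries a different boundary
manual_header = "multipart/form-data; boundary=deadbeef"

print(header.split("boundary=")[1] in body)         # True: generated pair matches
print(manual_header.split("boundary=")[1] in body)  # False: server can't find the parts
```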
62,675,437
I have the Free Orchestrator version of UiPath, and I want to know if there is a way to start a job in Orchestrator while another job is still running. For example, at 08:00 P.M. I have a second job scheduled that needs to run. Is it possible to stop the first job, run the second one, and when the second job is finished, start the first job again? Thanks
2020/07/01
[ "https://Stackoverflow.com/questions/62675437", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13846695/" ]
Try ``` router.use(express.json()); // required if you are sending data as JSON router.use(express.urlencoded({ extended: false })); ```
OK, I found the issue. After removing the headers (except the auth header) it works! `Content-Type` can't be set manually, because multipart requests use a boundary, and the server side looks for the auto-generated one. Looks like it's a Vaadin-upload bug.
62,675,447
``` a='Kar shapath!' b='Agneepath!' print('''Tu na thakega kabhi, \nTu na thamega kabhi, \nEk patra chhah bhi \n'''+a*3 ,b*3) ``` How do I make `b` show on the next line after `a`?
2020/07/01
[ "https://Stackoverflow.com/questions/62675447", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13846726/" ]
You can do this in two ways: **1. Using a Python f-string** ``` print(f"Tu na thakega kabhi, \nTu na thamega kabhi, \nEk patra chhah bhi \n{a*3} \n{b*3}") ``` **2. Using `.format()`** ``` print("Tu na thakega kabhi, \nTu na thamega kabhi, \nEk patra chhah bhi \n{} \n{}".format(a*3, b*3)) ``` **The output will be the same for both solutions:** ``` Tu na thakega kabhi, Tu na thamega kabhi, Ek patra chhah bhi Kar shapath!Kar shapath!Kar shapath! Agneepath!Agneepath!Agneepath! ```
You need to add `sep="\n"` to the `print` call. The `sep=` parameter defines the separator placed between the values passed to `print`, in this case `'''Tu na thakega kabhi, \nTu na thamega kabhi, \nEk patra chhah bhi \n'''+a*3` and `b*3`. ``` print('''Tu na thakega kabhi, \nTu na thamega kabhi, \nEk patra chhah bhi \n'''+a*3 ,b*3, sep="\n") ``` Output ``` Tu na thakega kabhi, Tu na thamega kabhi, Ek patra chhah bhi Kar shapath!Kar shapath!Kar shapath! Agneepath!Agneepath!Agneepath! ```
62,675,447
``` a='Kar shapath!' b='Agneepath!' print('''Tu na thakega kabhi, \nTu na thamega kabhi, \nEk patra chhah bhi \n'''+a*3 ,b*3) ``` How do I make `b` show on the next line after `a`?
2020/07/01
[ "https://Stackoverflow.com/questions/62675447", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13846726/" ]
You can do this in two ways: **1. Using a Python f-string** ``` print(f"Tu na thakega kabhi, \nTu na thamega kabhi, \nEk patra chhah bhi \n{a*3} \n{b*3}") ``` **2. Using `.format()`** ``` print("Tu na thakega kabhi, \nTu na thamega kabhi, \nEk patra chhah bhi \n{} \n{}".format(a*3, b*3)) ``` **The output will be the same for both solutions:** ``` Tu na thakega kabhi, Tu na thamega kabhi, Ek patra chhah bhi Kar shapath!Kar shapath!Kar shapath! Agneepath!Agneepath!Agneepath! ```
You can either use the `.format()` function, which is more robust: ``` a='Kar shapath!' b='Agneepath!' print('''Tu na thakega kabhi, \nTu na thamega kabhi, \nEk patra chhah bhi \n{} \n{}'''.format(a*3 ,b*3)) ``` or use string concatenation: ``` print('''Tu na thakega kabhi, \nTu na thamega kabhi, \nEk patra chhah bhi \n'''+a*3 ,'\n' + b*3) ```
62,675,448
I have a problem. Up until now I have been using jspdf with autotable to create simple reports: one row of headings, one body of data, and it works perfectly well. I am using Angular 8. However, I now have a report where this layout is required: ``` Name: Steve Age: 38 otherTitle: xxx otherTitle: xxx otherTitle: xxx --NEWLINE-- otherTitle: xxx otherTitle: xxx otherTitle: xxx otherTitle: xxx ``` essentially two lines of headers with the data to the right of them. The layout on the webpage is easy and works fine, but I'm not sure how to export it to PDF. Any suggestions?
2020/07/01
[ "https://Stackoverflow.com/questions/62675448", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13002679/" ]
You can do this in two ways: **1. Using a Python f-string** ``` print(f"Tu na thakega kabhi, \nTu na thamega kabhi, \nEk patra chhah bhi \n{a*3} \n{b*3}") ``` **2. Using `.format()`** ``` print("Tu na thakega kabhi, \nTu na thamega kabhi, \nEk patra chhah bhi \n{} \n{}".format(a*3, b*3)) ``` **The output will be the same for both solutions:** ``` Tu na thakega kabhi, Tu na thamega kabhi, Ek patra chhah bhi Kar shapath!Kar shapath!Kar shapath! Agneepath!Agneepath!Agneepath! ```
You need to add `sep="\n"` to the `print` call. The `sep=` parameter defines the separator placed between the values passed to `print`, in this case `'''Tu na thakega kabhi, \nTu na thamega kabhi, \nEk patra chhah bhi \n'''+a*3` and `b*3`. ``` print('''Tu na thakega kabhi, \nTu na thamega kabhi, \nEk patra chhah bhi \n'''+a*3 ,b*3, sep="\n") ``` Output ``` Tu na thakega kabhi, Tu na thamega kabhi, Ek patra chhah bhi Kar shapath!Kar shapath!Kar shapath! Agneepath!Agneepath!Agneepath! ```
62,675,448
I have a problem. Up until now I have been using jspdf with autotable to create simple reports: one row of headings, one body of data, and it works perfectly well. I am using Angular 8. However, I now have a report where this layout is required: ``` Name: Steve Age: 38 otherTitle: xxx otherTitle: xxx otherTitle: xxx --NEWLINE-- otherTitle: xxx otherTitle: xxx otherTitle: xxx otherTitle: xxx ``` essentially two lines of headers with the data to the right of them. The layout on the webpage is easy and works fine, but I'm not sure how to export it to PDF. Any suggestions?
2020/07/01
[ "https://Stackoverflow.com/questions/62675448", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13002679/" ]
You can do this in two ways: **1. Using a Python f-string** ``` print(f"Tu na thakega kabhi, \nTu na thamega kabhi, \nEk patra chhah bhi \n{a*3} \n{b*3}") ``` **2. Using `.format()`** ``` print("Tu na thakega kabhi, \nTu na thamega kabhi, \nEk patra chhah bhi \n{} \n{}".format(a*3, b*3)) ``` **The output will be the same for both solutions:** ``` Tu na thakega kabhi, Tu na thamega kabhi, Ek patra chhah bhi Kar shapath!Kar shapath!Kar shapath! Agneepath!Agneepath!Agneepath! ```
You can either use the `.format()` function, which is more robust: ``` a='Kar shapath!' b='Agneepath!' print('''Tu na thakega kabhi, \nTu na thamega kabhi, \nEk patra chhah bhi \n{} \n{}'''.format(a*3 ,b*3)) ``` or use string concatenation: ``` print('''Tu na thakega kabhi, \nTu na thamega kabhi, \nEk patra chhah bhi \n'''+a*3 ,'\n' + b*3) ```
62,675,456
I am facing a problem while developing a Next.js application; it is rather an architectural issue. I want to switch between routes while keeping all the state on the page, so that I can return to a page without losing its state. I understand that I need to use initialProps at the top level, but this is only suitable for simple cases. Take an example where there are hundreds of pieces of state on a page, at different levels of hierarchy. Is it possible to make a snapshot of all page state? I am looking towards [memo](https://reactjs.org/docs/react-api.html#reactmemo) from React. Also, I think Redux would help me, but I don't use it in the application at all and it's one more dependency. Perhaps this can be solved using the [Context API](https://reactjs.org/docs/context.html).
2020/07/01
[ "https://Stackoverflow.com/questions/62675456", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10586534/" ]
If you have global state that you need to share between different pages and the state is **simple**, you can use the `Context API`; the `Component` needs to be wrapped by the `ContextProvider` in the custom app component in `_app.js`. The pseudo-code might look something like this for your use case, ```js // context and the provider component export const MyContext = React.createContext(); export const MyCtxProvider = ({ children }) => { // state logic return ( <MyContext.Provider value={/* state value */}> {children} </MyContext.Provider> ) } ``` in `_app.js` ```js import App from 'next/app' import { MyCtxProvider } from './path-to-myctxprovider' class MyApp extends App { render() { const { Component, pageProps } = this.props return ( <MyCtxProvider> <Component {...pageProps} /> </MyCtxProvider> ) } } export default MyApp ``` Here is an [example](https://github.com/vercel/next.js/tree/canary/examples/with-context-api) from the nextjs repo that uses the Context API to share state between pages. But if your global state is somewhat **complex**, i.e. you want to keep both server state (data fetched from a remote API) and UI state in the global state, you might want to use a state management system like `redux` in this case. There are a lot of [examples](https://github.com/vercel/next.js/tree/canary/examples) in the nextjs repo showing how to implement it. You can start from there.
As far as I understand, it will be an architectural problem. For global state management you need to use e.g. Redux, GraphQL, the `ContextAPI`, or give a global state to your app in `pages/_app.js`. That will wrap your pages and provide cross-page state (which you can modify and reuse). **Opinion**: implement `Redux` only if it's really needed (for a large amount of data in the state), because it's easier to implement it than to remove it later.
62,675,459
Given a string like this: ``` 2020-08-14 ``` How do I convert it to: ``` 14 August 2020 ``` Using Python 3?
2020/07/01
[ "https://Stackoverflow.com/questions/62675459", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3702643/" ]
You can use the `datetime` module for reformatting date strings. Using `strptime` you can read from a string into a `datetime` object, then using `strftime` you can convert back to a string with a different format. ``` >>> from datetime import datetime >>> d = datetime.strptime('2020-08-14', '%Y-%m-%d') >>> d datetime.datetime(2020, 8, 14, 0, 0) >>> d.strftime('%d %B %Y') '14 August 2020' ```
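On Python 3.7+, `date.fromisoformat` is a convenient shortcut for parsing this particular `YYYY-MM-DD` layout before reformatting (note that `%B` renders the month name according to the current locale):

```python
from datetime import date

d = date.fromisoformat('2020-08-14')  # only accepts ISO-format date strings
print(d.strftime('%d %B %Y'))  # 14 August 2020
```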
``` monthDict = { '01' : 'January', '02' : 'February', '03' : 'March', '04' : 'April', '05' : 'May', '06' : 'June', '07' : 'July', '08' : 'August', '09' : 'September', '10' : 'October', '11' : 'November', '12' : 'December' } readDate = input('Please enter the date in the yyyy-mm-dd format: ') year, month, day = readDate.split('-') print('Printing in the dd month year format: ', day, monthDict[month], year) ```
62,675,459
Given a string like this: ``` 2020-08-14 ``` How do I convert it to: ``` 14 August 2020 ``` Using Python 3?
2020/07/01
[ "https://Stackoverflow.com/questions/62675459", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3702643/" ]
An alternative approach using the pandas `to_datetime` function: ``` import pandas as pd d = pd.to_datetime('2020-08-14') d.strftime('%d %B %Y') Out[11]: '14 August 2020' ```
``` monthDict = { '01' : 'January', '02' : 'February', '03' : 'March', '04' : 'April', '05' : 'May', '06' : 'June', '07' : 'July', '08' : 'August', '09' : 'September', '10' : 'October', '11' : 'November', '12' : 'December' } readDate = input('Please enter the date in the yyyy-mm-dd format: ') year, month, day = readDate.split('-') print('Printing in the dd month year format: ', day, monthDict[month], year) ```
62,675,477
This is more of a theoretical question. I've seen a couple of examples with `std::condition_variable`, and it seems that it puts threads to sleep until a condition is satisfied. It is a kind of flag for races, and that makes sense. But it seems that if the variable is constantly changing, a thread may be woken up to check the condition and fall asleep again, because it never manages to catch the moment when the predicate is true, and so I may put the thread into a "coma". So, is `std::condition_variable` unsafe in a situation where I am always changing it?
2020/07/01
[ "https://Stackoverflow.com/questions/62675477", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8741818/" ]
You appear to be talking about some kind of 'livelock'. A deadlock is a situation where one or more threads is making no progress waiting on the release of a lock. Livelock is similar, but the state of the process is constantly changing while still none is progressing. We can also talk about being 'practically livelocked', where a thread gets insufficient opportunity to make adequate progress. To an observer, livelock and practical livelock often look very like genuine deadlock. You should design your program logic to avoid livelock. It can be non-trivial. For example, many forms of lock aren't 'fair'; that is, when a number of threads are waiting on a lock that is released, the thread that requested it first isn't guaranteed to receive it next. In fact many operating system locks are intrinsically unfair, for example giving locks to the threads that are easier to wake up (e.g. loaded to a core and suspended) rather than harder (unloaded from a core and requiring reloading to resume execution). The question doesn't give many details, so it's difficult to diagnose the situation. If some particular thread (or class of thread) requires priority, you might introduce a flag that tells low-priority threads not to acquire the lock while a priority thread is waiting (and can run), such as `(c && !(p && priority_waiting))` where `c` is the logical condition for the low-priority thread and `p` is the logical condition for the priority thread. You should certainly avoid logic where a thread waits on a potentially transient condition. Suppose you have some monitor thread that produces output every 1000 cycles. A waiting condition like `(cycles%1000 == 0)` could easily miss the counter clicking over. It should more likely be something like `(cycles-lcycles >= 1000)` where `lcycles` is the count of cycles the last time the monitor resumed processing. That ensures the monitor makes progress rather than waiting on an exact moment it might (for practical purposes) almost never catch.
In that example the thread is waiting for both (a) an opportunity to acquire the lock and (b) some transient condition. There's a risk that it's rare for both to occur at once, and that thread may be livelocked or practically livelocked and making insufficient progress. In short, make sure threads resume when a condition has passed, not when a condition is exactly so. You can introduce a strict queue to give threads turns. Just don't assume that is what you have unless the documentation makes clear promises about fairness.
This is why a `condition_variable` is always used in combination with a lock (a `mutex`). Both the producer and the consumer logic is performed while holding the *same* lock, so that any shared data (e.g. a queue, some condition flag, etc.) can be accessed by only one thread at a time. Notifying a `condition_variable` happens while releasing the lock, and the waiting thread acquires the lock while waking up. So in effect, the lock is *transferred* from the producer to the consumer thread, and it is impossible to set the condition and unset it before a consumer had a chance to see it, *iff* at least one other thread is already waiting on it. In other words, [`condition_variable::notify_one`](https://en.cppreference.com/w/cpp/thread/condition_variable/notify_one) is *guaranteed* to wake up at least one waiting thread ([`condition_variable::wait`](https://en.cppreference.com/w/cpp/thread/condition_variable/wait)), if there is any. Of course, the producer should release the lock before or after invoking `notify_one` so as to let the waiting thread proceed.
62,675,477
This is more of a theoretical question. I've seen a couple of examples with `std::condition_variable`, and it seems that it puts threads to sleep until a condition is satisfied. It is a kind of flag for races, and that makes sense. But it seems that if the variable is constantly changing, a thread may be woken up to check the condition and fall asleep again, because it never manages to catch the moment when the predicate is true, and so I may put the thread into a "coma". So, is `std::condition_variable` unsafe in a situation where I am always changing it?
2020/07/01
[ "https://Stackoverflow.com/questions/62675477", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8741818/" ]
This is why you also have to use a mutex whenever you use a condition variable. I think you are missing the fact that the thread starts waiting and locks the mutex *at the same time*. The thread which changes the condition must also lock the mutex when it changes the condition. Thread 1: ``` Lock mutex Check condition (it's false) Unlock mutex and start waiting (these happen at the exact same time) Finish waiting and lock mutex Check condition (it's true) Unlock mutex ``` Thread 2: ``` Lock mutex Change condition Unlock mutex Notify condition variable ``` There is no possibility that thread 1 will wait when the condition is already true. Because the condition can't change when the mutex is locked, and the mutex doesn't get unlocked until thread 1 is already waiting. It's possible that after thread 2 releases the mutex, but before thread 2 notifies it, thread 1 might wake up spuriously, see that the condition is true, do some other stuff, and start waiting again. Thread 2 would then notify the condition variable, and thread 1 would see it as a spurious wakeup.
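The same handshake exists in Python's `threading.Condition`, which can serve as a compact, runnable sketch of the two-thread sequence above (the flag and variable names are illustrative):

```python
import threading

cond = threading.Condition()   # owns an internal mutex
ready = False                  # the shared condition, only touched under the lock
seen = []

def waiter():
    with cond:                          # lock mutex
        cond.wait_for(lambda: ready)    # atomically unlocks while sleeping,
                                        # re-locks before re-checking the predicate
        seen.append(ready)              # condition is guaranteed true here

t = threading.Thread(target=waiter)
t.start()

with cond:         # lock mutex
    ready = True   # change the condition under the same lock
    cond.notify()  # wake the waiting thread

t.join()
print(seen)  # [True]
```

Because `wait_for` re-checks the predicate after every wakeup, a spurious wakeup simply loops back to sleep, mirroring the behaviour described above.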
This is why a `condition_variable` is always used in combination with a lock (a `mutex`). Both the producer and the consumer logic is performed while holding the *same* lock, so that any shared data (e.g. a queue, some condition flag, etc.) can be accessed by only one thread at a time. Notifying a `condition_variable` happens while releasing the lock, and the waiting thread acquires the lock while waking up. So in effect, the lock is *transferred* from the producer to the consumer thread, and it is impossible to set the condition and unset it before a consumer had a chance to see it, *iff* at least one other thread is already waiting on it. In other words, [`condition_variable::notify_one`](https://en.cppreference.com/w/cpp/thread/condition_variable/notify_one) is *guaranteed* to wake up at least one waiting thread ([`condition_variable::wait`](https://en.cppreference.com/w/cpp/thread/condition_variable/wait)), if there is any. Of course, the producer should release the lock before or after invoking `notify_one` so as to let the waiting thread proceed.
62,675,483
I am trying to convert my data frame, in R, so that the columns are unique lat values and the rows are unique lon values, with the cells being the distance. My original data frame is similar to this: ``` df <- data.frame( lat =c(0,0,0,25,25,25,30,30,30), lon =c(1,5,10,1,5,10,1,5,10), distance = c(20, 22, 25, 10, 12, 15, 5, 7, 9)) df ``` but I want to convert it into a form like this (although column names are not necessary) ``` final_df <- data.frame(lat0 = c(20,22,25), lat25 = c(10,12,15), lat30= c(5,7,9)) final_df ```
2020/07/01
[ "https://Stackoverflow.com/questions/62675483", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13573239/" ]
You appear to be talking about some kind of 'livelock'. A deadlock is a situation where one or more threads is making no progress waiting on the release of a lock. Livelock is similar, but the state of the process is constantly changing while still none is progressing. We can also talk about being 'practically livelocked', where a thread gets insufficient opportunity to make adequate progress. To an observer, livelock and practical livelock often look very like genuine deadlock. You should design your program logic to avoid livelock. It can be non-trivial. For example, many forms of lock aren't 'fair'; that is, when a number of threads are waiting on a lock that is released, the thread that requested it first isn't guaranteed to receive it next. In fact many operating system locks are intrinsically unfair, for example giving locks to the threads that are easier to wake up (e.g. loaded to a core and suspended) rather than harder (unloaded from a core and requiring reloading to resume execution). The question doesn't give many details, so it's difficult to diagnose the situation. If some particular thread (or class of thread) requires priority, you might introduce a flag that tells low-priority threads not to acquire the lock while a priority thread is waiting (and can run), such as `(c && !(p && priority_waiting))` where `c` is the logical condition for the low-priority thread and `p` is the logical condition for the priority thread. You should certainly avoid logic where a thread waits on a potentially transient condition. Suppose you have some monitor thread that produces output every 1000 cycles. A waiting condition like `(cycles%1000 == 0)` could easily miss the counter clicking over. It should more likely be something like `(cycles-lcycles >= 1000)` where `lcycles` is the count of cycles the last time the monitor resumed processing. That ensures the monitor makes progress rather than waiting on an exact moment it might (for practical purposes) almost never catch.
In that example the thread is waiting for both (a) an opportunity to acquire the lock and (b) some transient condition. There's a risk that it's rare for both to occur at once, and that thread may be livelocked or practically livelocked and making insufficient progress. In short, make sure threads resume when a condition has passed, not when a condition is exactly so. You can introduce a strict queue to give threads turns. Just don't assume that is what you have unless the documentation makes clear promises about fairness.
This is why a `condition_variable` is always used in combination with a lock (a `mutex`). Both the producer and the consumer logic is performed while holding the *same* lock, so that any shared data (e.g. a queue, some condition flag, etc.) can be accessed by only one thread at a time. Notifying a `condition_variable` happens while releasing the lock, and the waiting thread acquires the lock while waking up. So in effect, the lock is *transferred* from the producer to the consumer thread, and it is impossible to set the condition and unset it before a consumer had a chance to see it, *iff* at least one other thread is already waiting on it. In other words, [`condition_variable::notify_one`](https://en.cppreference.com/w/cpp/thread/condition_variable/notify_one) is *guaranteed* to wake up at least one waiting thread ([`condition_variable::wait`](https://en.cppreference.com/w/cpp/thread/condition_variable/wait)), if there is any. Of course, the producer should release the lock before or after invoking `notify_one` so as to let the waiting thread proceed.
62,675,483
I am trying to convert my data frame, in R, so that the columns are unique lat values and the rows are unique lon values, with the cells being the distance. My original data frame is similar to this: ``` df <- data.frame( lat =c(0,0,0,25,25,25,30,30,30), lon =c(1,5,10,1,5,10,1,5,10), distance = c(20, 22, 25, 10, 12, 15, 5, 7, 9)) df ``` but I want to convert it into a form like this (although column names are not necessary) ``` final_df <- data.frame(lat0 = c(20,22,25), lat25 = c(10,12,15), lat30= c(5,7,9)) final_df ```
2020/07/01
[ "https://Stackoverflow.com/questions/62675483", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13573239/" ]
This is why you also have to use a mutex whenever you use a condition variable. I think you are missing the fact that the thread starts waiting and locks the mutex *at the same time*. The thread which changes the condition must also lock the mutex when it changes the condition. Thread 1: ``` Lock mutex Check condition (it's false) Unlock mutex and start waiting (these happen at the exact same time) Finish waiting and lock mutex Check condition (it's true) Unlock mutex ``` Thread 2: ``` Lock mutex Change condition Unlock mutex Notify condition variable ``` There is no possibility that thread 1 will wait when the condition is already true. Because the condition can't change when the mutex is locked, and the mutex doesn't get unlocked until thread 1 is already waiting. It's possible that after thread 2 releases the mutex, but before thread 2 notifies it, thread 1 might wake up spuriously, see that the condition is true, do some other stuff, and start waiting again. Thread 2 would then notify the condition variable, and thread 1 would see it as a spurious wakeup.
This is why a `condition_variable` is always used in combination with a lock (a `mutex`). Both the producer and the consumer logic is performed while holding the *same* lock, so that any shared data (e.g. a queue, some condition flag, etc.) can be accessed by only one thread at a time. Notifying a `condition_variable` happens while releasing the lock, and the waiting thread acquires the lock while waking up. So in effect, the lock is *transferred* from the producer to the consumer thread, and it is impossible to set the condition and unset it before a consumer had a chance to see it, *iff* at least one other thread is already waiting on it. In other words, [`condition_variable::notify_one`](https://en.cppreference.com/w/cpp/thread/condition_variable/notify_one) is *guaranteed* to wake up at least one waiting thread ([`condition_variable::wait`](https://en.cppreference.com/w/cpp/thread/condition_variable/wait)), if there is any. Of course, the producer should release the lock before or after invoking `notify_one` so as to let the waiting thread proceed.
62,675,496
I am trying to assign a specific agent from my agent pool, but I don't know how to do it. Does anyone know how? I tried this, but it doesn't work: ``` - stage: Deploy pool: alm-aws-pool agent.name: deploy-05-agent1 ```
2020/07/01
[ "https://Stackoverflow.com/questions/62675496", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12338777/" ]
The pool name needs to be added to the `name` field; then you can add `demands`. You can try the following YAML: ``` stages: - stage: Deploy pool: name: AgentPoolName (e.g. alm-aws-pool) demands: - agent.name -equals AgentName (e.g. deploy-05-agent1) jobs: - job: BuildJob steps: - script: echo Building! ``` Please check if it works. Hope this helps.
Use demands <https://learn.microsoft.com/en-us/azure/devops/pipelines/process/demands?view=azure-devops&tabs=yaml> ``` - stage: Deploy pool: alm-aws-pool demands: - agent.name -equals deploy-05-agent1 ```
62,675,510
I have a `pyspark` data frame `df` which holds a large number of rows. One of the columns is lat-long, and I want to find the state name from the lat-long. I am using the code below: ``` import reverse_geocoder as rg new_df = df_new2.toPandas() list_long_lat = a["lat_long"].tolist() result = rg.search(list_long_lat) state_name=[] for each_entry in result: state_name.append(each_entry["admin2"]) state_values = pd.Series(state_name) a.insert(loc=0, column='State_name', value=state_values) ``` First of all, when converting to pandas I am getting an out-of-memory issue. Is there any way to efficiently find the state name without even converting the pyspark data frame to a pandas data frame, considering that the number of rows in the input data frame is huge (1000000 million)?
2020/07/01
[ "https://Stackoverflow.com/questions/62675510", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13126794/" ]
You can try creating a UDF: ``` import reverse_geocoder as rg import pyspark.sql.functions as f map_state = f.udf(lambda x: rg.search(x)[0]['admin2']) data.withColumn('State', map_state(f.col('lat_long'))).show() ``` The only drawback here is that UDFs are not very fast, and this will hit the API multiple times.
Didn't do much pyspark, but pyspark's syntax is somewhat similar to pandas'. Maybe give the following snippet a try. ```py from pyspark.sql.functions import udf from pyspark.sql.types import StringType search_state_udf = udf(lambda x: rg.search(x), StringType()) df.withColumn("state", search_state_udf(df.lat_long)) ``` When the dataset is more than 1M records, looping over the whole dataset is often not performant; you may want to have a look at `apply` to make it efficient.
62,675,515
I have an array of IP addresses; sample array below: ``` $arr = "22.22.22.22", "33.33.33.33", "44.44.44.44" ``` I am trying to insert quotes `"` at the start and end of each IP and convert the array to a string value. I have tried: ``` $arr | ForEach-Object { $newArr += $_.Insert(0,'" ') } ``` Output: ``` $newArr " 22.22.22.22" 33.33.33.33" 44.44.44.44 ``` Desired string output: ``` "22.22.22.22" "33.33.33.33" "44.44.44.44" ``` Is this possible using PowerShell?
2020/07/01
[ "https://Stackoverflow.com/questions/62675515", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9924807/" ]
Here's one idea. First, convert the array to a string with the `" "` separator, then prepend and append the `"` character. ``` $newStr = '"' + [system.String]::Join('" "',$arr) + '"' # "22.22.22.22" "33.33.33.33" "44.44.44.44" ```
Or make use of the `-f` Format operator. Something like ``` ($arr | ForEach-Object { '"{0}"' -f $_ }) -join ' ' ``` or shorter: ``` '"{0}"' -f ($arr -join '" "') ```
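For comparison, the same quote-and-join idea written in Python (variable names here are just for illustration):

```python
ips = ["22.22.22.22", "33.33.33.33", "44.44.44.44"]

# wrap each address in double quotes, then join with single spaces
quoted = " ".join(f'"{ip}"' for ip in ips)
print(quoted)  # "22.22.22.22" "33.33.33.33" "44.44.44.44"
```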
62,675,547
I am trying to set up a multi-branch pipeline between GitHub and Jenkins. I am working on a Spring Boot project, and I have set up my local JDK and Maven within Jenkins. I am running the pipeline using a Jenkinsfile; the pipeline code is as below: ``` pipeline { agent any stages { stage('Build') { steps { echo 'Building..' } } stage('Test') { steps { echo 'Testing..' } } stage('Deploy') { steps { echo 'Deploying....' } } } } ``` Jenkins is able to scan my repo successfully; however, on running the pipeline I am getting the error below, which I am clueless about. Could you please help me break through this barrier? I am stuck on it. ``` 16:35:38 Connecting to https://api.github.com using shred22/****** (Git Hub Credentials) Obtained Jenkinsfile from ab4673fa1cc076e4ce773fbc2be9c3756aac0bc8 Running in Durability level: MAX_SURVIVABILITY [Pipeline] Start of Pipeline [Pipeline] End of Pipeline GitHub has been notified of this commit’s build result hudson.remoting.ProxyException: org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed: jar:file:/var/lib/jenkins/plugins/pipeline-model-definition/WEB-INF/lib/pipeline-model-definition.jar!/org/jenkinsci/plugins/pipeline/modeldefinition/ModelInterpreter.groovy: 43: unable to resolve class javax.annotation.Nonnull @ line 43, column 1. import javax.annotation.Nonnull ^ jar:file:/var/lib/jenkins/plugins/pipeline-model-definition/WEB-INF/lib/pipeline-model-definition.jar!/org/jenkinsci/plugins/pipeline/modeldefinition/ModelInterpreter.groovy: 42: unable to resolve class javax.annotation.CheckForNull @ line 42, column 1. import javax.annotation.CheckForNull ^ jar:file:/var/lib/jenkins/plugins/pipeline-model-definition/WEB-INF/lib/pipeline-model-definition.jar!/org/jenkinsci/plugins/pipeline/modeldefinition/ModelInterpreter.groovy: 455: unable to resolve class javax.annotation.CheckForNull , unable to find class for annotation @ line 455, column 30. 
def withCredentialsBlock(@CheckForNull Environment environment, Closure body) { ^ jar:file:/var/lib/jenkins/plugins/pipeline-model-definition/WEB-INF/lib/pipeline-model-definition.jar!/org/jenkinsci/plugins/pipeline/modeldefinition/ModelInterpreter.groovy: 493: unable to resolve class javax.annotation.Nonnull , unable to find class for annotation @ line 493, column 13. @Nonnull Map<String, CredentialWrapper> credentials) { ^ 4 errors at org.codehaus.groovy.control.ErrorCollector.failIfErrors(ErrorCollector.java:310) at org.codehaus.groovy.control.CompilationUnit.applyToSourceUnits(CompilationUnit.java:958) at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:605) at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:554) at groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:298) at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:268) at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:254) at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:250) at groovy.lang.GroovyClassLoader.recompile(GroovyClassLoader.java:766) at groovy.lang.GroovyClassLoader.loadClass(GroovyClassLoader.java:718) at groovy.lang.GroovyClassLoader.loadClass(GroovyClassLoader.java:787) at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:575) at org.jenkinsci.plugins.workflow.cps.CpsGroovyShell$TimingLoader.loadClass(CpsGroovyShell.java:170) at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:575) at groovy.lang.GroovyClassLoader.loadClass(GroovyClassLoader.java:677) at groovy.lang.GroovyClassLoader.loadClass(GroovyClassLoader.java:787) at groovy.lang.GroovyClassLoader.loadClass(GroovyClassLoader.java:775) at org.jenkinsci.plugins.pipeline.modeldefinition.ModelStepLoader.getValue(ModelStepLoader.java:60) at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:113) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93) at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325) at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1213) at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1022) at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:42) at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113) at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:163) at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:23) at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:157) at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:142) at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:161) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:165) at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:17) at WorkflowScript.run(WorkflowScript:1) at ___cps.transform___(Native Method) at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:86) at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:113) at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:83) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72) at com.cloudbees.groovy.cps.impl.ClosureBlock.eval(ClosureBlock.java:46) at com.cloudbees.groovy.cps.Next.step(Next.java:83) at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:174) at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:163) at org.codehaus.groovy.runtime.GroovyCategorySupport$ThreadCategoryInfo.use(GroovyCategorySupport.java:129) at org.codehaus.groovy.runtime.GroovyCategorySupport.use(GroovyCategorySupport.java:268) at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:163) at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:18) at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:51) at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:185) at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:400) at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$400(CpsThreadGroup.java:96) at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:312) at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:276) at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:67) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:131) at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28) at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:59) at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:834) Finished: FAILURE ```
2020/07/01
[ "https://Stackoverflow.com/questions/62675547", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2580609/" ]
Please check the upstream issue. I worked around by downgrading **Pipeline: Groovy package** to v2.80 from v2.81. (However, the PR is ready in upstream so I believe v2.82 will be released soon.) * <https://issues.jenkins-ci.org/browse/JENKINS-62988> * <https://github.com/jenkinsci/pipeline-model-definition-plugin/pull/397>
I have worked around the problem by removing the two imports and the annotations from the code in ModelInterpreter.groovy in the jar file and replacing the jar file. That makes it work; since the annotations are optional, removing them will probably not cause problems.
62,675,602
I have the complete address split across columns, and CITY and PINCODE in separate columns of the same dataframe. The expression below returns the correct result for CITY but not for PINCODE, which is a 6-digit number. ConAddress is the concatenation of all 5 client address columns.

```py
import pandas as pd
import numpy as np

df = pd.read_excel('Rural_Data.xlsx')
df['ConAddress'] = df['CLIENT_ADDRESS_1'].astype(str)+' '+df['CLIENT_ADDRESS_2'].astype(str)+' '+df['CLIENT_ADDRESS_3'].astype(str)+' '+df['CLIENT_ADDRESS_4'].astype(str)+' '+df['CLIENT_ADDRESS_5'].astype(str)

# filling na: if a blank cell is in the address columns mentioned above it would find a match
df.update(df[['VILLAGENAME','TALUKANAME','DISTRICTNAME','PINCODENEW']].fillna('--'))
df_given_columns = df[['VILLAGENAME','TALUKANAME','DISTRICTNAME','PINCODENEW']]
print(df['PINCODENEW'].dtype)

for gcol in list(df_given_columns.columns.values):
    result_column_name = str(gcol)[:3]
    df[gcol] = df[gcol].astype(str)
    # df[result_column_name] = df.apply(lambda x: x[gcol] in x['ConAddress'], axis=1).astype(int)
    df[result_column_name] = (df.apply(lambda x: str(x[gcol]) in x['ConAddress'], axis=1)).astype(int)

df_result_columns = df[['VIL','TAL','DIS','PIN']]
print(df_result_columns['PIN'].head())
df.to_csv('outputs.csv')
```

Sample Data <https://drive.google.com/file/d/1lusfgHHX_qmqYuaw0xexDF2hovkcU8py/view?usp=sharing>

```
ConAddress                                    DISTRICTNAME  PINCODENEW
AP MOHI MANTAL MANDIST SATARA 415508 MAHA     SATARA        415508
AP BHAGAT MALA VADIYERAYBAG SATARA SATARA 415305  SATARA    415305
AT POST ,NHAVI,TAL-INDAPUR PUNE MAHARASHTRA   PUNE
AT POST ,NHAVI,TAL-INDAPUR PUNE MAHARASHTRA   Delhi
```
2020/07/01
[ "https://Stackoverflow.com/questions/62675602", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10005838/" ]
Had a look at your data; the column has that green symbol Excel shows when a cell's format has been changed. I had a similar issue when searching for mobile numbers. Add the lines below just before your for loop; hopefully it will work fine.

```
df['PINCODENEW'] = df['PINCODENEW'].astype(int, errors='ignore')
df['PINCODENEW'] = df['PINCODENEW'].astype(str).replace('\.0','', regex=True)
```
Convert value to string by `str`: ``` df['Result'] = (df.apply(lambda x: str(x['PINCODENEW']) in x['ConAddress'], axis=1) .astype(int)) ```
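The root cause behind both answers can be shown without pandas: numeric cells read from Excel often come back as floats, so their string form carries a trailing `.0` that never matches the address text. A small sketch (the sample value is taken from the question's data):

```python
# Why a float-formatted pincode fails the substring check.
address = "AP MOHI MANTAL MANDIST SATARA 415508 MAHA"

pincode = 415508.0             # Excel frequently hands numeric cells over as floats
print(str(pincode))            # "415508.0" -- not a substring of the address
print(str(pincode) in address) # False

cleaned = str(pincode).replace(".0", "")  # strip the trailing ".0", as in the answer above
print(cleaned in address)                 # True
```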
62,675,616
I would like to get help pages for gcloud commands without the less prompt. For example

```
gcloud help
```

generates a help page with a `:` prompt for navigating the output. This is great for interactive use, but sometimes I am in a tool and I want the tool to handle the output. My tool gets hung up on the prompt. I've tried:

* the `-q` / `--quiet` option

  ```
  gcloud -q help
  ```

* the `--format=text` option

  ```
  gcloud --format=text
  ```

* the `--format=none` option

  ```
  gcloud --format=none
  ```

None of these techniques remove the interactive prompt. Also, if the above options are not for removing the less pager, what/how are they used?
2020/07/01
[ "https://Stackoverflow.com/questions/62675616", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1008596/" ]
I can do this, but it seems like one of those options should make it unnecessary.

```
gcloud --help | cat
```
Alternate solution that will save you a sub-process ``` PAGER=cat gcloud --help ``` or ``` export PAGER=cat gcloud --help ```
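The `PAGER=cat command` form works because a `VAR=value` prefix sets the variable only in that one command's environment, without touching the current shell. A sketch of that behavior, simulated with `subprocess` and an env override (`sh` stands in for gcloud here):

```python
import os
import subprocess

# `PAGER=cat command` sets the variable only for that one command.
# Build a child environment with the override and ask the child to echo it.
child_env = dict(os.environ, PAGER="cat")
out = subprocess.run(["sh", "-c", 'echo "$PAGER"'],
                     env=child_env, capture_output=True, text=True)
print(out.stdout.strip())  # cat
```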
62,675,629
I have function which receives 2 strings the first is a date "y-M-d" and the second a time "HH:mm". I then combine them using the following code. ```swift let dateFormatter = DateFormatter() dateFormatter.dateFormat = "y-M-d" let date = dateFormatter.date(from: dateStr)! dateFormatter.dateFormat = "HH:mm" let time = dateFormatter.date(from: timeStr)! let calendar = Calendar.init(identifier: .iso8601) let components = NSDateComponents() components.day = calendar.component(.day, from: date) //split from date above components.month = calendar.component(.month, from: date) components.year = calendar.component(.year, from: date) components.hour = calendar.component(.hour, from: time) //split from time above components.minute = calendar.component(.minute, from: time) let newDate = calendar.date(from: components as DateComponents) ``` The code all works fine and is doing what I want it to. However, I was wondering if anyone can suggest a slicker way of doing it, using less lines of code?
2020/07/01
[ "https://Stackoverflow.com/questions/62675629", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5561710/" ]
I can do this, but it seems like one of those options should make it unnecessary.

```
gcloud --help | cat
```
Alternate solution that will save you a sub-process ``` PAGER=cat gcloud --help ``` or ``` export PAGER=cat gcloud --help ```
62,675,644
Is there an easy way to write a test for a column being positive in dbt? `accepted_values` doesn't seem to work for continuous variables. I know you can write queries in `./tests` but it looks like overkill for such a simple thing.
2020/07/01
[ "https://Stackoverflow.com/questions/62675644", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8248194/" ]
You could use [`dbt_utils.expression_is_true`](https://github.com/fishtown-analytics/dbt-utils#expression_is_true-source) ``` version: 2 models: - name: model_name tests: - dbt_utils.expression_is_true: expression: "col_a > 0" ```
I think the dbt\_utils suggestion is good; the only reasonable alternative I can think of is writing a custom schema test: <https://docs.getdbt.com/docs/guides/writing-custom-schema-tests/> But why bother when you can just use expression\_is\_true? @jake
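For intuition, a schema test like `expression_is_true` boils down to counting the rows that violate the expression; the test passes only when that count is zero. A rough sketch of those semantics (not the exact SQL dbt compiles), using sqlite3 with made-up data:

```python
# Semantics of a "column must be positive" schema test, demonstrated
# with sqlite3. The table and column names are placeholders.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE model_name (col_a REAL)")
conn.executemany("INSERT INTO model_name VALUES (?)", [(1.5,), (2.0,), (-0.3,)])

# The test fails if any row violates the expression `col_a > 0`.
violations = conn.execute(
    "SELECT COUNT(*) FROM model_name WHERE NOT (col_a > 0)"
).fetchone()[0]
print(violations)  # 1 -> one offending row, so the test would fail
```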
62,675,653
I'm a Python beginner and I need help with a problem. I need code where I input a folder that contains JPEG files and the output is the path of every file contained in the folder. Thank you!
2020/07/01
[ "https://Stackoverflow.com/questions/62675653", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13846738/" ]
```
import os

# os.system('cd ...') runs in a subshell, so the directory change would not
# persist; use os.chdir instead, then list the folder's contents directly
# (os.system('ls') would return the exit status, not the listing).
os.chdir('//add your folder')
print(os.listdir())
```
``` import os def listJpegs(folder): jpegFiles = [] for file in os.listdir(folder): filename, file_extension = os.path.splitext(file) file_extension = file_extension.lower() if file_extension == ".jpeg" or file_extension == ".jpg": jpegFiles.append(os.path.join(folder, file)) return jpegFiles ```
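A usage sketch for the `listJpegs` function above, run against a temporary folder populated with made-up file names:

```python
import os
import tempfile

def listJpegs(folder):
    jpegFiles = []
    for file in os.listdir(folder):
        filename, file_extension = os.path.splitext(file)
        file_extension = file_extension.lower()
        if file_extension == ".jpeg" or file_extension == ".jpg":
            jpegFiles.append(os.path.join(folder, file))
    return jpegFiles

with tempfile.TemporaryDirectory() as folder:
    # Create a few dummy files; only the JPEG-like ones should be returned.
    for name in ("a.jpg", "b.JPEG", "notes.txt"):
        open(os.path.join(folder, name), "w").close()
    found = listJpegs(folder)
    print(sorted(os.path.basename(p) for p in found))  # ['a.jpg', 'b.JPEG']
```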
62,675,653
I'm a Python beginner and I need help with a problem. I need code where I input a folder that contains JPEG files and the output is the path of every file contained in the folder. Thank you!
2020/07/01
[ "https://Stackoverflow.com/questions/62675653", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13846738/" ]
```
import os

# os.system('cd ...') runs in a subshell, so the directory change would not
# persist; use os.chdir instead, then list the folder's contents directly
# (os.system('ls') would return the exit status, not the listing).
os.chdir('//add your folder')
print(os.listdir())
```
Thank you guys. I got it using the code below:

```
import os

path = "folder example"
for root, dirs, files in os.walk(path):
    print(dirs)
    for file in files:
        print(os.path.join(root, file))
```
62,675,656
My goal is to work out how many accounts there have been on my website at specific times. I allow accounts to be cancelled at any time, but if an account was cancelled after the month I'm looking at then I would still like it to appear, since it was active at that snapshot in time.

My `accounts` table looks like:

```
--------------------------------------------------
id                        | int
signUpDate                | varchar
cancellationTriggeredDate | datetime (NULLABLE)
--------------------------------------------------
```

I wrote a select statement to accomplish this goal, which looks like:

```
SELECT COUNT(*) AS January_2020 FROM Accounts
WHERE STR_TO_DATE(signUpDate, '%d/%m/%Y') <= STR_TO_DATE('31/01/2020', '%d/%m/%Y')
AND cancellationTriggeredDate <= '2020-01-31 00:00:00'
```

The expected result would be 3 — how many accounts I had in January (accounts cancelled after January still count). The actual result is 0. I believe this is because not all of my accounts have a cancellation date set, but I'm not sure how to handle this. To make it easier to get help, I have created a SQL Fiddle including sample data and schema. <http://sqlfiddle.com/#!9/64f3e3>
2020/07/01
[ "https://Stackoverflow.com/questions/62675656", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13846583/" ]
```
import os

# os.system('cd ...') runs in a subshell, so the directory change would not
# persist; use os.chdir instead, then list the folder's contents directly
# (os.system('ls') would return the exit status, not the listing).
os.chdir('//add your folder')
print(os.listdir())
```
``` import os def listJpegs(folder): jpegFiles = [] for file in os.listdir(folder): filename, file_extension = os.path.splitext(file) file_extension = file_extension.lower() if file_extension == ".jpeg" or file_extension == ".jpg": jpegFiles.append(os.path.join(folder, file)) return jpegFiles ```
62,675,656
My goal is to work out how many accounts there have been on my website at specific times. I allow accounts to be cancelled at any time, but if an account was cancelled after the month I'm looking at then I would still like it to appear, since it was active at that snapshot in time.

My `accounts` table looks like:

```
--------------------------------------------------
id                        | int
signUpDate                | varchar
cancellationTriggeredDate | datetime (NULLABLE)
--------------------------------------------------
```

I wrote a select statement to accomplish this goal, which looks like:

```
SELECT COUNT(*) AS January_2020 FROM Accounts
WHERE STR_TO_DATE(signUpDate, '%d/%m/%Y') <= STR_TO_DATE('31/01/2020', '%d/%m/%Y')
AND cancellationTriggeredDate <= '2020-01-31 00:00:00'
```

The expected result would be 3 — how many accounts I had in January (accounts cancelled after January still count). The actual result is 0. I believe this is because not all of my accounts have a cancellation date set, but I'm not sure how to handle this. To make it easier to get help, I have created a SQL Fiddle including sample data and schema. <http://sqlfiddle.com/#!9/64f3e3>
2020/07/01
[ "https://Stackoverflow.com/questions/62675656", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13846583/" ]
```
import os

# os.system('cd ...') runs in a subshell, so the directory change would not
# persist; use os.chdir instead, then list the folder's contents directly
# (os.system('ls') would return the exit status, not the listing).
os.chdir('//add your folder')
print(os.listdir())
```
Thank you guys. I got it using the code below:

```
import os

path = "folder example"
for root, dirs, files in os.walk(path):
    print(dirs)
    for file in files:
        print(os.path.join(root, file))
```
62,675,666
How do I call a function with the following signature? ``` func purchaseSubscription(productId: String, completion: @escaping (Bool, String, String) -> Void) { ``` is it ``` InAppHandler.purchaseSubscription("test") { processPurchase() } ```
2020/07/01
[ "https://Stackoverflow.com/questions/62675666", "https://Stackoverflow.com", "https://Stackoverflow.com/users/851097/" ]
This is your call:

```
InAppHandler().purchaseSubscription(productId: "anyStringData") { (boolCheck, result1, result2) in
    print(result1)
}
```

And this is your definition:

```
func purchaseSubscription(productId: String, completion: @escaping (Bool, String, String) -> Void) {
    completion(true, "Data1", "Data2")  // invoke the `completion` parameter
}
```
Since you are not using any of the closure params it is recommended to use `_` ``` purchaseSubscription(productId: "Id") { _, _, _ in processPurchase() } ``` It is good to check the closure param before you do processPurchase() ``` purchaseSubscription(productId: "Id") { success, _, _ in if success { processPurchase() } } ```
62,675,666
How do I call a function with the following signature? ``` func purchaseSubscription(productId: String, completion: @escaping (Bool, String, String) -> Void) { ``` is it ``` InAppHandler.purchaseSubscription("test") { processPurchase() } ```
2020/07/01
[ "https://Stackoverflow.com/questions/62675666", "https://Stackoverflow.com", "https://Stackoverflow.com/users/851097/" ]
There can be multiple forms to call the method.

**1.** Define the parameters when calling the `closure`, i.e.

```
InAppHandler.purchaseSubscription(productId: "test") { (x, y, z) in
    print(x, y, z)
    processPurchase()
}
```

**2.** You can use the shorthand form (`$0, $1` etc.) for the parameters in the `closure` while calling it, i.e.

```
InAppHandler.purchaseSubscription(productId: "test") {
    print($0, $1, $2)
    processPurchase()
}
```

The above 2 are the same; just that in the first one you're giving the parameter names and in the second one you're using the shorthand for those parameters.

**3.** In case you're not using any parameters that you're getting in the `closure`, mark them with an underscore (`_`), like so:

```
InAppHandler.purchaseSubscription(productId: "test") { (_, _, _) in
    processPurchase()
}
```

You can use any of these forms depending on your requirement.
this is a your call : ``` InAppHandler().purchaseSubscription(productId: "anyStringData") { (boolCheck, result1, result2) in print(result1) } ``` And this is your defination : ``` func purchaseSubscription(productId: String, completion: @escaping (Bool, String, String) -> Void) { completionResult(true,"Data1", "Data2") } ```
62,675,666
How do I call a function with the following signature? ``` func purchaseSubscription(productId: String, completion: @escaping (Bool, String, String) -> Void) { ``` is it ``` InAppHandler.purchaseSubscription("test") { processPurchase() } ```
2020/07/01
[ "https://Stackoverflow.com/questions/62675666", "https://Stackoverflow.com", "https://Stackoverflow.com/users/851097/" ]
The completion handler requires three input parameters. You can ignore the parameters but must be explicit about it. Furthermore the signature indicates this is a member function, while you seem to be calling the function on the type (class, struct, enum...). So the correct way would be: ``` let inAppHandler = InAppHandler() inAppHandler.purchaseSubscription(productId: "test") { _, _, _ in processPurchase() } ```
Since you are not using any of the closure params it is recommended to use `_` ``` purchaseSubscription(productId: "Id") { _, _, _ in processPurchase() } ``` It is good to check the closure param before you do processPurchase() ``` purchaseSubscription(productId: "Id") { success, _, _ in if success { processPurchase() } } ```
62,675,666
How do I call a function with the following signature? ``` func purchaseSubscription(productId: String, completion: @escaping (Bool, String, String) -> Void) { ``` is it ``` InAppHandler.purchaseSubscription("test") { processPurchase() } ```
2020/07/01
[ "https://Stackoverflow.com/questions/62675666", "https://Stackoverflow.com", "https://Stackoverflow.com/users/851097/" ]
There can be multiple forms to call the method.

**1.** Define the parameters when calling the `closure`, i.e.

```
InAppHandler.purchaseSubscription(productId: "test") { (x, y, z) in
    print(x, y, z)
    processPurchase()
}
```

**2.** You can use the shorthand form (`$0, $1` etc.) for the parameters in the `closure` while calling it, i.e.

```
InAppHandler.purchaseSubscription(productId: "test") {
    print($0, $1, $2)
    processPurchase()
}
```

The above 2 are the same; just that in the first one you're giving the parameter names and in the second one you're using the shorthand for those parameters.

**3.** In case you're not using any parameters that you're getting in the `closure`, mark them with an underscore (`_`), like so:

```
InAppHandler.purchaseSubscription(productId: "test") { (_, _, _) in
    processPurchase()
}
```

You can use any of these forms depending on your requirement.
The completion handler requires three input parameters. You can ignore the parameters but must be explicit about it. Furthermore the signature indicates this is a member function, while you seem to be calling the function on the type (class, struct, enum...). So the correct way would be: ``` let inAppHandler = InAppHandler() inAppHandler.purchaseSubscription(productId: "test") { _, _, _ in processPurchase() } ```
62,675,666
How do I call a function with the following signature? ``` func purchaseSubscription(productId: String, completion: @escaping (Bool, String, String) -> Void) { ``` is it ``` InAppHandler.purchaseSubscription("test") { processPurchase() } ```
2020/07/01
[ "https://Stackoverflow.com/questions/62675666", "https://Stackoverflow.com", "https://Stackoverflow.com/users/851097/" ]
This is the code you need to use: ``` InAppHandler.purchaseSubscription(productId: "test") { (boolVal1, stringVal, boolVal2) in processPurchase() } ```
Since you are not using any of the closure params it is recommended to use `_` ``` purchaseSubscription(productId: "Id") { _, _, _ in processPurchase() } ``` It is good to check the closure param before you do processPurchase() ``` purchaseSubscription(productId: "Id") { success, _, _ in if success { processPurchase() } } ```
62,675,666
How do I call a function with the following signature? ``` func purchaseSubscription(productId: String, completion: @escaping (Bool, String, String) -> Void) { ``` is it ``` InAppHandler.purchaseSubscription("test") { processPurchase() } ```
2020/07/01
[ "https://Stackoverflow.com/questions/62675666", "https://Stackoverflow.com", "https://Stackoverflow.com/users/851097/" ]
There can be multiple forms to call the method.

**1.** Define the parameters when calling the `closure`, i.e.

```
InAppHandler.purchaseSubscription(productId: "test") { (x, y, z) in
    print(x, y, z)
    processPurchase()
}
```

**2.** You can use the shorthand form (`$0, $1` etc.) for the parameters in the `closure` while calling it, i.e.

```
InAppHandler.purchaseSubscription(productId: "test") {
    print($0, $1, $2)
    processPurchase()
}
```

The above 2 are the same; just that in the first one you're giving the parameter names and in the second one you're using the shorthand for those parameters.

**3.** In case you're not using any parameters that you're getting in the `closure`, mark them with an underscore (`_`), like so:

```
InAppHandler.purchaseSubscription(productId: "test") { (_, _, _) in
    processPurchase()
}
```

You can use any of these forms depending on your requirement.
This is the code you need to use: ``` InAppHandler.purchaseSubscription(productId: "test") { (boolVal1, stringVal, boolVal2) in processPurchase() } ```
62,675,666
How do I call a function with the following signature? ``` func purchaseSubscription(productId: String, completion: @escaping (Bool, String, String) -> Void) { ``` is it ``` InAppHandler.purchaseSubscription("test") { processPurchase() } ```
2020/07/01
[ "https://Stackoverflow.com/questions/62675666", "https://Stackoverflow.com", "https://Stackoverflow.com/users/851097/" ]
Your call should be like this ``` InAppHandler.purchaseSubscription(productId: "YOUR_PRODUCT_ID_STRING") { (boolValue, firstString, secondString) in } ```
Since you are not using any of the closure params it is recommended to use `_` ``` purchaseSubscription(productId: "Id") { _, _, _ in processPurchase() } ``` It is good to check the closure param before you do processPurchase() ``` purchaseSubscription(productId: "Id") { success, _, _ in if success { processPurchase() } } ```
62,675,666
How do I call a function with the following signature? ``` func purchaseSubscription(productId: String, completion: @escaping (Bool, String, String) -> Void) { ``` is it ``` InAppHandler.purchaseSubscription("test") { processPurchase() } ```
2020/07/01
[ "https://Stackoverflow.com/questions/62675666", "https://Stackoverflow.com", "https://Stackoverflow.com/users/851097/" ]
There can be multiple forms to call the method.

**1.** Define the parameters when calling the `closure`, i.e.

```
InAppHandler.purchaseSubscription(productId: "test") { (x, y, z) in
    print(x, y, z)
    processPurchase()
}
```

**2.** You can use the shorthand form (`$0, $1` etc.) for the parameters in the `closure` while calling it, i.e.

```
InAppHandler.purchaseSubscription(productId: "test") {
    print($0, $1, $2)
    processPurchase()
}
```

The above 2 are the same; just that in the first one you're giving the parameter names and in the second one you're using the shorthand for those parameters.

**3.** In case you're not using any parameters that you're getting in the `closure`, mark them with an underscore (`_`), like so:

```
InAppHandler.purchaseSubscription(productId: "test") { (_, _, _) in
    processPurchase()
}
```

You can use any of these forms depending on your requirement.
Your call should be like this ``` InAppHandler.purchaseSubscription(productId: "YOUR_PRODUCT_ID_STRING") { (boolValue, firstString, secondString) in } ```
62,675,666
How do I call a function with the following signature? ``` func purchaseSubscription(productId: String, completion: @escaping (Bool, String, String) -> Void) { ``` is it ``` InAppHandler.purchaseSubscription("test") { processPurchase() } ```
2020/07/01
[ "https://Stackoverflow.com/questions/62675666", "https://Stackoverflow.com", "https://Stackoverflow.com/users/851097/" ]
There can be multiple forms to call the method.

**1.** Define the parameters when calling the `closure`, i.e.

```
InAppHandler.purchaseSubscription(productId: "test") { (x, y, z) in
    print(x, y, z)
    processPurchase()
}
```

**2.** You can use the shorthand form (`$0, $1` etc.) for the parameters in the `closure` while calling it, i.e.

```
InAppHandler.purchaseSubscription(productId: "test") {
    print($0, $1, $2)
    processPurchase()
}
```

The above 2 are the same; just that in the first one you're giving the parameter names and in the second one you're using the shorthand for those parameters.

**3.** In case you're not using any parameters that you're getting in the `closure`, mark them with an underscore (`_`), like so:

```
InAppHandler.purchaseSubscription(productId: "test") { (_, _, _) in
    processPurchase()
}
```

You can use any of these forms depending on your requirement.
Since you are not using any of the closure params it is recommended to use `_` ``` purchaseSubscription(productId: "Id") { _, _, _ in processPurchase() } ``` It is good to check the closure param before you do processPurchase() ``` purchaseSubscription(productId: "Id") { success, _, _ in if success { processPurchase() } } ```
62,675,670
**Input Explained:** I have two dataframes `df1` and `df2`, which hold the columns below.

`df1`

```
Description Col1 Col2
AAA         1.2  2.5
BBB         1.3  2.0
CCC         1.1  2.3
```

`df2`

```
Description Col1 Col2
AAA         1.2  1.3
BBB         1.3  2.0
```

**Scenario:** I have to compare `df1['Description']` and `df2['Description']`; when they are equal, compare `df1['Col1']` with `df2['Col1']` and `df1['Col2']` with `df2['Col2']`, producing the result expected below.

**Expected Output:**

```
Description Col1 Col2 Col1_Result      Col2_Result
AAA         1.2  2.5  Pass             Fail
BBB         1.3  2.0  Pass             Pass
CCC         1.1  2.3  Not found in df2 Not found in df2
```

**Tried Code:** I tried the code below for the above scenario but it doesn't work. It throws *"ValueError: Can only compare identically-labeled Series objects"*.

```
df1['Col1_Result'] = np.where(df1['Description']== df2['Description'],np.where(df1['Col1'] == df2['Col1'], 'Pass', 'Fail'),'Not found in df2')
df1['Col2_Result'] = np.where(df1['Description']== df2['Description'],np.where(df1['Col2'] == df2['Col2'], 'Pass', 'Fail'),'Not found in df2')
```

Thanks in advance!
2020/07/01
[ "https://Stackoverflow.com/questions/62675670", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7818560/" ]
This is your call:

```
InAppHandler().purchaseSubscription(productId: "anyStringData") { (boolCheck, result1, result2) in
    print(result1)
}
```

And this is your definition:

```
func purchaseSubscription(productId: String, completion: @escaping (Bool, String, String) -> Void) {
    completion(true, "Data1", "Data2")  // invoke the `completion` parameter
}
```
Since you are not using any of the closure params it is recommended to use `_` ``` purchaseSubscription(productId: "Id") { _, _, _ in processPurchase() } ``` It is good to check the closure param before you do processPurchase() ``` purchaseSubscription(productId: "Id") { success, _, _ in if success { processPurchase() } } ```
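For the pandas comparison question above, one common way around the "identically-labeled Series" error is to align the two frames first with a left merge on `Description`, then compare the suffixed columns. A sketch using the sample values from the question (the `_df2` suffix is my own naming):

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({"Description": ["AAA", "BBB", "CCC"],
                    "Col1": [1.2, 1.3, 1.1],
                    "Col2": [2.5, 2.0, 2.3]})
df2 = pd.DataFrame({"Description": ["AAA", "BBB"],
                    "Col1": [1.2, 1.3],
                    "Col2": [1.3, 2.0]})

# Left merge keeps every df1 row; missing df2 rows show up as NaN.
merged = df1.merge(df2, on="Description", how="left", suffixes=("", "_df2"))
for col in ("Col1", "Col2"):
    merged[col + "_Result"] = np.where(
        merged[col + "_df2"].isna(), "Not found in df2",
        np.where(merged[col] == merged[col + "_df2"], "Pass", "Fail"))

print(merged[["Description", "Col1", "Col2", "Col1_Result", "Col2_Result"]])
```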
62,675,670
**Input Explained:** I have two dataframes `df1` and `df2`, which hold the columns below.

`df1`

```
Description Col1 Col2
AAA         1.2  2.5
BBB         1.3  2.0
CCC         1.1  2.3
```

`df2`

```
Description Col1 Col2
AAA         1.2  1.3
BBB         1.3  2.0
```

**Scenario:** I have to compare `df1['Description']` and `df2['Description']`; when they are equal, compare `df1['Col1']` with `df2['Col1']` and `df1['Col2']` with `df2['Col2']`, producing the result expected below.

**Expected Output:**

```
Description Col1 Col2 Col1_Result      Col2_Result
AAA         1.2  2.5  Pass             Fail
BBB         1.3  2.0  Pass             Pass
CCC         1.1  2.3  Not found in df2 Not found in df2
```

**Tried Code:** I tried the code below for the above scenario but it doesn't work. It throws *"ValueError: Can only compare identically-labeled Series objects"*.

```
df1['Col1_Result'] = np.where(df1['Description']== df2['Description'],np.where(df1['Col1'] == df2['Col1'], 'Pass', 'Fail'),'Not found in df2')
df1['Col2_Result'] = np.where(df1['Description']== df2['Description'],np.where(df1['Col2'] == df2['Col2'], 'Pass', 'Fail'),'Not found in df2')
```

Thanks in advance!
2020/07/01
[ "https://Stackoverflow.com/questions/62675670", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7818560/" ]
There can be multiple forms to call the method.

**1.** Define the parameters when calling the `closure`, i.e.

```
InAppHandler.purchaseSubscription(productId: "test") { (x, y, z) in
    print(x, y, z)
    processPurchase()
}
```

**2.** You can use the shorthand form (`$0, $1` etc.) for the parameters in the `closure` while calling it, i.e.

```
InAppHandler.purchaseSubscription(productId: "test") {
    print($0, $1, $2)
    processPurchase()
}
```

The above 2 are the same; just that in the first one you're giving the parameter names and in the second one you're using the shorthand for those parameters.

**3.** In case you're not using any parameters that you're getting in the `closure`, mark them with an underscore (`_`), like so:

```
InAppHandler.purchaseSubscription(productId: "test") { (_, _, _) in
    processPurchase()
}
```

You can use any of these forms depending on your requirement.
This is your call:

```
InAppHandler().purchaseSubscription(productId: "anyStringData") { (boolCheck, result1, result2) in
    print(result1)
}
```

And this is your definition (note that the body must call the `completion` parameter, not an undefined `completionResult`):

```
func purchaseSubscription(productId: String, completion: @escaping (Bool, String, String) -> Void) {
    completion(true, "Data1", "Data2")
}
```
62,675,670
**Input Explained:** I have two dataframes, `df1` and `df2`, which hold the columns mentioned below.

`df1`

```
Description  Col1  Col2
AAA          1.2   2.5
BBB          1.3   2.0
CCC          1.1   2.3
```

`df2`

```
Description  Col1  Col2
AAA          1.2   1.3
BBB          1.3   2.0
```

**Scenario:** I have to compare `df1['Description']` and `df2['Description']`; when they are equal, I then have to compare `df1['Col1']` with `df2['Col1']` and `df1['Col2']` with `df2['Col2']` and produce the result expected below.

**Expected Output:**

```
Description  Col1  Col2  Col1_Result       Col2_Result
AAA          1.2   2.5   Pass              Fail
BBB          1.3   2.0   Pass              Pass
CCC          1.1   2.3   Not found in df2  Not found in df2
```

**Tried Code:** I have tried the code below for the scenario above, but it doesn't work. It throws the error *"ValueError: Can only compare identically-labeled Series objects"*.

```
df1['Col1_Result'] = np.where(df1['Description'] == df2['Description'], np.where(df1['Col1'] == df2['Col1'], 'Pass', 'Fail'), 'Not found in df2')
df1['Col2_Result'] = np.where(df1['Description'] == df2['Description'], np.where(df1['Col2'] == df2['Col2'], 'Pass', 'Fail'), 'Not found in df2')
```

Thanks in advance!
2020/07/01
[ "https://Stackoverflow.com/questions/62675670", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7818560/" ]
The completion handler requires three input parameters. You can ignore the parameters, but you must be explicit about it. Furthermore, the signature indicates this is a member function, while you seem to be calling the function on the type (class, struct, enum, ...). So the correct way would be:

```
let inAppHandler = InAppHandler()
inAppHandler.purchaseSubscription(productId: "test") { _, _, _ in
    processPurchase()
}
```
Since you are not using any of the closure parameters, it is recommended to use `_`:

```
purchaseSubscription(productId: "Id") { _, _, _ in
    processPurchase()
}
```

It is good to check the closure parameter before you call processPurchase():

```
purchaseSubscription(productId: "Id") { success, _, _ in
    if success {
        processPurchase()
    }
}
```
62,675,670
**Input Explained:** I have two dataframes, `df1` and `df2`, which hold the columns mentioned below.

`df1`

```
Description  Col1  Col2
AAA          1.2   2.5
BBB          1.3   2.0
CCC          1.1   2.3
```

`df2`

```
Description  Col1  Col2
AAA          1.2   1.3
BBB          1.3   2.0
```

**Scenario:** I have to compare `df1['Description']` and `df2['Description']`; when they are equal, I then have to compare `df1['Col1']` with `df2['Col1']` and `df1['Col2']` with `df2['Col2']` and produce the result expected below.

**Expected Output:**

```
Description  Col1  Col2  Col1_Result       Col2_Result
AAA          1.2   2.5   Pass              Fail
BBB          1.3   2.0   Pass              Pass
CCC          1.1   2.3   Not found in df2  Not found in df2
```

**Tried Code:** I have tried the code below for the scenario above, but it doesn't work. It throws the error *"ValueError: Can only compare identically-labeled Series objects"*.

```
df1['Col1_Result'] = np.where(df1['Description'] == df2['Description'], np.where(df1['Col1'] == df2['Col1'], 'Pass', 'Fail'), 'Not found in df2')
df1['Col2_Result'] = np.where(df1['Description'] == df2['Description'], np.where(df1['Col2'] == df2['Col2'], 'Pass', 'Fail'), 'Not found in df2')
```

Thanks in advance!
2020/07/01
[ "https://Stackoverflow.com/questions/62675670", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7818560/" ]
There can be multiple forms of calling the method.

**1.** Name the parameters when handling the `closure`, i.e.

```
InAppHandler.purchaseSubscription(productId: "test") { (x, y, z) in
    print(x, y, z)
    processPurchase()
}
```

**2.** You can use the shorthand form (`$0`, `$1`, etc.) for the parameters in the `closure` while calling it, i.e.

```
InAppHandler.purchaseSubscription(productId: "test") {
    print($0, $1, $2)
    processPurchase()
}
```

The above two are the same; in the first one you give the parameters names, and in the second one you use the shorthand for those parameters.

**3.** In case you're not using any of the parameters that you get in the `closure`, mark them with an underscore (`_`), like so:

```
InAppHandler.purchaseSubscription(productId: "test") { (_, _, _) in
    processPurchase()
}
```

You can use any of these forms depending on your requirement.
The completion handler requires three input parameters. You can ignore the parameters, but you must be explicit about it. Furthermore, the signature indicates this is a member function, while you seem to be calling the function on the type (class, struct, enum, ...). So the correct way would be:

```
let inAppHandler = InAppHandler()
inAppHandler.purchaseSubscription(productId: "test") { _, _, _ in
    processPurchase()
}
```
62,675,670
**Input Explained:** I have two dataframes, `df1` and `df2`, which hold the columns mentioned below.

`df1`

```
Description  Col1  Col2
AAA          1.2   2.5
BBB          1.3   2.0
CCC          1.1   2.3
```

`df2`

```
Description  Col1  Col2
AAA          1.2   1.3
BBB          1.3   2.0
```

**Scenario:** I have to compare `df1['Description']` and `df2['Description']`; when they are equal, I then have to compare `df1['Col1']` with `df2['Col1']` and `df1['Col2']` with `df2['Col2']` and produce the result expected below.

**Expected Output:**

```
Description  Col1  Col2  Col1_Result       Col2_Result
AAA          1.2   2.5   Pass              Fail
BBB          1.3   2.0   Pass              Pass
CCC          1.1   2.3   Not found in df2  Not found in df2
```

**Tried Code:** I have tried the code below for the scenario above, but it doesn't work. It throws the error *"ValueError: Can only compare identically-labeled Series objects"*.

```
df1['Col1_Result'] = np.where(df1['Description'] == df2['Description'], np.where(df1['Col1'] == df2['Col1'], 'Pass', 'Fail'), 'Not found in df2')
df1['Col2_Result'] = np.where(df1['Description'] == df2['Description'], np.where(df1['Col2'] == df2['Col2'], 'Pass', 'Fail'), 'Not found in df2')
```

Thanks in advance!
2020/07/01
[ "https://Stackoverflow.com/questions/62675670", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7818560/" ]
This is the code you need to use:

```
InAppHandler.purchaseSubscription(productId: "test") { (boolVal1, stringVal, boolVal2) in
    processPurchase()
}
```
Since you are not using any of the closure parameters, it is recommended to use `_`:

```
purchaseSubscription(productId: "Id") { _, _, _ in
    processPurchase()
}
```

It is good to check the closure parameter before you call processPurchase():

```
purchaseSubscription(productId: "Id") { success, _, _ in
    if success {
        processPurchase()
    }
}
```
62,675,670
**Input Explained:** I have two dataframes, `df1` and `df2`, which hold the columns mentioned below.

`df1`

```
Description  Col1  Col2
AAA          1.2   2.5
BBB          1.3   2.0
CCC          1.1   2.3
```

`df2`

```
Description  Col1  Col2
AAA          1.2   1.3
BBB          1.3   2.0
```

**Scenario:** I have to compare `df1['Description']` and `df2['Description']`; when they are equal, I then have to compare `df1['Col1']` with `df2['Col1']` and `df1['Col2']` with `df2['Col2']` and produce the result expected below.

**Expected Output:**

```
Description  Col1  Col2  Col1_Result       Col2_Result
AAA          1.2   2.5   Pass              Fail
BBB          1.3   2.0   Pass              Pass
CCC          1.1   2.3   Not found in df2  Not found in df2
```

**Tried Code:** I have tried the code below for the scenario above, but it doesn't work. It throws the error *"ValueError: Can only compare identically-labeled Series objects"*.

```
df1['Col1_Result'] = np.where(df1['Description'] == df2['Description'], np.where(df1['Col1'] == df2['Col1'], 'Pass', 'Fail'), 'Not found in df2')
df1['Col2_Result'] = np.where(df1['Description'] == df2['Description'], np.where(df1['Col2'] == df2['Col2'], 'Pass', 'Fail'), 'Not found in df2')
```

Thanks in advance!
2020/07/01
[ "https://Stackoverflow.com/questions/62675670", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7818560/" ]
There can be multiple forms of calling the method.

**1.** Name the parameters when handling the `closure`, i.e.

```
InAppHandler.purchaseSubscription(productId: "test") { (x, y, z) in
    print(x, y, z)
    processPurchase()
}
```

**2.** You can use the shorthand form (`$0`, `$1`, etc.) for the parameters in the `closure` while calling it, i.e.

```
InAppHandler.purchaseSubscription(productId: "test") {
    print($0, $1, $2)
    processPurchase()
}
```

The above two are the same; in the first one you give the parameters names, and in the second one you use the shorthand for those parameters.

**3.** In case you're not using any of the parameters that you get in the `closure`, mark them with an underscore (`_`), like so:

```
InAppHandler.purchaseSubscription(productId: "test") { (_, _, _) in
    processPurchase()
}
```

You can use any of these forms depending on your requirement.
This is the code you need to use:

```
InAppHandler.purchaseSubscription(productId: "test") { (boolVal1, stringVal, boolVal2) in
    processPurchase()
}
```
62,675,670
**Input Explained:** I have two dataframes, `df1` and `df2`, which hold the columns mentioned below.

`df1`

```
Description  Col1  Col2
AAA          1.2   2.5
BBB          1.3   2.0
CCC          1.1   2.3
```

`df2`

```
Description  Col1  Col2
AAA          1.2   1.3
BBB          1.3   2.0
```

**Scenario:** I have to compare `df1['Description']` and `df2['Description']`; when they are equal, I then have to compare `df1['Col1']` with `df2['Col1']` and `df1['Col2']` with `df2['Col2']` and produce the result expected below.

**Expected Output:**

```
Description  Col1  Col2  Col1_Result       Col2_Result
AAA          1.2   2.5   Pass              Fail
BBB          1.3   2.0   Pass              Pass
CCC          1.1   2.3   Not found in df2  Not found in df2
```

**Tried Code:** I have tried the code below for the scenario above, but it doesn't work. It throws the error *"ValueError: Can only compare identically-labeled Series objects"*.

```
df1['Col1_Result'] = np.where(df1['Description'] == df2['Description'], np.where(df1['Col1'] == df2['Col1'], 'Pass', 'Fail'), 'Not found in df2')
df1['Col2_Result'] = np.where(df1['Description'] == df2['Description'], np.where(df1['Col2'] == df2['Col2'], 'Pass', 'Fail'), 'Not found in df2')
```

Thanks in advance!
2020/07/01
[ "https://Stackoverflow.com/questions/62675670", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7818560/" ]
Your call should be like this:

```
InAppHandler.purchaseSubscription(productId: "YOUR_PRODUCT_ID_STRING") { (boolValue, firstString, secondString) in

}
```
Since you are not using any of the closure parameters, it is recommended to use `_`:

```
purchaseSubscription(productId: "Id") { _, _, _ in
    processPurchase()
}
```

It is good to check the closure parameter before you call processPurchase():

```
purchaseSubscription(productId: "Id") { success, _, _ in
    if success {
        processPurchase()
    }
}
```
62,675,670
**Input Explained:** I have two dataframes, `df1` and `df2`, which hold the columns mentioned below.

`df1`

```
Description  Col1  Col2
AAA          1.2   2.5
BBB          1.3   2.0
CCC          1.1   2.3
```

`df2`

```
Description  Col1  Col2
AAA          1.2   1.3
BBB          1.3   2.0
```

**Scenario:** I have to compare `df1['Description']` and `df2['Description']`; when they are equal, I then have to compare `df1['Col1']` with `df2['Col1']` and `df1['Col2']` with `df2['Col2']` and produce the result expected below.

**Expected Output:**

```
Description  Col1  Col2  Col1_Result       Col2_Result
AAA          1.2   2.5   Pass              Fail
BBB          1.3   2.0   Pass              Pass
CCC          1.1   2.3   Not found in df2  Not found in df2
```

**Tried Code:** I have tried the code below for the scenario above, but it doesn't work. It throws the error *"ValueError: Can only compare identically-labeled Series objects"*.

```
df1['Col1_Result'] = np.where(df1['Description'] == df2['Description'], np.where(df1['Col1'] == df2['Col1'], 'Pass', 'Fail'), 'Not found in df2')
df1['Col2_Result'] = np.where(df1['Description'] == df2['Description'], np.where(df1['Col2'] == df2['Col2'], 'Pass', 'Fail'), 'Not found in df2')
```

Thanks in advance!
2020/07/01
[ "https://Stackoverflow.com/questions/62675670", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7818560/" ]
There can be multiple forms of calling the method.

**1.** Name the parameters when handling the `closure`, i.e.

```
InAppHandler.purchaseSubscription(productId: "test") { (x, y, z) in
    print(x, y, z)
    processPurchase()
}
```

**2.** You can use the shorthand form (`$0`, `$1`, etc.) for the parameters in the `closure` while calling it, i.e.

```
InAppHandler.purchaseSubscription(productId: "test") {
    print($0, $1, $2)
    processPurchase()
}
```

The above two are the same; in the first one you give the parameters names, and in the second one you use the shorthand for those parameters.

**3.** In case you're not using any of the parameters that you get in the `closure`, mark them with an underscore (`_`), like so:

```
InAppHandler.purchaseSubscription(productId: "test") { (_, _, _) in
    processPurchase()
}
```

You can use any of these forms depending on your requirement.
Your call should be like this:

```
InAppHandler.purchaseSubscription(productId: "YOUR_PRODUCT_ID_STRING") { (boolValue, firstString, secondString) in

}
```
62,675,670
**Input Explained:** I have two dataframes, `df1` and `df2`, which hold the columns mentioned below.

`df1`

```
Description  Col1  Col2
AAA          1.2   2.5
BBB          1.3   2.0
CCC          1.1   2.3
```

`df2`

```
Description  Col1  Col2
AAA          1.2   1.3
BBB          1.3   2.0
```

**Scenario:** I have to compare `df1['Description']` and `df2['Description']`; when they are equal, I then have to compare `df1['Col1']` with `df2['Col1']` and `df1['Col2']` with `df2['Col2']` and produce the result expected below.

**Expected Output:**

```
Description  Col1  Col2  Col1_Result       Col2_Result
AAA          1.2   2.5   Pass              Fail
BBB          1.3   2.0   Pass              Pass
CCC          1.1   2.3   Not found in df2  Not found in df2
```

**Tried Code:** I have tried the code below for the scenario above, but it doesn't work. It throws the error *"ValueError: Can only compare identically-labeled Series objects"*.

```
df1['Col1_Result'] = np.where(df1['Description'] == df2['Description'], np.where(df1['Col1'] == df2['Col1'], 'Pass', 'Fail'), 'Not found in df2')
df1['Col2_Result'] = np.where(df1['Description'] == df2['Description'], np.where(df1['Col2'] == df2['Col2'], 'Pass', 'Fail'), 'Not found in df2')
```

Thanks in advance!
2020/07/01
[ "https://Stackoverflow.com/questions/62675670", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7818560/" ]
There can be multiple forms of calling the method.

**1.** Name the parameters when handling the `closure`, i.e.

```
InAppHandler.purchaseSubscription(productId: "test") { (x, y, z) in
    print(x, y, z)
    processPurchase()
}
```

**2.** You can use the shorthand form (`$0`, `$1`, etc.) for the parameters in the `closure` while calling it, i.e.

```
InAppHandler.purchaseSubscription(productId: "test") {
    print($0, $1, $2)
    processPurchase()
}
```

The above two are the same; in the first one you give the parameters names, and in the second one you use the shorthand for those parameters.

**3.** In case you're not using any of the parameters that you get in the `closure`, mark them with an underscore (`_`), like so:

```
InAppHandler.purchaseSubscription(productId: "test") { (_, _, _) in
    processPurchase()
}
```

You can use any of these forms depending on your requirement.
Since you are not using any of the closure parameters, it is recommended to use `_`:

```
purchaseSubscription(productId: "Id") { _, _, _ in
    processPurchase()
}
```

It is good to check the closure parameter before you call processPurchase():

```
purchaseSubscription(productId: "Id") { success, _, _ in
    if success {
        processPurchase()
    }
}
```
62,675,699
I tried to understand the difference between the Sinon library's fake, spy, stub and mock, but I am not able to understand it clearly. Can anybody help me understand it?
2020/07/01
[ "https://Stackoverflow.com/questions/62675699", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5443319/" ]
Just for understanding purposes, define:

**FuncInfoCollector** = a function that records the arguments, return value, the value of `this` (context) and the exception thrown (if any) for all of its calls. (This "FuncInfoCollector" is a dummy name given by me; it is not present in the Sinon lib.)

---

`Fake` = FuncInfoCollector + can only create a **fake** function. To replace a function that **already exists** in the system under test you call `sinon.replace(target, fieldname, fake)`. You can wrap an existing function like this:

```
const org = foo.someMethod;
sinon.fake((...args) => org(...args));
```

A fake is **immutable**: once created, the behavior can't be changed.

```
var fakeFunc = sinon.fake.returns('foo');
fakeFunc();
// call count of fakeFunc (it will show 1 here)
fakeFunc.callCount;
```

`Spy` = FuncInfoCollector + can **create** a **new** function + can **wrap** a function that already exists in the system under test. A spy is a good choice whenever the goal of a test is to **verify** that something happened.

```
// Can be passed as a callback to an async func to verify whether the callback is called or not
const spyFunc = sinon.spy();

// Creates a spy for the ajax method of the jQuery lib
sinon.spy(jQuery, "ajax");

// Will tell whether jQuery.ajax was called exactly once or not
jQuery.ajax.calledOnce
```

`Stub` = Spy + it **stubs** the original function (can be used to change the behaviour of the original function).

```
var err = new Error('Ajax Error');
// So whenever the jQuery.ajax method is called in the code, it throws this Error
sinon.stub(jQuery, "ajax").throws(err)

// Here we write an assert to check whether jQuery.ajax threw the Error or not
sinon.assert.threw(jQuery.ajax(), err);
```

`Mock` = Stub + pre-programmed expectations.

```
var mk = sinon.mock(jQuery)

// Should be called at least 2 times and at most 5 times
mk.expects("ajax").atLeast(2).atMost(5);

// It throws the following exception when called (the assert used above is not needed now)
mk.expects("ajax").throws(new Error('Ajax Error'))

// Will check whether all the above expectations are met or not, hence assertions aren't needed
mk.verify();
```

Please also have a look at this link: [sinon.replace vs sinon.stub just to replace return value?](https://stackoverflow.com/questions/54283946/sinon-replace-vs-sinon-stub-just-to-replace-return-value)
Just to add some more info to the otherwise good answer: we added the Fake API to Sinon because of shortcomings of the other original APIs (Stub and Spy). The fact that those APIs are chainable led to constant design issues and recurring user problems, and they were bloated to cater to quite unimportant use cases, which is why we opted for creating a new immutable API that would be simpler to use, less ambiguous and cheaper to maintain. It was built on top of the Spy and Stub APIs to let fakes be somewhat recognizable, with an explicit method for replacing props on objects (`sinon.replace(obj, 'prop', fake)`). Fakes can essentially be used anywhere a stub or spy can be used, and so I have not used the old APIs myself in 3-4 years, as code using the more limited fakes is simpler for other people to understand.
62,675,705
I'm trying to get a grasp on how pointers work, but I just can't put it together. I just want an impetus to know how to proceed when I have the same problem again. This is a program that should add two different times, but the read-in, addition and subtraction must be in different functions. Within the functions everything works fine, but I just can't work out how to send everything back to main(). I've tried implementing pointers in the arguments of the functions, but when I do that everything gets really messy and nothing works, so I'm showing you a cleaner version of my code. I would be really thankful if someone could help me with my understanding :)

PD: Thanks for being such an amazing community!

Here is my code:

```
#include <stdio.h>
#include <stdlib.h>

void subtracttimes(int hr0, int min0, int sek0, int hr1, int min1, int sek1, char op);
void addtimes(int hr0, int min0, int sek0, int hr1, int min1, int sek1, char op);
char readin();

int main(int hr0, int min0, int sek0, int hr1, int min1, int sek1, char op)
{
    char op1;
    op1 = readin();
    switch (op1) {
    case '+':
        addtimes(hr0, min0, sek0, hr1, min1, sek1, op);
        break;
    case '-':
        subtracttimes(hr0, min0, sek0, hr1, min1, sek1, op);
        break;
    }
    return 0;
}

char readin()
{
    int hr0, min0, sek0;
    int hr1, min1, sek1;
    char c, op, *merker;
    int ok;

    printf("Time Calculator\n");
    printf("================\n\n");
    printf("Start time (hh:mm:ss): ");
    ok = scanf("%d:%d:%d", &hr0, &min0, &sek0);
    while ((c = getchar()) != '\n' && c != EOF) {}
    while (ok != 3 || c != '\n' || hr0 > 23 || min0 > 59 || sek0 > 59) {
        printf("Wrong input!\n");
        printf("Start time (hh:mm:ss): ");
        ok = scanf("%d:%d:%d", &hr0, &min0, &sek0);
        while ((c = getchar()) != '\n' && c != EOF) {}
    }

    printf("Second time (hh:mm:ss): ");
    ok = scanf("%d:%d:%d", &hr1, &min1, &sek1);
    while ((c = getchar()) != '\n' && c != EOF) {}
    while (ok != 3 || c != '\n' || hr1 > 23 || min1 > 59 || sek1 > 59) {
        printf("Wrong input!\n");
        printf("Second Time (hh:mm:ss): ");
        ok = scanf("%d:%d:%d", &hr1, &min1, &sek1);
        while ((c = getchar()) != '\n' && c != EOF) {}
    }

    printf("Type of operation (+/-): ");
    ok = scanf("%c", &op);
    while ((c = getchar()) != '\n' && c != EOF) {}
    return op;
}

void addtimes(int hr0, int min0, int sek0, int hr1, int min1, int sek1, char op)
{
    int resultHour;
    int resultMin;
    int resultSec;
    int resultDay = 0;

    resultHour = hr0 + hr1;
    resultMin = min0 + min1;
    resultSec = sek0 + sek1;

    if (resultSec > 59) {
        resultMin += 1;
        resultSec -= 60;
    }
    else if (resultMin > 59) {
        resultHour += 1;
        resultMin -= 60;
    }
    else if (resultHour > 23) {
        resultDay += 1;
        resultHour -= 24;
    }

    printf("%02d:%02d:%02d %c %02d:%02d:%02d: = ", hr0, min0, sek0, op, hr1, min1, sek1);
    if (resultDay == 0) {
        printf("%02d:%02d:%02d", resultHour, resultMin, resultSec);
    }
    else if (resultDay > 0) {
        printf("%d Tag %02d:%02d:%02d", resultDay, resultHour, resultMin, resultSec);
    }
}

void subtracttimes(int hr0, int min0, int sek0, int hr1, int min1, int sek1, char op)
{
    int resultHour;
    int resultMin;
    int resultSec;
    int resultDay = 0;

    resultHour = hr0 - hr1;
    resultMin = min0 - min1;
    resultSec = sek0 - sek1;

    if (resultSec <= 0) {
        resultMin -= 1;
        resultSec = 60 - (sek1 - sek0);
    }
    else if (resultMin <= 0) {
        resultHour -= 1;
        resultMin = 60 - (min1 - min0);
    }
    else if (resultHour <= 0) {
        resultHour = 60 - (hr1 - hr0);
    }

    printf("%02d:%02d:%02d %c %02d:%02d:%02d: = ", hr0, min0, sek0, op, hr1, min1, sek1);
    printf("%02d:%02d:%02d", resultHour, resultMin, resultSec);
}
```
2020/07/01
[ "https://Stackoverflow.com/questions/62675705", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8781618/" ]
You can add some output parameters, since you want multiple outputs to be filled in by the function. For example, you can modify the function so it looks like:

```
void addtimes(int hr0, int min0, int sek0, int hr1, int min1, int sek1, char op,
              int *hr_res, int *min_res, int *sek_res)
{
    ...
    *hr_res = your_calculated_hr_res;
    *min_res = your_calculated_min_res;
    *sek_res = your_calculated_sek_res;
    ...
}
```

In main, you have to add some variables to pass into the function to get the results you want:

```
int main()
{
    int hr_res, min_res, sek_res;
    addtimes(..., &hr_res, &min_res, &sek_res);
}
```
In C, all parameters are passed by value. You can simulate call by reference by sending a pointer, but in reality you still make a call by value, as the pointer value is copied into the function. Normally you get values from a function by returning a value, but when sending pointers you actually update a value that resides outside of the function. See [Does C even have "pass by reference"?](https://stackoverflow.com/questions/17168623/does-c-even-have-pass-by-reference)
62,675,712
I have two dataframes, `df1` and `df2`. I am trying to create a table from dataframe `df2` and insert it in the body of the email content. My current code is only taking the records matching `Number` = `123` and creating tables in the body, while the subject is iterated correctly and the email is created correctly. What is it that I am doing wrong in the iteration of the rows? I am attaching the code below.

```
df1

Subject                    Number
Hello David Bill is due    123
Hello Adam Bill is due     456
Hello James Bill is due    789

df2

Number  Month  Amount
123     Jan    100
123     March  220
123     June   212
456     Jan    100
456     Feb    230
789     June   400
789     July   650
```

**My code**

```
import os
import boto3
from os.path import basename
from email.mime.application import MIMEApplication
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.base import MIMEBase
from email.utils import formatdate, COMMASPACE
from tabulate import tabulate

def create_message(send_from, send_to, cc_to, subject, plain_text_body):
    message = MIMEMultipart("alternative", None, [MIMEText(html, 'html')])
    message['From'] = send_from
    if str(send_to).__contains__(","):
        message['To'] = COMMASPACE.join(send_to)
    else:
        message['To'] = send_to
    message['Cc'] = cc_to
    message['Date'] = formatdate(localtime=True)
    message['Subject'] = subject
    message.attach(MIMEText(plain_text_body, 'plain'))
    return message

def send_message(message):
    #print(message)
    client = boto3.client("ses", region_name='eu-west-1')
    response = client.send_raw_email(RawMessage={"Data": message.as_string()})

html = '''
<p>Dear receiver,</p>
<p>Please find below the details</p>
{table}
<p>&nbsp;</p>
<p>&nbsp;</p>
<p>Thanks and best regards</p>
<p>Rahul.</p>'''

for i, row in df1.iterrows():
    subject = row["Subject"]
    to_address = "abcd@yahoo.com"
    cc_list = "abel@yahoo.com"
    send_from = "jack@yahoo.com"
    df3 = df2[df2['Number'] == row['Number']]
    headers = ['Number', 'Month', 'Amount']
    html = html.format(table=tabulate(df3, headers, tablefmt="html"))
    message = create_message(send_from, to_address, cc_list, subject, html)
    send_message(message)
```

**Expected output**

```
Email1

Subject: Hello David Bill is due

Body of email
Please find below the details

Number  Month  Amount
123     Jan    100
123     March  220

Email2

Subject: Hello Adam Bill is due

Body of email
Please find below the details

Number  Month  Amount
456     Jan    100
456     Feb    230
```

Any help appreciated.
2020/07/01
[ "https://Stackoverflow.com/questions/62675712", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8888469/" ]
In your code you have an *html* variable, a multi-line string. But in your loop you assign new content to *html*, created by formatting it, thus overwriting the original content. This way, on the next turn of your loop, it no longer contains *{table}*, the placeholder for the table. Use **another** variable name for the reformatted *html* content.
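For illustration, here is a minimal, self-contained sketch of that fix (the variable names here are mine, not from the original code): the template stays in one variable and is formatted into a *different* variable inside the loop.

```python
# The template keeps its {table} placeholder across iterations.
template = """<p>Dear receiver,</p>
<p>Please find below the details</p>
{table}
<p>Thanks and best regards</p>"""

body = ""
for name in ["Alice", "Bob"]:
    # Format into a NEW variable instead of overwriting the template.
    body = template.format(table="<table><tr><td>" + name + "</td></tr></table>")

# The placeholder is still intact for the next turn of the loop.
print("{table}" in template)
```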
I tried to copy your situation using two separate CSV files, but it seems to work without any issue:

```
df1 = pd.read_csv('test1_iter.csv', delimiter=";")
print(df1)
df2 = pd.read_csv('test2_iter.csv', delimiter=";")
print(df2)

for i, row in df1.iterrows():
    df_aux = df2[df2['Number'] == row['Number']]
    print(df_aux)
```

And this is the result:

```
               Subject  Number
0  HelloDavidBillisdue     123
1   HelloAdamBillisdue     456
2  HelloJamesBillisdue     789

   Number  Month  Amount
0     123    Jan     100
1     123  March     220
2     123   June     212
3     456    Jan     100
4     456    Feb     230
5     789   June     400
6     789   July     650

   Number  Month  Amount
0     123    Jan     100
1     123  March     220
2     123   June     212

   Number Month  Amount
3     456   Jan     100
4     456   Feb     230

   Number Month  Amount
5     789  June     400
6     789  July     650
```

How are you loading your data? The problem might be there.
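As a side note (not part of the original answer): the loop above re-filters `df2` on every row of `df1`. A hypothetical alternative is to group `df2` by `Number` once up front and then look each group up, which avoids the repeated scans. A rough sketch with made-up sample data:

```python
import pandas as pd

df1 = pd.DataFrame({"Subject": ["Hello David Bill is due", "Hello Adam Bill is due"],
                    "Number": [123, 456]})
df2 = pd.DataFrame({"Number": [123, 123, 456],
                    "Month": ["Jan", "March", "Jan"],
                    "Amount": [100, 220, 100]})

# Build the per-Number sub-frames once instead of filtering df2 inside the loop.
groups = dict(tuple(df2.groupby("Number")))

for _, row in df1.iterrows():
    df_aux = groups.get(row["Number"])
    print(row["Subject"], "->", 0 if df_aux is None else len(df_aux))
```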
62,675,714
I am working on a little project of mine and I am having a problem. I have been looking at other posts, but they don't really help with my problem. When I add a new page to my Home folder (in this case my new page is called Index1; I am planning to change that name): <https://imgur.com/bxc61da> But when I add it and try to go to this page, I get the following error: <https://imgur.com/unorQHr>

"This localhost page can't be found. No webpage was found for the web address: https://localhost:5001/Home/Index1"

I am using MVC for the first time, so I don't know what to do. Can anyone help me? Thanks in advance! If you need to see a file like the Startup.cs file, just comment and I will edit the question.
2020/07/01
[ "https://Stackoverflow.com/questions/62675714", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13554547/" ]
To access the Index1 view you need a method called Index1 in your Home controller which returns a View(). First of all, I recommend trying to go to https://localhost:5001, which resolves to the Index view of the Home controller, just to check whether the problem is different from my approach. If that works, the example below should solve the problem. Check whether your Home controller has a method like this:

```
public IActionResult Index1()
{
    return View();
}
```
Have you tried without "Home" so "https://localhost:5001/Index1"?
62,675,714
I am working on a little project of mine and I am having a problem. I have been looking at other posts, but they don't really help with my problem. When I add a new page to my Home folder (in this case my new page is called Index1; I am planning to change that name): <https://imgur.com/bxc61da> But when I add it and try to go to this page, I get the following error: <https://imgur.com/unorQHr>

"This localhost page can't be found. No webpage was found for the web address: https://localhost:5001/Home/Index1"

I am using MVC for the first time, so I don't know what to do. Can anyone help me? Thanks in advance! If you need to see a file like the Startup.cs file, just comment and I will edit the question.
2020/07/01
[ "https://Stackoverflow.com/questions/62675714", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13554547/" ]
For me, it was because I accidentally deleted some .cshtml files. Check your Pages folder and make sure all the files are in order. If you accidentally deleted the default files, you can simply create a new project under the same solution, cut and paste the Pages folder into your original project, and everything should work.
Have you tried without "Home" so "https://localhost:5001/Index1"?
62,675,714
I am working on a little project of mine and I am having a problem. I have been looking at other posts, but they don't really help with my problem. When I add a new page to my Home folder (in this case my new page is called Index1; I am planning to change that name): <https://imgur.com/bxc61da> But when I add it and try to go to this page, I get the following error: <https://imgur.com/unorQHr>

"This localhost page can't be found. No webpage was found for the web address: https://localhost:5001/Home/Index1"

I am using MVC for the first time, so I don't know what to do. Can anyone help me? Thanks in advance! If you need to see a file like the Startup.cs file, just comment and I will edit the question.
2020/07/01
[ "https://Stackoverflow.com/questions/62675714", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13554547/" ]
To access the Index1 view you need a method called Index1 in your Home controller which returns a View(). First of all, I recommend trying to go to https://localhost:5001, which resolves to the Index view of the Home controller, just to check whether the problem is different from my approach. If that works, the example below should solve the problem. Check whether your Home controller has a method like this:

```
public IActionResult Index1()
{
    return View();
}
```
For me, it was because I accidentally deleted some .cshtml files. Check your Pages folder and make sure all the files are in order. If you accidentally deleted the default files, you can simply create a new project under the same solution, cut and paste the Pages folder into your original project, and everything should work.
62,675,718
I am new to Spark and Scala. I have a JSON array struct as input, similar to the schema below.

```
root
 |-- entity: struct (nullable = true)
 |    |-- email: string (nullable = true)
 |    |-- primaryAddresses: array (nullable = true)
 |    |    |-- element: struct (containsNull = true)
 |    |    |    |-- postalCode: string (nullable = true)
 |    |    |    |-- streetAddress: struct (nullable = true)
 |    |    |    |    |-- line1: string (nullable = true)
```

I flattened the array struct to the sample DataFrame below:

```
+-------------+--------------------------------------+--------------------------------------+
|entity.email |entity.primaryAddresses[0].postalCode |entity.primaryAddresses[1].postalCode |....
+-------------+--------------------------------------+--------------------------------------+
|a@b.com      |                                      |                                      |
|a@b.com      |                                      |12345                                 |
|a@b.com      |12345                                 |                                      |
|a@b.com      |0                                     |0                                     |
+-------------+--------------------------------------+--------------------------------------+
```

My end goal is to calculate presence/absence/zero counts for each of the columns for data quality metrics. But before I calculate the data quality metrics, I am looking for an approach to derive one new column for each of the array column elements, as below, such that:

* if all values of a particular array element are empty, then the derived column is empty for that element
* if at least one value is present for an array element, the element's presence is considered 1
* if all values of an array element are zero, I mark the element as zero (I calibrate this as presence = 1 and zero = 1 when I calculate data quality later)

Below is a sample intermediate dataframe that I am trying to achieve, with a column derived for each of the array elements. The original array elements are dropped.

```
+-------------+--------------------------------------+
|entity.email |entity.primaryAddresses.postalCode    |.....
+-------------+--------------------------------------+
|a@b.com      |                                      |
|a@b.com      |1                                     |
|a@b.com      |1                                     |
|a@b.com      |0                                     |
+-------------+--------------------------------------+
```

The elements of the input JSON records are dynamic and can change. To derive the columns for the array elements, I build a Scala map with the column name without the array index as the key (example: entity.primaryAddresses.postalCode) and a list of array elements to run rules on for that specific key as the value. I am looking for an approach to achieve the above intermediate dataframe.

One concern is that for certain input files, after I flatten the DataFrame, the column count exceeds 70k+. And since the record count is expected to be in the millions, I am wondering whether, instead of flattening the JSON, I should explode each of the elements for better performance. Appreciate any ideas. Thank you.
2020/07/01
[ "https://Stackoverflow.com/questions/62675718", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3351492/" ]
Created helper function & You can directly call `df.explodeColumns` on DataFrame. Below code will flatten multi level array & struct type columns. Use below function to extract columns & then apply your transformations on that. ``` scala> df.printSchema root |-- entity: struct (nullable = false) | |-- email: string (nullable = false) | |-- primaryAddresses: array (nullable = false) | | |-- element: struct (containsNull = false) | | | |-- postalCode: string (nullable = false) | | | |-- streetAddress: struct (nullable = false) | | | | |-- line1: string (nullable = false) ``` ``` import org.apache.spark.sql.{DataFrame, SparkSession} import org.apache.spark.sql.functions._ import org.apache.spark.sql.types._ import scala.annotation.tailrec import scala.util.Try implicit class DFHelpers(df: DataFrame) { def columns = { val dfColumns = df.columns.map(_.toLowerCase) df.schema.fields.flatMap { data => data match { case column if column.dataType.isInstanceOf[StructType] => { column.dataType.asInstanceOf[StructType].fields.map { field => val columnName = column.name val fieldName = field.name col(s"${columnName}.${fieldName}").as(s"${columnName}_${fieldName}") }.toList } case column => List(col(s"${column.name}")) } } } def flatten: DataFrame = { val empty = df.schema.filter(_.dataType.isInstanceOf[StructType]).isEmpty empty match { case false => df.select(columns: _*).flatten case _ => df } } def explodeColumns = { @tailrec def columns(cdf: DataFrame):DataFrame = cdf.schema.fields.filter(_.dataType.typeName == "array") match { case c if !c.isEmpty => columns(c.foldLeft(cdf)((dfa,field) => { dfa.withColumn(field.name,explode_outer(col(s"${field.name}"))).flatten })) case _ => cdf } columns(df.flatten) } } ``` ``` scala> df.explodeColumns.printSchema root |-- entity_email: string (nullable = false) |-- entity_primaryAddresses_postalCode: string (nullable = true) |-- entity_primaryAddresses_streetAddress_line1: string (nullable = true) ```
You can leverage a custom user-defined function to compute the data quality metrics. ``` val postalUdf = udf((postalCode0: Int, postalCode1: Int) => { //TODO implement your logic here }) ``` then use it to create a new dataframe column ``` df .withColumn("postalCode", postalUdf(col("postalCode_0"), col("postalCode_1"))) .show() ```
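Before wiring this into Spark, the presence/absence/zero rule from the question can be prototyped in plain Python and then ported into a UDF like the one sketched above. This is only a sketch under the assumption that values arrive as strings, with empty string or None meaning absent; `derive_element` is a hypothetical helper name:

```python
def derive_element(values):
    """Collapse the per-index values of one array element into a single
    quality flag: "" if everything is empty, "0" if every non-empty
    value is zero, "1" if at least one real value is present."""
    non_empty = [v for v in values if v not in ("", None)]
    if not non_empty:
        return ""   # all values empty -> derived column stays empty
    if all(v == "0" for v in non_empty):
        return "0"  # all values zero -> mark the element as zero
    return "1"      # at least one value present

# The four sample rows from the question (two postalCode array slots each).
rows = [
    ["", ""],
    ["", "12345"],
    ["12345", ""],
    ["0", "0"],
]
derived = [derive_element(r) for r in rows]
```

The three-way rule then maps one-to-one onto the body of the UDF marked `//TODO` in the answer.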
62,675,720
groupBy(array, fn) > > Returns an object of the elements of `array` keyed by the result of `fn` > on each element in array. The value at each key will be an array of > the corresponding elements, in the order they appeared in the initial > array. > > >
2020/07/01
[ "https://Stackoverflow.com/questions/62675720", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13846799/" ]
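The `groupBy(array, fn)` contract described in the question (key each element by the result of `fn`, keep values in encounter order) can be sketched in a few lines; Python here, purely to illustrate the behavior:

```python
def group_by(array, fn):
    """Return a dict keyed by fn(element); each value is the list of
    elements that produced that key, in their original order."""
    groups = {}
    for element in array:
        groups.setdefault(fn(element), []).append(element)
    return groups

# Group words by their length: keys are lengths, values keep order.
by_length = group_by(["one", "two", "three"], len)
```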
62,675,727
I am having difficulties setting up a loop in Oracle. I have a table where values are stored for several days. Now I want to get the average of these values for each day. I was attempting to set up a loop like this: ``` DECLARE BEGIN For iDay in 01.03.20, 02.03.20, 03.03.20 LOOP SELECT avg(values) FROM table WHERE date = 'iDay' END LOOP; END ```
2020/07/01
[ "https://Stackoverflow.com/questions/62675727", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10004828/" ]
You can simply get the average value per day using the following query (note the `GROUP BY`, which is required when selecting `DATE` alongside the aggregate): ``` SELECT DATE, AVG(values) FROM table WHERE DATE BETWEEN DATE '2020-03-01' AND DATE '2020-03-03' GROUP BY DATE; ``` Or if you want to use a loop, then use the query in the `FOR` loop's `IN` clause as follows: ``` SQL> DECLARE 2 BEGIN 3 FOR DATAS IN ( 4 SELECT DATE '2020-03-01' + LEVEL - 1 DT 5 FROM DUAL CONNECT BY 6 LEVEL <= DATE '2020-03-03' - DATE '2020-03-01' + 1 7 ) LOOP 8 DBMS_OUTPUT.PUT_LINE(DATAS.DT); 9 -- YOUR_CODE_HERE 10 END LOOP; 11 END; 12 / 01-MAR-20 02-MAR-20 03-MAR-20 PL/SQL procedure successfully completed. SQL> ```
One option would be using a dynamic query within PL/SQL (note that `INTO` and `USING` are clauses of the same `EXECUTE IMMEDIATE` statement, so there is a single semicolon at the end): ``` SQL> SET SERVEROUTPUT ON; SQL> DECLARE v_result NUMBER; BEGIN FOR iDay IN 0..2 LOOP EXECUTE IMMEDIATE 'SELECT AVG(values) FROM mytable WHERE mydate = :i_Day' INTO v_result USING iDay + DATE '2020-03-01'; DBMS_OUTPUT.PUT_LINE( v_result ); END LOOP; END; / ```
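Both answers boil down to one aggregation: one average per day over a date range. Here is the same computation modeled in plain Python with made-up table rows (`daily_averages` is an illustrative helper, not Oracle syntax):

```python
from collections import defaultdict
from datetime import date, timedelta

def daily_averages(rows):
    """rows: iterable of (day, value) pairs; returns {day: average}."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for day, value in rows:
        sums[day] += value
        counts[day] += 1
    return {day: sums[day] / counts[day] for day in sums}

# Hypothetical data for two of the days in the question's range.
start = date(2020, 3, 1)
rows = [(start, 10.0), (start, 20.0), (start + timedelta(days=1), 5.0)]
averages = daily_averages(rows)
```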
62,675,727
I am having difficulties setting up a loop in Oracle. I have a table where values are stored for several days. Now I want to get the average of these values for each day. I was attempting to set up a loop like this: ``` DECLARE BEGIN For iDay in 01.03.20, 02.03.20, 03.03.20 LOOP SELECT avg(values) FROM table WHERE date = 'iDay' END LOOP; END ```
2020/07/01
[ "https://Stackoverflow.com/questions/62675727", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10004828/" ]
You can simply get the average value per day using the following query (note the `GROUP BY`, which is required when selecting `DATE` alongside the aggregate): ``` SELECT DATE, AVG(values) FROM table WHERE DATE BETWEEN DATE '2020-03-01' AND DATE '2020-03-03' GROUP BY DATE; ``` Or if you want to use a loop, then use the query in the `FOR` loop's `IN` clause as follows: ``` SQL> DECLARE 2 BEGIN 3 FOR DATAS IN ( 4 SELECT DATE '2020-03-01' + LEVEL - 1 DT 5 FROM DUAL CONNECT BY 6 LEVEL <= DATE '2020-03-03' - DATE '2020-03-01' + 1 7 ) LOOP 8 DBMS_OUTPUT.PUT_LINE(DATAS.DT); 9 -- YOUR_CODE_HERE 10 END LOOP; 11 END; 12 / 01-MAR-20 02-MAR-20 03-MAR-20 PL/SQL procedure successfully completed. SQL> ```
Why not simply ``` SELECT "date", avg(values) FROM "table" WHERE "date" between DATE '2020-03-01' and DATE '2020-03-03' GROUP by "date"; ``` Note, `date` and `table` are reserved words, so the query will most likely not work without the quotes.
62,675,727
I am having difficulties setting up a loop in Oracle. I have a table where values are stored for several days. Now I want to get the average of these values for each day. I was attempting to set up a loop like this: ``` DECLARE BEGIN For iDay in 01.03.20, 02.03.20, 03.03.20 LOOP SELECT avg(values) FROM table WHERE date = 'iDay' END LOOP; END ```
2020/07/01
[ "https://Stackoverflow.com/questions/62675727", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10004828/" ]
One option would be using a dynamic query within PL/SQL (note that `INTO` and `USING` are clauses of the same `EXECUTE IMMEDIATE` statement, so there is a single semicolon at the end): ``` SQL> SET SERVEROUTPUT ON; SQL> DECLARE v_result NUMBER; BEGIN FOR iDay IN 0..2 LOOP EXECUTE IMMEDIATE 'SELECT AVG(values) FROM mytable WHERE mydate = :i_Day' INTO v_result USING iDay + DATE '2020-03-01'; DBMS_OUTPUT.PUT_LINE( v_result ); END LOOP; END; / ```
Why not simply ``` SELECT "date", avg(values) FROM "table" WHERE "date" between DATE '2020-03-01' and DATE '2020-03-03' GROUP by "date"; ``` Note, `date` and `table` are reserved words, so the query will most likely not work without the quotes.
62,675,738
Why when I compare those two I get `false` ``` const production = process.env.NODE_ENV === 'production'; ``` Even when I set `process.env.NODE_ENV` to production I still get `false` value. Why? Example: package.json SCRIPTS: ``` "scripts": { "start:prod": "set NODE_ENV=production && nodemon server.js" } ``` VS CODE ``` const production = process.env.NODE_ENV === 'production'; console.log(process.env.NODE_ENV); // production console.log(typeof process.env.NODE_ENV); // string console.log(typeof 'production'); // string console.log(production) // false ``` Why production returns false even though the values are exactly the same?
2020/07/01
[ "https://Stackoverflow.com/questions/62675738", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9290485/" ]
To set the Node environment in PowerShell, use `$env:NODE_ENV = 'production'`. In `cmd`, drop the spaces before and after `&&`: ``` "scripts": { "start:prod": "set NODE_ENV=production&&nodemon server.js" } ```
You should do something like this ``` "scripts": { "start:prod": "NODE_ENV=production nodemon server.js" } ``` In Bash, the `set` builtin is used to set or unset shell options and positional parameters, not environment variables. (On Windows `cmd`, `set` does set environment variables, but everything before `&&`, including the trailing space, becomes part of the value.)
62,675,738
Why when I compare those two I get `false` ``` const production = process.env.NODE_ENV === 'production'; ``` Even when I set `process.env.NODE_ENV` to production I still get `false` value. Why? Example: package.json SCRIPTS: ``` "scripts": { "start:prod": "set NODE_ENV=production && nodemon server.js" } ``` VS CODE ``` const production = process.env.NODE_ENV === 'production'; console.log(process.env.NODE_ENV); // production console.log(typeof process.env.NODE_ENV); // string console.log(typeof 'production'); // string console.log(production) // false ``` Why production returns false even though the values are exactly the same?
2020/07/01
[ "https://Stackoverflow.com/questions/62675738", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9290485/" ]
To set the Node environment in PowerShell, use `$env:NODE_ENV = 'production'`. In `cmd`, drop the spaces before and after `&&`: ``` "scripts": { "start:prod": "set NODE_ENV=production&&nodemon server.js" } ```
I met a similar issue and it took me a couple of hours to realize this. Your `package.json` has this script ``` "scripts": { "start:prod": "set NODE_ENV=production && nodemon server.js" } ``` However, you should note that your comparison returns false because the env value you set in the script contains the space here: `production &&`. To make it return true, you should remove that space so your script looks like this: ``` "scripts": { "start:prod": "set NODE_ENV=production&& nodemon server.js" } ``` Keep your comparison expression as it is; it is fine.
62,675,738
Why when I compare those two I get `false` ``` const production = process.env.NODE_ENV === 'production'; ``` Even when I set `process.env.NODE_ENV` to production I still get `false` value. Why? Example: package.json SCRIPTS: ``` "scripts": { "start:prod": "set NODE_ENV=production && nodemon server.js" } ``` VS CODE ``` const production = process.env.NODE_ENV === 'production'; console.log(process.env.NODE_ENV); // production console.log(typeof process.env.NODE_ENV); // string console.log(typeof 'production'); // string console.log(production) // false ``` Why production returns false even though the values are exactly the same?
2020/07/01
[ "https://Stackoverflow.com/questions/62675738", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9290485/" ]
To set the Node environment in PowerShell, use `$env:NODE_ENV = 'production'`. In `cmd`, drop the spaces before and after `&&`: ``` "scripts": { "start:prod": "set NODE_ENV=production&&nodemon server.js" } ```
It works if you use `includes` instead of the equality operator ``` process.env.NODE_ENV.includes('production') // true ```
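All of the answers point at the same root cause: with `set NODE_ENV=production && …`, cmd stores the space before `&&` as part of the value. A small sketch (in Python, just to model the string behavior) of why the strict comparison fails while an `includes`-style check and trimming succeed:

```python
# Simulate what `set NODE_ENV=production && ...` stores on Windows cmd:
# everything up to `&&`, including the trailing space, is the value.
node_env = "production "

strict_equal = node_env == "production"           # fails: trailing space
contains = "production" in node_env               # like String.includes()
trimmed_equal = node_env.strip() == "production"  # fix: trim before comparing
```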
62,675,745
``` <select id="selectedCountry"> <?php /*FETCH COUNTRY*/ $sql2 = "SELECT * FROM countries"; if($result2 = $conn -> query($sql2)) { while($row2 = mysqli_fetch_assoc($result2)) { ?> <option value="<?php echo strip_tags($row2['country_code']); ?>"> <?php echo strip_tags($row2['country_name']); ?> </option>> <?php } } ?> </select> ``` Inside the while loop there are empty lines and white space. I wrote that option in multiple lines to make it look better but it's outputting the white space. Is this what usually happens? If not, then what is causing this here? When I console.log the selected country name, it includes the whitespace. ``` $( document ).ready(function() { var s = $("#selectedCountry" ).find(":selected").text(); console.log(s); ``` [my code editor](https://i.stack.imgur.com/kX1l8.png) [whitespace visible in html source](https://i.stack.imgur.com/WAvxH.png) [whitespace in country name](https://i.stack.imgur.com/k76to.png)
2020/07/01
[ "https://Stackoverflow.com/questions/62675745", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13846777/" ]
I've never actually cared about how pretty my source code is. If you want to have complete control over the source code spacing, then one way is to print each option within php tags and manually add the newline (`\n`) character. You can even add a tab (`\t`) before each option tag if you are so inclined. If your values *actually* do have excess leading and trailing spaces, then just use `trim()`, but I am not completely convinced that this is the case. Don't mix procedural with object-oriented mysqli syntax. I recommend that you stay with object-oriented it is more concise and easier to work with. The mysqli result is an iterable object -- you don't need to keep calling fetch functions. Only select exactly what you intend to use from the database. Code: ([Demo](https://3v4l.org/W9NAf)) ``` <select id="selectedCountry"> <?php if ($result = $conn->query("SELECT country_code, country_name FROM countries")) { foreach ($result as $row) { printf( "\t<option value=\"%s\">%s</option>\n", trim(htmlentities($row['country_code'])), trim(htmlentities($row['country_name'])) ); } } ?> </select> ```
Well, you can remove those unwanted spaces using the PHP native function `preg_replace('/\s+/', ' ', $row2['country_name'])`. Here is the complete code: ``` <select id="selectedCountry"> <?php /*FETCH COUNTRY*/ $sql2 = "SELECT * FROM countries"; if($result2 = $conn -> query($sql2)) { while($row2 = mysqli_fetch_assoc($result2)) { //Strip the HTML and collapse unwanted whitespace $countryCode = trim(preg_replace('/\s+/', ' ', strip_tags($row2['country_code']))); $countryName = trim(preg_replace('/\s+/', ' ', strip_tags($row2['country_name']))); ?> <option value="<?php echo $countryCode; ?>"><?php echo $countryName; ?></option> <?php } } ?> </select> ``` Hope this helps!
62,675,745
``` <select id="selectedCountry"> <?php /*FETCH COUNTRY*/ $sql2 = "SELECT * FROM countries"; if($result2 = $conn -> query($sql2)) { while($row2 = mysqli_fetch_assoc($result2)) { ?> <option value="<?php echo strip_tags($row2['country_code']); ?>"> <?php echo strip_tags($row2['country_name']); ?> </option>> <?php } } ?> </select> ``` Inside the while loop there are empty lines and white space. I wrote that option in multiple lines to make it look better but it's outputting the white space. Is this what usually happens? If not, then what is causing this here? When I console.log the selected country name, it includes the whitespace. ``` $( document ).ready(function() { var s = $("#selectedCountry" ).find(":selected").text(); console.log(s); ``` [my code editor](https://i.stack.imgur.com/kX1l8.png) [whitespace visible in html source](https://i.stack.imgur.com/WAvxH.png) [whitespace in country name](https://i.stack.imgur.com/k76to.png)
2020/07/01
[ "https://Stackoverflow.com/questions/62675745", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13846777/" ]
Write the option in a single line. The white space will be gone. It is ussual behaviour to create white spaces when writing it on multiple lines.
Well, you can remove those unwanted spaces using the PHP native function `preg_replace('/\s+/', ' ', $row2['country_name'])`. Here is the complete code: ``` <select id="selectedCountry"> <?php /*FETCH COUNTRY*/ $sql2 = "SELECT * FROM countries"; if($result2 = $conn -> query($sql2)) { while($row2 = mysqli_fetch_assoc($result2)) { //Strip the HTML and collapse unwanted whitespace $countryCode = trim(preg_replace('/\s+/', ' ', strip_tags($row2['country_code']))); $countryName = trim(preg_replace('/\s+/', ' ', strip_tags($row2['country_name']))); ?> <option value="<?php echo $countryCode; ?>"><?php echo $countryName; ?></option> <?php } } ?> </select> ``` Hope this helps!
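Both answers describe the same underlying fact: the newlines and indentation between the `<option>` tags become part of the element's text node, which the browser collapses for display but `.text()` hands back raw. A sketch of the effect and the cleanup (the option text below is a made-up example; `" ".join(s.split())` is Python's rough equivalent of `trim()` plus `preg_replace('/\s+/', ' ', ...)`):

```python
# What jQuery's .text() would return when the option was written across
# several indented lines in the PHP template.
option_text = "\n            United States\n        "

# Collapse internal whitespace runs and trim the ends in one step.
displayed = " ".join(option_text.split())
```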
62,675,750
I am trying to prepare a query to insert new entries in a daily report into a separate table in a SQL Server database. I have two tables as follows: **Table 1** ``` id style_code location_code ------------------------------- 1 abcd IST 2 abcd DEL 3 wxyz DEL ``` **Table 2** ``` id style_code location_code -------------------------------- 1 abcd IST 2 wxyz IST 3 abcd DEL 4 wxyz DEL ``` I want to select all the rows in Table 2 where the combination of 'style\_code' and 'location\_code' DOES NOT exist in Table 1. In this particular example, that would mean returning row 2 from Table 2, as 'wxyz & IST' does not exist in Table 1. (There is no relationship between the id columns. Table 2 is a temporary table.) I have been trying to put a join into a Select query with NOT IN, but I cannot seem to get the query to work correctly. ``` SELECT * FROM [Table 2] WHERE style_code NOT IN (SELECT style_code FROM [Table 1] INNER JOIN [Table 2] ON [Table 2].location_code = [Table 1].location_code); ``` I have a beginner's understanding of SQL coding, but am no expert, and would appreciate any guidance.
2020/07/01
[ "https://Stackoverflow.com/questions/62675750", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13846678/" ]
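What the question asks for is an anti-join on the pair of columns, which in SQL is usually written with `NOT EXISTS` (or a `LEFT JOIN` checking for `NULL`) correlated on both `style_code` and `location_code`; a single-column `NOT IN` cannot express it, because the pair must match, not each column independently. The same logic modeled in plain Python with the sample data:

```python
# (style_code, location_code) pairs from Table 1.
table1 = [("abcd", "IST"), ("abcd", "DEL"), ("wxyz", "DEL")]
# (id, style_code, location_code) rows from Table 2.
table2 = [(1, "abcd", "IST"), (2, "wxyz", "IST"),
          (3, "abcd", "DEL"), (4, "wxyz", "DEL")]

# Anti-join: keep Table 2 rows whose pair never appears in Table 1.
existing = set(table1)
missing = [row for row in table2 if (row[1], row[2]) not in existing]
```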
62,675,750
I am trying to prepare a query to insert new entries in a daily report into a separate table in a SQL Server database. I have two tables as follows: **Table 1** ``` id style_code location_code ------------------------------- 1 abcd IST 2 abcd DEL 3 wxyz DEL ``` **Table 2** ``` id style_code location_code -------------------------------- 1 abcd IST 2 wxyz IST 3 abcd DEL 4 wxyz DEL ``` I want to select all the rows in Table 2 where the combination of 'style\_code' and 'location\_code' DOES NOT exist in Table 1. In this particular example, that would mean returning row 2 from Table 2, as 'wxyz & IST' does not exist in Table 1. (There is no relationship between the id columns. Table 2 is a temporary table.) I have been trying to put a join into a Select query with NOT IN, but I cannot seem to get the query to work correctly. ``` SELECT * FROM [Table 2] WHERE style_code NOT IN (SELECT style_code FROM [Table 1] INNER JOIN [Table 2] ON [Table 2].location_code = [Table 1].location_code); ``` I have a beginner's understanding of SQL coding, but am no expert, and would appreciate any guidance.
2020/07/01
[ "https://Stackoverflow.com/questions/62675750", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13846678/" ]
62,675,759
I have a collection of users. Each user has an array named `favoriteNames`. ``` root <- [Firestore] users <- [Collection] uid <- [Document] favoriteNames: ["Ana", "Jane", "Dave"] <- [Array] uid<- [Document] favoriteNames: ["Ana", "Merry", "John"] <- [Array] ``` I want to remove "Ana" from all documents. This is what I tried: ``` usersRef.whereArrayContains("favoriteNames", FieldValue.arrayRemove("Ana").get() ``` But it doesn't work. How can I remove "Ana" from all arrays?
2020/07/01
[ "https://Stackoverflow.com/questions/62675759", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9941990/" ]
It's not possible to do a SQL-like "update where" type query with Firestore. If you have multiple documents to update, you will have to update each one individually. You can use a transaction or batch write to do all the updates atomically, if needed. Minimally, what you will have to do here is: 1. Query for all the documents to update. This will involve doing an [array-contains query](https://firebase.google.com/docs/firestore/query-data/queries#array_membership) for all "Ana". ``` usersRef.whereArrayContains("favoriteNames", "Ana") ``` 2. Iterate the query results. 3. For each matching [DocumentSnapshot](https://firebase.google.com/docs/reference/android/com/google/firebase/firestore/DocumentSnapshot), get its DocumentReference and use `update()` with `FieldValue.arrayRemove("Ana")` ``` snapshot.reference.update("favoriteNames", FieldValue.arrayRemove("Ana")) ```
If I'm not getting your question wrong, you want to remove a specific item from an array, in this case "Ana". I've created a sample doing so; let me know if this is what you are looking for. ``` fun main() { val a = arrayListOf("Joan", "Skizo", "Ana") val b = a.minus("Ana") print(b) } ``` What it prints is > > [Joan, Skizo] > > > To do so, you can use [`minus`](https://kotlinlang.org/api/latest/jvm/stdlib/kotlin.collections/minus.html) > > Returns a list containing all elements of the original collection without the first occurrence of the given element. > > > Regarding this [`answer`](https://stackoverflow.com/questions/59694940/fieldvalue-arrayremove-to-remove-an-object-from-array-of-objects-based-on-prop), you'd have to read the document, remove what you want from the list, and then write it back. Also, this might help you with the `update` method: <https://googleapis.dev/nodejs/firestore/latest/FieldValue.html#arrayRemove-examples> This question might also help you: [remove array elements in firebase](https://stackoverflow.com/a/51983589/4385913)
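Since Firestore has no SQL-style "update where", the accepted flow is query, iterate, then update each document. A minimal in-memory model of that flow (plain Python standing in for the SDK; `remove_name_everywhere` is a hypothetical helper name, and like `FieldValue.arrayRemove` it drops every occurrence of the value):

```python
def remove_name_everywhere(documents, name):
    """Model of the Firestore flow: filter documents whose favoriteNames
    array contains `name` (array-contains), then apply an
    arrayRemove-style update to each matching document."""
    for doc in documents:
        names = doc.get("favoriteNames", [])
        if name in names:  # the array-contains query filter
            doc["favoriteNames"] = [n for n in names if n != name]

# The two sample user documents from the question.
docs = [
    {"favoriteNames": ["Ana", "Jane", "Dave"]},
    {"favoriteNames": ["Ana", "Merry", "John"]},
]
remove_name_everywhere(docs, "Ana")
```

With the real SDK, the per-document update inside the loop would be a write (optionally batched for atomicity), not an in-place list edit.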
62,675,763
A common thing I want to do is yank `"some text"` and then use it to change `"some other text"`. So I cursor to some text and then `yi"` to grab the `some text`. How do I now replace `some other text`? If I do `di"` then my copy paste register gets overwritten with `some other text`. I know I can use named registers, but my problem is my muscle memory has already done `yi"`. Is there any way I can override the default behaviour of either `y` or `d`?
2020/07/01
[ "https://Stackoverflow.com/questions/62675763", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1000753/" ]
"Unnamed" register in Vim is not a real register, but a pointer to a register last used. It's even implemented in Vim's source code as a pointer (or to be more precise, as an index into register array). So the yanked text does not get really overwritten by "delete" command, as "yank" by default uses register "zero", while "delete" uses either "one" or "minus". Hence you can always put the *last yanked* text by pressing `"``0``p`.
You could remap the `d` key to perform a deletion into the blackhole register `"_`: ``` nnoremap d "_d ``` You need to use one of the `noremap` versions such that it doesn't go into an infinite loop.
62,675,784
**Context** I'm trying to write a dataframe using PySpark to .csv. In other posts, I've seen users question this, but I need a .csv for business requirements. **What I've Tried** Almost everything. I've tried .repartition(), and I've tried increasing driver memory to 1T. I also tried caching my data first and then writing to csv (which is why the screenshots below indicate I'm trying to cache vs. write out to csv). Nothing seems to work. **What Happens** So, the UI does not show that any tasks fail. The job, whether it's writing to csv or caching first, gets close to completion and just hangs. **Screenshots** ![enter image description here](https://i.stack.imgur.com/n2HDW.png) Then, if I drill down into the job... ![enter image description here](https://i.stack.imgur.com/zHQq6.png) And if I drill down further ![enter image description here](https://i.stack.imgur.com/uM6hG.png) Finally, here are my settings: ![enter image description here](https://i.stack.imgur.com/4CkeK.png)
2020/07/01
[ "https://Stackoverflow.com/questions/62675784", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13846785/" ]
You don't need to cache the dataframe, as caching only helps when multiple actions are performed on it; if it is not required, I would suggest you remove the count as well. Now, while saving the dataframe, make sure all the executors are being used. If your dataframe is around 50 GB, make sure you are not creating many small files, as that will degrade performance. You can repartition the data before saving: if your dataframe has a column which divides it evenly, use that; otherwise find an optimum number to repartition by. ``` df.repartition(10, 'col').write.csv() # Or: you have 32 executors with 12 cores each, so repartition accordingly df.repartition(300).write.csv() ```
As you are using Databricks, can you try using the spark-csv package (`com.databricks.spark.csv`) and let us know: ``` from pyspark.sql import SQLContext sqlContext = SQLContext(sc) df = sqlContext.read.format('com.databricks.spark.csv').options(header='true', inferschema='true').load('file.csv') df.write.format('com.databricks.spark.csv').save('file_after_processing.csv') ```
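The "repartition according to your executors and cores" advice above is simple arithmetic; a sketch of the rule of thumb (`tasks_per_core` is an assumed tuning knob for this illustration, not a Spark setting):

```python
def suggested_partitions(executors, cores_per_executor, tasks_per_core=1):
    """Rough rule of thumb: at least one task per available core,
    optionally a small multiple to smooth out data skew."""
    return executors * cores_per_executor * tasks_per_core

# The cluster from the answer: 32 executors with 12 cores each.
partitions = suggested_partitions(32, 12)
```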
62,675,789
Does Databricks support submitting a SparkSQL job similar to Google Cloud Dataproc? The Databricks Job API, doesn't seem to have an option for submitting a Spark SQL job. Reference: <https://docs.databricks.com/dev-tools/api/latest/jobs.html> <https://cloud.google.com/dataproc/docs/reference/rest/v1beta2/projects.regions.jobs>
2020/07/01
[ "https://Stackoverflow.com/questions/62675789", "https://Stackoverflow.com", "https://Stackoverflow.com/users/268850/" ]
No, you submit a notebook. That notebook can be many things: a Python script, a Spark script, or, via the `%sql` magic, Spark SQL.
You can submit a Spark job on a Databricks cluster just like on Dataproc. Run your Spark job in a Scala context and create a jar for it. Submitting Spark SQL directly is not supported. To create a job, follow the official guide <https://docs.databricks.com/jobs.html> Also, to trigger the job using the REST API, you can send the run-now request described at <https://docs.databricks.com/dev-tools/api/latest/jobs.html#runs-submit>
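To make the run-now trigger concrete, here is a sketch that only assembles the request for the `/api/2.0/jobs/run-now` endpoint from the linked Jobs API docs; the host, token, and job id are placeholders, and the actual HTTP send (e.g. `requests.post`) is left out so nothing here depends on a live workspace:

```python
import json

def build_run_now_request(host, token, job_id, notebook_params=None):
    """Assemble the URL, headers, and JSON body of a Databricks Jobs API
    run-now call; sending it is left to the caller."""
    url = "https://{}/api/2.0/jobs/run-now".format(host)
    headers = {"Authorization": "Bearer {}".format(token)}
    payload = {"job_id": job_id}
    if notebook_params:
        payload["notebook_params"] = notebook_params
    return url, headers, json.dumps(payload)

url, headers, body = build_run_now_request(
    "example.cloud.databricks.com", "TOKEN", 42, {"env": "prod"})
```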
62,675,817
I checked the log cat in android studio and these are the errors it displays ``` Caused by: java.lang.NullPointerException: Attempt to invoke virtual method 'android.hardware.Camera$Parameters android.hardware.Camera.getParameters()'on a null object reference at .easyCamView.takePicture(easyCamView.java:146) ``` Actual Code: ``` mPictureFileName = fileName; Parameters params = mCamera.getParameters(); ``` ``` java.lang.RuntimeException: Fail to connect to camera service at .easyCamView.surfaceCreated(easyCamView.java:191) ``` ``` Intent openIntent = new Intent(); openIntent.setAction(Intent.ACTION_VIEW); mContext.startActivity(openIntent); mCamera = Camera.open(); ``` at easyCamActivity.captureScreen(easyCamActivity.java:179) `mCameraView.takePicture(fileName);` I am able to compile the code but it crashes whenever I run it. if I can get some help that would be great.
2020/07/01
[ "https://Stackoverflow.com/questions/62675817", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3858803/" ]
62,675,859
``` (def mine '(a b c)) (def yours '(a b)) (remove yours mine) ; should return (c) ``` I was advised to use remove in another thread but it doesn't work. Anyone can advise?
2020/07/01
[ "https://Stackoverflow.com/questions/62675859", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3710760/" ]
Assuming you want to remove from `mine` every element that also exists in `yours`, there are a few approaches. You can convert the lists to sets and use [difference](https://clojuredocs.org/clojure.set/difference), like this: ``` (require '[clojure.set :refer [difference]]) (def mine '(a b c)) (def yours '(a b)) (difference (set mine) (set yours)) ;; => #{c} ``` But this does not preserve the order of elements that remain from `mine`. If you want to preserve the order, you can instead use [remove](https://clojuredocs.org/clojure.core/remove). To do that, we first define a `yours?` predicate that will return true for an element iff that element occurs in `yours`: ``` (def yours-set (set yours)) (def yours? (partial contains? yours-set)) ``` If `yours` only contains *truthy* values, i.e. values that are neither `nil` nor `false`, we could define it as `(def yours? (set yours))`, since a set implements the [IFn](https://clojure.github.io/clojure/javadoc/clojure/lang/IFn.html) interface, but this approach will not work if `yours` contains elements such as `nil` or `false`, as @amalloy pointed out. ``` (remove yours? mine) ;; => (c) ``` The above code means that we remove every element from `mine` for which `yours?` evaluates to true. Yet another, more verbose approach is to use the opposite of `remove`, that is [filter](https://clojuredocs.org/clojure.core/filter), passing in the complement of the predicate. ``` (filter (complement yours?) mine) ;; => (c) ``` but I see no gain from the more verbose approach here. If you know that you want a vector as a result, you can instead use [into](https://clojuredocs.org/clojure.core/into), passing in a `remove`ing transducer as argument. ``` (into [] (remove yours?) mine) ;; => [c] ```
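For readers coming from other languages: the order-preserving removal that `remove` gives here (as opposed to plain set difference) can be sketched in Python. This is an illustration of the idea only, not part of the Clojure answer:

```python
# Order-preserving "difference": keep elements of `mine` not present in `yours`.
mine = ["a", "b", "c"]
yours = ["a", "b"]

# Build a set once for O(1) membership tests, like the Clojure set-as-predicate.
yours_set = set(yours)
remaining = [x for x in mine if x not in yours_set]
print(remaining)  # ['c']
```

Unlike converting both sides to sets, this keeps the original order of `mine` and preserves duplicates that are not in `yours`.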
`contains?` is a horrible misnomer in Clojure in my view. (No, it does NOT check whether an element is contained in a collection! It just checks whether, for a key (an index or map key), there exists a value in the collection.) Use `some` in combination with `#{}` to check for membership in a collection! ``` (remove #(some #{%} yours) mine) ;; => (c) ``` Or: ``` (defn in? [coll x] (some #{x} coll)) (remove #(in? yours %) mine) ;; => (c) ```
62,675,874
In Xcode 12 / iOS 14, OSLog gained support for string interpolation (yay!). But it's still not possible to attach hooks to easily log to other channels, such as Crashlytics. So I figured I'll just make a simple wrapper and pass on the parameters. However, there seems to be some magic happening regarding string interpolation. The new `Logger` struct takes an `OSLogMessage` as a parameter and can be used as follows: ``` let someVar = "some var" let logger = Logger(subsystem: "com.my.app", category: "UI") logger.error("some message") logger.error("some message with default var: \(someVar)") logger.error("some message with private var: \(someVar, privacy: .private)") logger.error("some message with private var: \(someVar, privacy: .private(mask: .hash))") logger.error("some message with public var: \(someVar, privacy: .public)") ``` --- Wrapping the new Logger struct ============================== So let's just wrap this in a struct: ``` struct MyLogger { let logger = Logger(subsystem: "com.my.app", category: "UI") func error(_ message: OSLogMessage) { logger.error(message) } } ``` Same signature, but unfortunately, the compiler won't allow this: ``` ERROR: Argument must be a string interpolation ``` Furthermore, trying to call my struct also causes a weirdly specific compiler error: ``` let logger = MyLogger() let value = "value" logger.error("Some log message \(value, privacy: .public)") ``` Yields: ``` String interpolation cannot be used in this context; if you are calling an os_log function, try a different overload ``` Directly calling `os_log(_: OSLogMessage)` instead of the new struct gives the same result. Is there a way to work around this? Am I missing something?
2020/07/01
[ "https://Stackoverflow.com/questions/62675874", "https://Stackoverflow.com", "https://Stackoverflow.com/users/666211/" ]
The logging APIs use special compiler features to evaluate the privacy level at compile time. As the diagnostic says, you must use a static (i.e., known at compile time) method or property of ‘OSLogPrivacy’; it can’t be a variable that’s evaluated at run time. The implication is that you can’t create your own wrapper for these APIs without using compiler-internal features. [Apple Forums](https://forums.swift.org/t/argument-must-be-a-static-method-or-property-of-oslogprivacy/38441)
I just use something like this to work around the limitation: ``` struct MyLogger { let logger = Logger(subsystem: "com.my.app", category: "UI") func error(_ message: String) { logger.error("\(message)") } } ```
62,675,880
I use bootstrap4 to create a mobile first responsive layout with elements with different positions and have various rows and columns set up that I adjust for the different categories of screen size. I am wondering if there are pure bootstrap styling classes that would allow me to apply and remove position (absolute, fixed, relative) for the different sizes, without having to create my own css media queries. For example, if I wanted to have a container that on mobile becomes fixed and on desktop becomes absolute only on medium size screen... ``` <div id="back-to-top" class="position-fixed position-sm-absolute position-md-relative"> <a href="#"> <i class="fa fa-chevron-up"></i> </a> </div> ``` I'm also using the wordpress with css so can't get into the sass easily. Any suggestions?
2020/07/01
[ "https://Stackoverflow.com/questions/62675880", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13846802/" ]
For anyone who comes across this at a later date: these are the two maps needed to make the loop function (and yes, this works for both BS-4 & BS-5). ``` $grid-breakpoints: ( xs: 0, sm: 576px, md: 768px, lg: 992px, xl: 1200px, xxl: 1400px ); $positions: ( static : static, absolute : absolute, relative : relative, fixed : fixed, sticky : sticky, ); ``` No, unfortunately there are no original Bootstrap media classes for applying and removing position. Your own rules are needed to override the original Bootstrap 4 position class definitions. Same issue here: <https://getbootstrap.com/> To enable this functionality, here is the code snippet for SCSS: ``` @each $breakpoint in map-keys($grid-breakpoints) { @include media-breakpoint-up($breakpoint) { $infix: breakpoint-infix($breakpoint, $grid-breakpoints); // Common values @each $position in $positions { .position#{$infix}-#{$position} { position: $position !important; } } } } ``` and the compiled version in CSS: ``` @media (min-width: 576px) { .position-sm-static { position: static !important; } .position-sm-relative { position: relative !important; } .position-sm-absolute { position: absolute !important; } .position-sm-fixed { position: fixed !important; } .position-sm-sticky { position: sticky !important; } } @media (min-width: 768px) { .position-md-static { position: static !important; } .position-md-relative { position: relative !important; } .position-md-absolute { position: absolute !important; } .position-md-fixed { position: fixed !important; } .position-md-sticky { position: sticky !important; } } @media (min-width: 992px) { .position-lg-static { position: static !important; } .position-lg-relative { position: relative !important; } .position-lg-absolute { position: absolute !important; } .position-lg-fixed { position: fixed !important; } .position-lg-sticky { position: sticky !important; } } @media (min-width: 1200px) { .position-xl-static { position: static !important; } .position-xl-relative { position: relative !important; } .position-xl-absolute { position: absolute !important; } .position-xl-fixed { position: fixed !important; } .position-xl-sticky { position: sticky !important; } } ```
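As a sanity check on the answer above, the compiled CSS can be generated mechanically from the breakpoint map, just like the SCSS loop does. Here is a small Python sketch (an illustration, not part of the original answer; breakpoint values copied from it):

```python
# Generate responsive position utility classes, mirroring the SCSS @each loop.
# "xs" is omitted because it corresponds to no media query in Bootstrap.
breakpoints = {"sm": "576px", "md": "768px", "lg": "992px", "xl": "1200px"}
positions = ["static", "relative", "absolute", "fixed", "sticky"]

rules = []
for infix, min_width in breakpoints.items():
    body = " ".join(
        f".position-{infix}-{pos} {{ position: {pos} !important; }}"
        for pos in positions
    )
    rules.append(f"@media (min-width: {min_width}) {{ {body} }}")

css = "\n".join(rules)
print(css)
```

This produces one `@media` block per breakpoint, each containing the five `!important` position utilities shown in the compiled CSS above.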
Based on @Max Kaps's answer, I tested with BS5 and it is working as well. ``` @media (min-width: 576px) { .position-sm-static { position: static !important; } .position-sm-relative { position: relative !important; } .position-sm-absolute { position: absolute !important; } .position-sm-fixed { position: fixed !important; } .position-sm-sticky { position: sticky !important; } } @media (min-width: 768px) { .position-md-static { position: static !important; } .position-md-relative { position: relative !important; } .position-md-absolute { position: absolute !important; } .position-md-fixed { position: fixed !important; } .position-md-sticky { position: sticky !important; } } @media (min-width: 992px) { .position-lg-static { position: static !important; } .position-lg-relative { position: relative !important; } .position-lg-absolute { position: absolute !important; } .position-lg-fixed { position: fixed !important; } .position-lg-sticky { position: sticky !important; } } @media (min-width: 1200px) { .position-xl-static { position: static !important; } .position-xl-relative { position: relative !important; } .position-xl-absolute { position: absolute !important; } .position-xl-fixed { position: fixed !important; } .position-xl-sticky { position: sticky !important; } } ```
62,675,880
I use bootstrap4 to create a mobile first responsive layout with elements with different positions and have various rows and columns set up that I adjust for the different categories of screen size. I am wondering if there are pure bootstrap styling classes that would allow me to apply and remove position (absolute, fixed, relative) for the different sizes, without having to create my own css media queries. For example, if I wanted to have a container that on mobile becomes fixed and on desktop becomes absolute only on medium size screen... ``` <div id="back-to-top" class="position-fixed position-sm-absolute position-md-relative"> <a href="#"> <i class="fa fa-chevron-up"></i> </a> </div> ``` I'm also using the wordpress with css so can't get into the sass easily. Any suggestions?
2020/07/01
[ "https://Stackoverflow.com/questions/62675880", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13846802/" ]
For anyone who comes across this at a later date: these are the two maps needed to make the loop function (and yes, this works for both BS-4 & BS-5). ``` $grid-breakpoints: ( xs: 0, sm: 576px, md: 768px, lg: 992px, xl: 1200px, xxl: 1400px ); $positions: ( static : static, absolute : absolute, relative : relative, fixed : fixed, sticky : sticky, ); ``` No, unfortunately there are no original Bootstrap media classes for applying and removing position. Your own rules are needed to override the original Bootstrap 4 position class definitions. Same issue here: <https://getbootstrap.com/> To enable this functionality, here is the code snippet for SCSS: ``` @each $breakpoint in map-keys($grid-breakpoints) { @include media-breakpoint-up($breakpoint) { $infix: breakpoint-infix($breakpoint, $grid-breakpoints); // Common values @each $position in $positions { .position#{$infix}-#{$position} { position: $position !important; } } } } ``` and the compiled version in CSS: ``` @media (min-width: 576px) { .position-sm-static { position: static !important; } .position-sm-relative { position: relative !important; } .position-sm-absolute { position: absolute !important; } .position-sm-fixed { position: fixed !important; } .position-sm-sticky { position: sticky !important; } } @media (min-width: 768px) { .position-md-static { position: static !important; } .position-md-relative { position: relative !important; } .position-md-absolute { position: absolute !important; } .position-md-fixed { position: fixed !important; } .position-md-sticky { position: sticky !important; } } @media (min-width: 992px) { .position-lg-static { position: static !important; } .position-lg-relative { position: relative !important; } .position-lg-absolute { position: absolute !important; } .position-lg-fixed { position: fixed !important; } .position-lg-sticky { position: sticky !important; } } @media (min-width: 1200px) { .position-xl-static { position: static !important; } .position-xl-relative { position: relative !important; } .position-xl-absolute { position: absolute !important; } .position-xl-fixed { position: fixed !important; } .position-xl-sticky { position: sticky !important; } } ```
Based on @Max Kaps's answer: if you're using SCSS, you should also add the `$positions` variable! ```scss $positions: ( fixed, absolute, relative, sticky, static ); ```
62,675,880
I use bootstrap4 to create a mobile first responsive layout with elements with different positions and have various rows and columns set up that I adjust for the different categories of screen size. I am wondering if there are pure bootstrap styling classes that would allow me to apply and remove position (absolute, fixed, relative) for the different sizes, without having to create my own css media queries. For example, if I wanted to have a container that on mobile becomes fixed and on desktop becomes absolute only on medium size screen... ``` <div id="back-to-top" class="position-fixed position-sm-absolute position-md-relative"> <a href="#"> <i class="fa fa-chevron-up"></i> </a> </div> ``` I'm also using the wordpress with css so can't get into the sass easily. Any suggestions?
2020/07/01
[ "https://Stackoverflow.com/questions/62675880", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13846802/" ]
Based on @Max Kaps's answer, I tested with BS5 and it is working as well. ``` @media (min-width: 576px) { .position-sm-static { position: static !important; } .position-sm-relative { position: relative !important; } .position-sm-absolute { position: absolute !important; } .position-sm-fixed { position: fixed !important; } .position-sm-sticky { position: sticky !important; } } @media (min-width: 768px) { .position-md-static { position: static !important; } .position-md-relative { position: relative !important; } .position-md-absolute { position: absolute !important; } .position-md-fixed { position: fixed !important; } .position-md-sticky { position: sticky !important; } } @media (min-width: 992px) { .position-lg-static { position: static !important; } .position-lg-relative { position: relative !important; } .position-lg-absolute { position: absolute !important; } .position-lg-fixed { position: fixed !important; } .position-lg-sticky { position: sticky !important; } } @media (min-width: 1200px) { .position-xl-static { position: static !important; } .position-xl-relative { position: relative !important; } .position-xl-absolute { position: absolute !important; } .position-xl-fixed { position: fixed !important; } .position-xl-sticky { position: sticky !important; } } ```
Based on @Max Kaps's answer: if you're using SCSS, you should also add the `$positions` variable! ```scss $positions: ( fixed, absolute, relative, sticky, static ); ```
62,675,926
**These are the fields in my DB** * `partamento int` * `codigocurso int` * `diurno int` * `contacto int` * `pos_laboral int` * `contacto2 int` * `proc_por int` I’ve been trying to fix this error since yesterday but I cannot figure out where the error is. ``` foreach ($diurno as $userId) { $data .= "(".$id.",".$grdid.",".$userId.",".$contacto.",".$pos_laboral.",".$contacto2.",".$idd.")"; } $data = rtrim($data, ','); $sql = "insert into cursosprogramas (departamento, codigocurso, diurno, contacto, pos_laboral, contacto2, proc_por) values (".$data.");"; echo $sql; ``` **the error** > > insert into cursosprogramas (departamento, codigocurso, diurno, contacto, pos\_laboral, contacto2, proc\_por) values ((100,120,7,646,5,363,2)(100,120,4,646,5,363,2));Query failed. > > >
2020/07/01
[ "https://Stackoverflow.com/questions/62675926", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12891375/" ]
You are missing a comma between data sets, and you also don't need parentheses around them all. You are already using `rtrim()` to remove the very last comma, but you aren't actually adding the comma at the end of the data set. ``` foreach ($diurno as $userId) { //add a comma at the end //note you can write variables directly into a string that is wrapped with double quotes $data .= "('$id', '$grdid', '$userId', '$contacto', '$pos_laboral', '$contacto2', '$idd'),"; } //this gets rid of the very last comma in the string $data = rtrim($data, ','); //remove parentheses around `$data` $sql = "insert into cursosprogramas (departamento, codigocurso, diurno, contacto, pos_laboral, contacto2, proc_por) values {$data};"; echo $sql; ``` NOTE: [Little Bobby](http://bobby-tables.com/) says [this code MAY be at risk for SQL Injection Attacks](https://stackoverflow.com/q/60174/) depending on how the variables inside of `$data` are created. Learn about [Prepared Statements](https://en.wikipedia.org/wiki/Prepared_statement) with [parameterized queries](https://stackoverflow.com/a/4712113/5827005).
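The same comma-joining logic, combined with the prepared-statement advice above, is language-agnostic. Here is a hedged Python sketch of a parameterized multi-row INSERT (column and table names taken from the question; the `?` placeholder style and the `cursor.execute` call are DB-API assumptions that vary by driver):

```python
# Build a parameterized multi-row INSERT: one "(?, ?, ...)" tuple per row,
# joined by commas, with all values flattened into one parameter list.
columns = ["departamento", "codigocurso", "diurno", "contacto",
           "pos_laboral", "contacto2", "proc_por"]
rows = [
    (100, 120, 7, 646, 5, 363, 2),
    (100, 120, 4, 646, 5, 363, 2),
]

placeholders = ", ".join(
    "(" + ", ".join("?" * len(columns)) + ")" for _ in rows
)
sql = f"insert into cursosprogramas ({', '.join(columns)}) values {placeholders}"
params = [value for row in rows for value in row]

print(sql)
print(params)
# cursor.execute(sql, params) would then run this against a DB-API connection,
# keeping the values out of the SQL string entirely.
```

Note how the commas sit *between* the value tuples, which is exactly what the original string concatenation was missing.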
You need a comma between the value tuples. Try a comma at the end of line two like this: ``` foreach ($diurno as $userId) { $data .= "(".$id.",".$grdid.",".$userId.",".$contacto.",".$pos_laboral.",".$contacto2.",".$idd."),"; } $data = rtrim($data, ','); $sql = "insert into cursosprogramas (departamento, codigocurso, diurno, contacto, pos_laboral, contacto2, proc_por) values (".$data.");"; echo $sql; ```
62,675,929
I am using the following PowerShell script to extract (to csv) manager names from the "Manager" user attribute. ``` # This script exports the Manager name of the employees in the TXT file. # users.txt file - contains a simple list of user names ( samaccount-names ) Get-Content D:\powershell\permmisions\Users.txt | Foreach-Object { Get-ADUser -Identity $_ -Properties Manager | Select-Object name, Manager | Export-Csv D:\Powershell\ADuserinformation\Export-Managers-of-specific-users.csv -Append } ``` The challenge I am facing is that in the exported CSV file, the list "skips" blank value fields when there is no manager set for the user, so a row is not created where the manager is missing. What I would like is for the script to enter a character ( ~ ) where the value is blank. That way, a row will be created for the blank manager value in the CSV file. Please help. Thanks all in advance.
2020/07/01
[ "https://Stackoverflow.com/questions/62675929", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10787849/" ]
Note: At least the `Name` property should exist on all AD users retrieved, so you *would* get a row even for users where `Manager` is empty, but with an empty `Manager` column. If you do need to deal with possibly not all users named in `Users.txt` actually existing, see [Theo's helpful answer](https://stackoverflow.com/a/62676755/45375). The simplest approach is to use a [calculated property](https://stackoverflow.com/a/39861920/45375): ``` Get-ADUser -Identity $_ -Properties Manager | Select-Object Name, @{ Name='Manager'; Expression={ if ($_.Manager) { $_.Manager } else { '~' } } } ``` Note: * It is common to abbreviate the key names of the hashtable that defines the calculated property to `n` and `e`. * The `if` statement takes advantage of the fact that an *empty string* (or `$null`) evaluates to `$false` in a Boolean context; for an overview of PowerShell's implicit to-Boolean conversion, see the bottom section of [this answer](https://stackoverflow.com/a/53108138/45375). --- In *PowerShell [Core] 7.0 or above*, you could additionally take advantage of the [ternary operator](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_Operators#ternary-operator--if-true--if-false) (`<condition> ? <valueIfTrue> : <valueIfFalse>`) to further shorten the command: ``` # PSv7+ Get-ADUser -Identity $_ -Properties Manager | Select-Object Name, @{ n='Manager'; e={ $_.Manager ? $_.Manager : '~' } } ``` Note: If `$_.Manager` were to return `$null` rather than the *empty string* (`''`) if no manager is assigned, you could use `??`, the PSv7+ [null-coalescing operator](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_Operators) instead: `$_.Manager ?? '~'`
Not concise at all, but this allows you to insert more properties of interest in your report, and does some error-checking if the user listed in your input file does not exist: ``` $report = foreach ($account in (Get-Content D:\powershell\permmisions\Users.txt)) { $user = Get-ADUser -Filter "SamAccountName -eq '$account'" -Properties Manager -ErrorAction SilentlyContinue if ($user) { if (!$user.Manager) { $mgr = '~' } else { # the Manager property is the DistinguishedName for the manager. # if you want that in your report, just do $mgr = $user.Manager # if you want, for instance, the Name of that manager in your report, # comment out the above line and do this instead: # $mgr = (Get-ADUser -Identity $user.Manager).Name } # now output an object [PsCustomObject]@{ UserName = $user.Name Manager = $mgr } } else { Write-Warning "User '$account' does not exist" } } # output on screen $report | Format-Table -AutoSize # output to CSV file $report | Export-Csv -Path 'D:\Powershell\ADuserinformation\Export-Managers-of-specific-users.csv' -NoTypeInformation ```
62,675,936
I have the following function to check the number of missing values for different columns: ``` shape = df.shape df_miss = df.isnull().sum() x = df_miss.index y = df_miss z = [] for i in x: if y[i] > 0: z.append(i) print(z) plt.bar(z,y[z]) ``` Now I would like to use the same function to find the cases/observations with missing values.
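One way to extend the column-wise check in the question to rows (a sketch assuming pandas is available; the toy DataFrame here is illustrative, not from the original thread):

```python
import pandas as pd

# Toy data: NaN stands in for missing values.
df = pd.DataFrame({"a": [1, None, 3], "b": [4, 5, None], "c": [7, 8, 9]})

# Columns with at least one missing value (what the question's loop computes).
cols_with_missing = df.columns[df.isnull().any()].tolist()

# Rows (cases/observations) with at least one missing value:
# any(axis=1) collapses the boolean mask across columns instead of rows.
rows_with_missing = df[df.isnull().any(axis=1)]

print(cols_with_missing)                  # ['a', 'b']
print(rows_with_missing.index.tolist())   # [1, 2]
```

The only change from the column version is the `axis=1` argument, which asks "is anything missing in this row?" rather than "in this column?".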
2020/07/01
[ "https://Stackoverflow.com/questions/62675936", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11738437/" ]
Note: At least the `Name` property should exist on all AD users retrieved, so you *would* get a row even for users where `Manager` is empty, but with an empty `Manager` column. If you do need to deal with possibly not all users named in `Users.txt` actually existing, see [Theo's helpful answer](https://stackoverflow.com/a/62676755/45375). The simplest approach is to use a [calculated property](https://stackoverflow.com/a/39861920/45375): ``` Get-ADUser -Identity $_ -Properties Manager | Select-Object Name, @{ Name='Manager'; Expression={ if ($_.Manager) { $_.Manager } else { '~' } } } ``` Note: * It is common to abbreviate the key names of the hashtable that defines the calculated property to `n` and `e`. * The `if` statement takes advantage of the fact that an *empty string* (or `$null`) evaluates to `$false` in a Boolean context; for an overview of PowerShell's implicit to-Boolean conversion, see the bottom section of [this answer](https://stackoverflow.com/a/53108138/45375). --- In *PowerShell [Core] 7.0 or above*, you could additionally take advantage of the [ternary operator](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_Operators#ternary-operator--if-true--if-false) (`<condition> ? <valueIfTrue> : <valueIfFalse>`) to further shorten the command: ``` # PSv7+ Get-ADUser -Identity $_ -Properties Manager | Select-Object Name, @{ n='Manager'; e={ $_.Manager ? $_.Manager : '~' } } ``` Note: If `$_.Manager` were to return `$null` rather than the *empty string* (`''`) if no manager is assigned, you could use `??`, the PSv7+ [null-coalescing operator](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_Operators) instead: `$_.Manager ?? '~'`
Not concise at all, but this allows you to insert more properties of interest in your report, and does some error-checking if the user listed in your input file does not exist: ``` $report = foreach ($account in (Get-Content D:\powershell\permmisions\Users.txt)) { $user = Get-ADUser -Filter "SamAccountName -eq '$account'" -Properties Manager -ErrorAction SilentlyContinue if ($user) { if (!$user.Manager) { $mgr = '~' } else { # the Manager property is the DistinguishedName for the manager. # if you want that in your report, just do $mgr = $user.Manager # if you want, for instance, the Name of that manager in your report, # comment out the above line and do this instead: # $mgr = (Get-ADUser -Identity $user.Manager).Name } # now output an object [PsCustomObject]@{ UserName = $user.Name Manager = $mgr } } else { Write-Warning "User '$account' does not exist" } } # output on screen $report | Format-Table -AutoSize # output to CSV file $report | Export-Csv -Path 'D:\Powershell\ADuserinformation\Export-Managers-of-specific-users.csv' -NoTypeInformation ```
62,675,964
I want to create some objects that can delegate some work to its nested subobjects, but it pushes me into the circular dependency issues. Using of `#ifndef` directive works fine if I have only two classes (`ClassA` and `ClassB`), but this pattern doesn't work when `ClassC` is added. Is it possible to achieve such type of structure as shown in the code below and don't get an "undefined type" errors? **ClassA.h** ``` #pragma once #include "CoreMinimal.h" #include "ClassB.h" class UClassB; #include ClassA.generated.h UCLASS() class PROJ_API UClassA : public UObject { GENERATED_BODY() UPROPERTY() UClassB* ObjB; public: void DoDelegation() { auto* ThisInstance = this; ObjB = NewObject<UClassB>(); ObjB->DoWorkClassB(ThisInstance); } } ``` **ClassB.h** ``` #pragma once #include "CoreMinimal.h" //works nice #ifndef CLASSA_H #define CLASSA_H #include "ClassA.h" class UClassA; #endif //trying to use similar pattern which works with ClassA.h and ClassB.h #include "ClassC.h" class UClassC; #include ClassB.generated.h UCLASS() class PROJ_API UClassB : public UObject { GENERATED_BODY() UPROPERTY() UClassC* ObjC; public: void DoDelegation() { auto* ThisInstance = this; ObjC = NewObject<UClassC>(); ObjC->DoWorkClassC(ThisInstance); } void DoWorkClassB(UClassA* &ObjectClassA) { // do some stuff with ObjectClassA } } ``` **ClassC.h** ```cpp #pragma once #include "CoreMinimal.h" //trying to use similar pattern which works with ClassA.h and ClassB.h //got "undefined type" error #ifndef CLASSB_H #define CLASSB_H #include "ClassB.h" class UClassB; #endif #include ClassC.generated.h UCLASS() class PROJ_API UClassC : public UObject { GENERATED_BODY() public: void DoWorkClassC(UClassB* &ObjectClassB) { // do some stuff with ObjectClassB } } ```
2020/07/01
[ "https://Stackoverflow.com/questions/62675964", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11939304/" ]
> > creating classes that refer to each other > > > > > Is it possible to achieve such type of structure as shown in the code below and don't get an "undefined type" errors? > > > Certainly. Referring to an object (with a pointer for example) of some class only requires a declaration of the other class, not its definition. A simple solution is to declare both classes before defining either of them.
It's still not fully clear to me, but at least I understood why forward declaration didn't work the first time. Inline implementation of methods that do something with such mutually referencing classes is strongly not recommended. It compiles fine if the function is implemented in the .cpp file.
62,675,995
I have a strange issue with my python monitoring script:- I've written a script with a number of alerts for any server. In that I have a function that gathers Network bytes/sec in and out. Now the issue is when I print the alert outside my mail function it prints the current output, but for some reason when it triggers the mail for the alert, the mail body is empty. If I trigger the mail with another alert which isn't in the Network function it works properly. Also is there a way to get smtplib to use port 587 instead of 465, any pointers on formatting the alert would be appreciated too. Please find my script below:- ``` #!/usr/bin/env python3 #Module psutil needs to be installed via pip3 first. #Python script to Monitor Server Resources. import time import psutil import smtplib from email.message import EmailMessage project_and_instance_name = 'test-stage' #Edit the name of the project name and environment sender = '<sender email>' #Email Address of the sender receivers = ['recepient email'] #comma seperated list of recipients enclosed in '' cpu_thresh = 50.0 cpu_pct = psutil.cpu_percent(interval=1) if cpu_pct >= cpu_thresh: cpu_alert = "CPU Warning, CPU at ",cpu_pct, "percent" else: cpu_alert = "" mem = psutil.virtual_memory() mem_thresh = 1024 * 1024 * 1024 #change the end value to choose the amount of MB if mem_thresh >= mem.available: mem_alert = "Memory Usage Warning only", round((mem.available /1024 /1024), 2), "MB available" else: mem_alert = "" partition1 = '/' disk1 = psutil.disk_usage(partition1) disk_thresh = 85.0 if disk_thresh <= disk1[3]: disk_alert = f"Root volume usage warning {disk1[3]} % used" else: disk_alert = "" def net_usage(inf = "eth0"): #change the inf variable according to the interface global net_in_alert global net_out_alert net_in_ps1 = psutil.net_io_counters(pernic=True, nowrap=True)[inf] net_in_1 = net_in_ps1.bytes_recv net_out_1 = net_in_ps1.bytes_sent time.sleep(1) net_in_ps2 = psutil.net_io_counters(pernic=True, nowrap=True)[inf] 
net_in_2 = net_in_ps2.bytes_recv net_out_2 = net_in_ps2.bytes_sent net_in_res = round((net_in_2 - net_in_1) /1024 /1024, 2) net_out_res = round((net_out_2 - net_out_1) /1024 /1024, 2) net_in_thresh = 1.5 net_out_thresh = 1.5 if net_in_res >= net_in_thresh: net_in_alert = f"Current net-usage:IN: {net_in_res} MB/s" else: net_in_alert = "" if net_out_res <= net_out_thresh: net_out_alert = f"Current net-usage:OUT: {net_out_res} MB/s" else: net_out_alert = "" net_usage() message_list = [] if cpu_alert == "" : pass else: message_list.append(cpu_alert) if mem_alert == "" : pass else: message_list.append(mem_alert) if disk_alert == "" : pass else: message_list.append(disk_alert) if net_in_alert == "" : pass else: message_list.append(net_in_alert) if net_out_alert == "" : pass else: message_list.append(net_out_alert) msg = '\n'.join(message_list) print(msg) def alerts(): server = smtplib.SMTP_SSL('smtp.gmail.com', 465) server.login(sender, "<password>") server.sendmail(sender,receivers,msg) if msg == "": pass else: alerts() ```
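Regarding the port-587 part of the question above: the usual pattern is a plain `SMTP` connection upgraded with `starttls()` instead of `SMTP_SSL` on 465, and building a proper `EmailMessage` so the body and subject are not lost. A hedged sketch (placeholder addresses and credentials; not an answer from this thread):

```python
import smtplib
from email.message import EmailMessage

def build_alert(sender, receivers, body, subject="Server alert"):
    """Build a well-formed message. An empty body with raw sendmail() often
    means the text was parsed as headers, so use EmailMessage instead."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = ", ".join(receivers)
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def send_alert(msg, password):
    # Port 587 uses a plain connection upgraded via STARTTLS,
    # unlike the implicit TLS that port 465 (SMTP_SSL) uses.
    with smtplib.SMTP("smtp.gmail.com", 587) as server:
        server.starttls()
        server.login(msg["From"], password)
        server.send_message(msg)

msg = build_alert("sender@example.com", ["ops@example.com"],
                  "CPU Warning, CPU at 75 percent")
print(msg["Subject"], "->", msg.get_content().strip())
```

`send_message` takes the addresses and body from the message object itself, which avoids the empty-body symptom described in the question.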
2020/07/01
[ "https://Stackoverflow.com/questions/62675995", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13284980/" ]
> > creating classes that refer to each other > > > > > Is it possible to achieve such type of structure as shown in the code below and don't get an "undefined type" errors? > > > Certainly. Referring to an object (with a pointer for example) of some class only requires a declaration of the other class, not its definition. A simple solution is to declare both classes before defining either of them.
It's still not fully clear to me, but at least I understood why forward declaration didn't work the first time. Inline implementation of methods that do something with such mutually referencing classes is strongly not recommended. It compiles fine if the function is implemented in the .cpp file.
62,675,999
For example, if I have a column as given below by calling and showing the CSV in Pyspark ``` +--------+ | Names| +--------+ |Rahul | |Ravi | |Raghu | |Romeo | +--------+ ``` and I specify in my function Length = 2 and Maxsplit = 3, then I have to get the results as ``` +----------+-----------+----------+ |Col_1 |Col_2 |Col_3 | +----------+-----------+----------+ | Ra | hu | l | | Ra | vi | Null | | Ra | gh | u | | Ro | me | o | +----------+-----------+----------+ ``` Similarly, in Pyspark with Length = 3 and Maxsplit = 2, it should provide me the output such as ``` +----------+-----------+ |Col_1 |Col_2 | +----------+-----------+ | Rah | ul | | Rav | i | | Rag | hu | | Rom | eo | +----------+-----------+ ``` This is how it should look. Thank you
2020/07/01
[ "https://Stackoverflow.com/questions/62675999", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11949957/" ]
Another way to go about this. Should be faster than any looping or udf solution. ``` from pyspark.sql import functions as F def split(df,length,maxsplit): return df.withColumn('Names',F.split("Names","(?<=\\G{})".format('.'*length)))\ .select(*((F.col("Names")[x]).alias("Col_"+str(x+1)) for x in range(0,maxsplit))) split(df,3,2).show() #+-----+-----+ #|Col_1|Col_2| #+-----+-----+ #| Rah| ul| #| Rav| i| #| Rag| hu| #| Rom| eo| #+-----+-----+ split(df,2,3).show() #+-----+-----+-----+ #|col_1|col_2|col_3| #+-----+-----+-----+ #| Ra| hu| l| #| Ra| vi| | #| Ra| gh| u| #| Ro| me| o| #+-----+-----+-----+ ```
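The `(?<=\G…)` lookbehind trick above splits a string after every `length` characters. The chunking behaviour can be sanity-checked outside Spark with plain Python (an illustration only; `None` stands in for the null columns Spark produces for short names):

```python
import re

def chunks(name, length, maxsplit):
    # Split into fixed-length pieces, then pad with None up to maxsplit,
    # mirroring the Col_1..Col_n columns in the answer above.
    parts = re.findall(f".{{1,{length}}}", name)
    parts = parts[:maxsplit]
    return parts + [None] * (maxsplit - len(parts))

print(chunks("Rahul", 2, 3))  # ['Ra', 'hu', 'l']
print(chunks("Ravi", 2, 3))   # ['Ra', 'vi', None]
print(chunks("Rahul", 3, 2))  # ['Rah', 'ul']
```

This matches the expected tables in the question for both the Length = 2 / Maxsplit = 3 and Length = 3 / Maxsplit = 2 cases.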
Try this,

```
import pyspark.sql.functions as F

tst = sqlContext.createDataFrame([("Raghu",1),("Ravi",2),("Rahul",3)],schema=["Name","val"])

def fn(split,max_n,tst):
    for i in range(max_n):
        tst_loop = tst.withColumn("coln"+str(i),F.substring(F.col("Name"),(i*split)+1,split))
        tst = tst_loop
    return(tst)

tst_res = fn(3,2,tst)
```

The for loop can also be replaced by a list comprehension or reduce, but I felt that in your case a for loop looked neater; they have the same physical plan anyway. The results:

```
+-----+---+-----+-----+
| Name|val|coln0|coln1|
+-----+---+-----+-----+
|Raghu|  1|  Rag|   hu|
| Ravi|  2|  Rav|    i|
|Rahul|  3|  Rah|   ul|
+-----+---+-----+-----+
```
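As a quick cross-check of both answers in this row, here is a minimal pure-Python sketch of the same fixed-width chunk-and-pad logic (Spark-free; the function name and the `None` padding standing in for the Null column are my own illustration, not part of either answer):

```python
def fixed_width_split(name, length, maxsplit):
    # Cut the string into consecutive chunks of `length` characters,
    # then pad (or truncate) the result to exactly `maxsplit` columns.
    chunks = [name[i:i + length] for i in range(0, len(name), length)]
    chunks = chunks[:maxsplit]                          # never more than maxsplit columns
    return chunks + [None] * (maxsplit - len(chunks))   # pad short names with None (Null)

for n in ["Rahul", "Ravi", "Raghu", "Romeo"]:
    print(fixed_width_split(n, 2, 3))
# ['Ra', 'hu', 'l'] / ['Ra', 'vi', None] / ['Ra', 'gh', 'u'] / ['Ro', 'me', 'o']
```

The same helper with `length=3, maxsplit=2` reproduces the second expected table from the question.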
62,675,999
For example, if I have a column as given below by calling and showing the CSV in PySpark

```
+--------+
|   Names|
+--------+
|Rahul   |
|Ravi    |
|Raghu   |
|Romeo   |
+--------+
```

if I specify in my function

Length = 2
Maxsplit = 3

then I have to get the results as

```
+----------+-----------+----------+
|Col_1     |Col_2      |Col_3     |
+----------+-----------+----------+
| Ra       | hu        | l        |
| Ra       | vi        | Null     |
| Ra       | gh        | u        |
| Ro       | me        | o        |
+----------+-----------+----------+
```

Similarly, in PySpark with

Length = 3
Maxsplit = 2

it should provide me the output such as

```
+----------+-----------+
|Col_1     |Col_2      |
+----------+-----------+
| Rah      | ul        |
| Rav      | i         |
| Rag      | hu        |
| Rom      | eo        |
+----------+-----------+
```

This is how it should look. Thank you
2020/07/01
[ "https://Stackoverflow.com/questions/62675999", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11949957/" ]
Another way to go about this. Should be faster than any looping or udf solution.

```
from pyspark.sql import functions as F

def split(df,length,maxsplit):
    return df.withColumn('Names',F.split("Names","(?<=\\G{})".format('.'*length)))\
             .select(*((F.col("Names")[x]).alias("Col_"+str(x+1)) for x in range(0,maxsplit)))

split(df,3,2).show()
#+-----+-----+
#|Col_1|Col_2|
#+-----+-----+
#|  Rah|   ul|
#|  Rav|    i|
#|  Rag|   hu|
#|  Rom|   eo|
#+-----+-----+

split(df,2,3).show()
#+-----+-----+-----+
#|col_1|col_2|col_3|
#+-----+-----+-----+
#|   Ra|   hu|    l|
#|   Ra|   vi|     |
#|   Ra|   gh|    u|
#|   Ro|   me|    o|
#+-----+-----+-----+
```
Try this:

```
import pyspark.sql.functions as f

def split(data,length,maxSplit):
    start = 1
    for i in range(0,maxSplit):
        data = data.withColumn(f'col_{start}-{start+length-1}',f.substring('channel',start,length))
        start += length
    return data

df = split(data,3,2)
df.show()

+--------+----+-------+-------+
| channel|type|col_1-3|col_4-6|
+--------+----+-------+-------+
|     web|   0|    web|       |
|     web|   1|    web|       |
|     web|   2|    web|       |
| twitter|   0|    twi|    tte|
| twitter|   1|    twi|    tte|
|facebook|   0|    fac|    ebo|
|facebook|   1|    fac|    ebo|
|facebook|   2|    fac|    ebo|
+--------+----+-------+-------+
```
62,675,999
For example, if I have a column as given below by calling and showing the CSV in PySpark

```
+--------+
|   Names|
+--------+
|Rahul   |
|Ravi    |
|Raghu   |
|Romeo   |
+--------+
```

if I specify in my function

Length = 2
Maxsplit = 3

then I have to get the results as

```
+----------+-----------+----------+
|Col_1     |Col_2      |Col_3     |
+----------+-----------+----------+
| Ra       | hu        | l        |
| Ra       | vi        | Null     |
| Ra       | gh        | u        |
| Ro       | me        | o        |
+----------+-----------+----------+
```

Similarly, in PySpark with

Length = 3
Maxsplit = 2

it should provide me the output such as

```
+----------+-----------+
|Col_1     |Col_2      |
+----------+-----------+
| Rah      | ul        |
| Rav      | i         |
| Rag      | hu        |
| Rom      | eo        |
+----------+-----------+
```

This is how it should look. Thank you
2020/07/01
[ "https://Stackoverflow.com/questions/62675999", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11949957/" ]
Another way to go about this. Should be faster than any looping or udf solution.

```
from pyspark.sql import functions as F

def split(df,length,maxsplit):
    return df.withColumn('Names',F.split("Names","(?<=\\G{})".format('.'*length)))\
             .select(*((F.col("Names")[x]).alias("Col_"+str(x+1)) for x in range(0,maxsplit)))

split(df,3,2).show()
#+-----+-----+
#|Col_1|Col_2|
#+-----+-----+
#|  Rah|   ul|
#|  Rav|    i|
#|  Rag|   hu|
#|  Rom|   eo|
#+-----+-----+

split(df,2,3).show()
#+-----+-----+-----+
#|col_1|col_2|col_3|
#+-----+-----+-----+
#|   Ra|   hu|    l|
#|   Ra|   vi|     |
#|   Ra|   gh|    u|
#|   Ro|   me|    o|
#+-----+-----+-----+
```
Perhaps this is useful-

Load the test data
------------------

> Note: written in scala

```scala
val Length = 2
val Maxsplit = 3
val df = Seq("Rahul", "Ravi", "Raghu", "Romeo").toDF("Names")
df.show(false)
/**
  * +-----+
  * |Names|
  * +-----+
  * |Rahul|
  * |Ravi |
  * |Raghu|
  * |Romeo|
  * +-----+
  */
```

split the string col as per the length and offset
-------------------------------------------------

```scala
val schema = StructType(Range(1, Maxsplit + 1).map(f => StructField(s"Col_$f", StringType)))
val split = udf((str: String, length: Int, maxSplit: Int) => {
  val splits = str.toCharArray.grouped(length).map(_.mkString).toArray
  RowFactory.create(splits ++ Array.fill(maxSplit - splits.length)(null): _*)
}, schema)
val p = df
  .withColumn("x", split($"Names", lit(Length), lit(Maxsplit)))
  .selectExpr("x.*")
p.show(false)
p.printSchema()
/**
  * +-----+-----+-----+
  * |Col_1|Col_2|Col_3|
  * +-----+-----+-----+
  * |Ra   |hu   |l    |
  * |Ra   |vi   |null |
  * |Ra   |gh   |u    |
  * |Ro   |me   |o    |
  * +-----+-----+-----+
  *
  * root
  *  |-- Col_1: string (nullable = true)
  *  |-- Col_2: string (nullable = true)
  *  |-- Col_3: string (nullable = true)
  */
```

`Dataset[Row]` -> `Dataset[Array[String]]`
------------------------------------------

```scala
val x = df.map(r => {
  val splits = r.getString(0).toCharArray.grouped(Length).map(_.mkString).toArray
  splits ++ Array.fill(Maxsplit - splits.length)(null)
})
x.show(false)
x.printSchema()
/**
  * +-----------+
  * |value      |
  * +-----------+
  * |[Ra, hu, l]|
  * |[Ra, vi,]  |
  * |[Ra, gh, u]|
  * |[Ro, me, o]|
  * +-----------+
  *
  * root
  *  |-- value: array (nullable = true)
  *  |    |-- element: string (containsNull = true)
  */
```
62,676,011
I am in need to use the AADHAR API. Basically, we are developing a mobile app for our health care client and in it we are doing patient registration. Here we want to do simple AADHAR authentication using an OTP request: the patient enters his/her AADHAR number and in return gets an OTP, which is submitted to the AADHAR Auth API; once authentication is successful we fetch the patient's eKYC details and use them in the app.

Research done so far: I have gone through the

AADHAR OTP Request API - <https://uidai.gov.in/images/resource/aadhaar_otp_request_api_2_5.pdf>

AADHAR Auth API - <https://uidai.gov.in/images/resource/aadhaar_authentication_api_2_5.pdf>

Basically, I was trying to call both the OTP Request and Auth APIs using Postman, but that did not work. While exploring the APIs, it is mentioned that we need to pass `<Signature>Digital signature of AUA</Signature>` as part of the request body, both in the Auth and OTP Request APIs as given in the above links. We then contacted <https://www.emudhradigital.com/> and obtained a test certificate in .pfx file format.

Now I am stuck on how to use this test certificate to call those APIs. Also, there is no clear documentation on the UIDAI website or anywhere else on how to use and integrate them. I used Postman while doing research, but that does not work.

Any hints or help will be appreciated.

Thanks, Mahendra
2020/07/01
[ "https://Stackoverflow.com/questions/62676011", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13114775/" ]
Another way to go about this. Should be faster than any looping or udf solution.

```
from pyspark.sql import functions as F

def split(df,length,maxsplit):
    return df.withColumn('Names',F.split("Names","(?<=\\G{})".format('.'*length)))\
             .select(*((F.col("Names")[x]).alias("Col_"+str(x+1)) for x in range(0,maxsplit)))

split(df,3,2).show()
#+-----+-----+
#|Col_1|Col_2|
#+-----+-----+
#|  Rah|   ul|
#|  Rav|    i|
#|  Rag|   hu|
#|  Rom|   eo|
#+-----+-----+

split(df,2,3).show()
#+-----+-----+-----+
#|col_1|col_2|col_3|
#+-----+-----+-----+
#|   Ra|   hu|    l|
#|   Ra|   vi|     |
#|   Ra|   gh|    u|
#|   Ro|   me|    o|
#+-----+-----+-----+
```
Try this,

```
import pyspark.sql.functions as F

tst = sqlContext.createDataFrame([("Raghu",1),("Ravi",2),("Rahul",3)],schema=["Name","val"])

def fn(split,max_n,tst):
    for i in range(max_n):
        tst_loop = tst.withColumn("coln"+str(i),F.substring(F.col("Name"),(i*split)+1,split))
        tst = tst_loop
    return(tst)

tst_res = fn(3,2,tst)
```

The for loop can also be replaced by a list comprehension or reduce, but I felt that in your case a for loop looked neater; they have the same physical plan anyway. The results:

```
+-----+---+-----+-----+
| Name|val|coln0|coln1|
+-----+---+-----+-----+
|Raghu|  1|  Rag|   hu|
| Ravi|  2|  Rav|    i|
|Rahul|  3|  Rah|   ul|
+-----+---+-----+-----+
```
62,676,011
I am in need to use the AADHAR API. Basically, we are developing a mobile app for our health care client and in it we are doing patient registration. Here we want to do simple AADHAR authentication using an OTP request: the patient enters his/her AADHAR number and in return gets an OTP, which is submitted to the AADHAR Auth API; once authentication is successful we fetch the patient's eKYC details and use them in the app.

Research done so far: I have gone through the

AADHAR OTP Request API - <https://uidai.gov.in/images/resource/aadhaar_otp_request_api_2_5.pdf>

AADHAR Auth API - <https://uidai.gov.in/images/resource/aadhaar_authentication_api_2_5.pdf>

Basically, I was trying to call both the OTP Request and Auth APIs using Postman, but that did not work. While exploring the APIs, it is mentioned that we need to pass `<Signature>Digital signature of AUA</Signature>` as part of the request body, both in the Auth and OTP Request APIs as given in the above links. We then contacted <https://www.emudhradigital.com/> and obtained a test certificate in .pfx file format.

Now I am stuck on how to use this test certificate to call those APIs. Also, there is no clear documentation on the UIDAI website or anywhere else on how to use and integrate them. I used Postman while doing research, but that does not work.

Any hints or help will be appreciated.

Thanks, Mahendra
2020/07/01
[ "https://Stackoverflow.com/questions/62676011", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13114775/" ]
Another way to go about this. Should be faster than any looping or udf solution.

```
from pyspark.sql import functions as F

def split(df,length,maxsplit):
    return df.withColumn('Names',F.split("Names","(?<=\\G{})".format('.'*length)))\
             .select(*((F.col("Names")[x]).alias("Col_"+str(x+1)) for x in range(0,maxsplit)))

split(df,3,2).show()
#+-----+-----+
#|Col_1|Col_2|
#+-----+-----+
#|  Rah|   ul|
#|  Rav|    i|
#|  Rag|   hu|
#|  Rom|   eo|
#+-----+-----+

split(df,2,3).show()
#+-----+-----+-----+
#|col_1|col_2|col_3|
#+-----+-----+-----+
#|   Ra|   hu|    l|
#|   Ra|   vi|     |
#|   Ra|   gh|    u|
#|   Ro|   me|    o|
#+-----+-----+-----+
```
Try this:

```
import pyspark.sql.functions as f

def split(data,length,maxSplit):
    start = 1
    for i in range(0,maxSplit):
        data = data.withColumn(f'col_{start}-{start+length-1}',f.substring('channel',start,length))
        start += length
    return data

df = split(data,3,2)
df.show()

+--------+----+-------+-------+
| channel|type|col_1-3|col_4-6|
+--------+----+-------+-------+
|     web|   0|    web|       |
|     web|   1|    web|       |
|     web|   2|    web|       |
| twitter|   0|    twi|    tte|
| twitter|   1|    twi|    tte|
|facebook|   0|    fac|    ebo|
|facebook|   1|    fac|    ebo|
|facebook|   2|    fac|    ebo|
+--------+----+-------+-------+
```
62,676,011
I am in need to use the AADHAR API. Basically, we are developing a mobile app for our health care client and in it we are doing patient registration. Here we want to do simple AADHAR authentication using an OTP request: the patient enters his/her AADHAR number and in return gets an OTP, which is submitted to the AADHAR Auth API; once authentication is successful we fetch the patient's eKYC details and use them in the app.

Research done so far: I have gone through the

AADHAR OTP Request API - <https://uidai.gov.in/images/resource/aadhaar_otp_request_api_2_5.pdf>

AADHAR Auth API - <https://uidai.gov.in/images/resource/aadhaar_authentication_api_2_5.pdf>

Basically, I was trying to call both the OTP Request and Auth APIs using Postman, but that did not work. While exploring the APIs, it is mentioned that we need to pass `<Signature>Digital signature of AUA</Signature>` as part of the request body, both in the Auth and OTP Request APIs as given in the above links. We then contacted <https://www.emudhradigital.com/> and obtained a test certificate in .pfx file format.

Now I am stuck on how to use this test certificate to call those APIs. Also, there is no clear documentation on the UIDAI website or anywhere else on how to use and integrate them. I used Postman while doing research, but that does not work.

Any hints or help will be appreciated.

Thanks, Mahendra
2020/07/01
[ "https://Stackoverflow.com/questions/62676011", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13114775/" ]
Another way to go about this. Should be faster than any looping or udf solution.

```
from pyspark.sql import functions as F

def split(df,length,maxsplit):
    return df.withColumn('Names',F.split("Names","(?<=\\G{})".format('.'*length)))\
             .select(*((F.col("Names")[x]).alias("Col_"+str(x+1)) for x in range(0,maxsplit)))

split(df,3,2).show()
#+-----+-----+
#|Col_1|Col_2|
#+-----+-----+
#|  Rah|   ul|
#|  Rav|    i|
#|  Rag|   hu|
#|  Rom|   eo|
#+-----+-----+

split(df,2,3).show()
#+-----+-----+-----+
#|col_1|col_2|col_3|
#+-----+-----+-----+
#|   Ra|   hu|    l|
#|   Ra|   vi|     |
#|   Ra|   gh|    u|
#|   Ro|   me|    o|
#+-----+-----+-----+
```
Perhaps this is useful-

Load the test data
------------------

> Note: written in scala

```scala
val Length = 2
val Maxsplit = 3
val df = Seq("Rahul", "Ravi", "Raghu", "Romeo").toDF("Names")
df.show(false)
/**
  * +-----+
  * |Names|
  * +-----+
  * |Rahul|
  * |Ravi |
  * |Raghu|
  * |Romeo|
  * +-----+
  */
```

split the string col as per the length and offset
-------------------------------------------------

```scala
val schema = StructType(Range(1, Maxsplit + 1).map(f => StructField(s"Col_$f", StringType)))
val split = udf((str: String, length: Int, maxSplit: Int) => {
  val splits = str.toCharArray.grouped(length).map(_.mkString).toArray
  RowFactory.create(splits ++ Array.fill(maxSplit - splits.length)(null): _*)
}, schema)
val p = df
  .withColumn("x", split($"Names", lit(Length), lit(Maxsplit)))
  .selectExpr("x.*")
p.show(false)
p.printSchema()
/**
  * +-----+-----+-----+
  * |Col_1|Col_2|Col_3|
  * +-----+-----+-----+
  * |Ra   |hu   |l    |
  * |Ra   |vi   |null |
  * |Ra   |gh   |u    |
  * |Ro   |me   |o    |
  * +-----+-----+-----+
  *
  * root
  *  |-- Col_1: string (nullable = true)
  *  |-- Col_2: string (nullable = true)
  *  |-- Col_3: string (nullable = true)
  */
```

`Dataset[Row]` -> `Dataset[Array[String]]`
------------------------------------------

```scala
val x = df.map(r => {
  val splits = r.getString(0).toCharArray.grouped(Length).map(_.mkString).toArray
  splits ++ Array.fill(Maxsplit - splits.length)(null)
})
x.show(false)
x.printSchema()
/**
  * +-----------+
  * |value      |
  * +-----------+
  * |[Ra, hu, l]|
  * |[Ra, vi,]  |
  * |[Ra, gh, u]|
  * |[Ro, me, o]|
  * +-----------+
  *
  * root
  *  |-- value: array (nullable = true)
  *  |    |-- element: string (containsNull = true)
  */
```
62,676,013
I wish I could parse torrent files automatically via R. I tried to use the [R-bencode](https://github.com/vspinu/R-bencode) package:

```
library('bencode')
test_torrent <- readLines('/home/user/Downloads/some_file.torrent', encoding = "UTF-8")
decoded_torrent <- bencode::bdecode(test_torrent)
```

but ran into the error:

```
Error in bencode::bdecode(test_torrent) : input string terminated unexpectedly
```

In addition, if I try to parse just part of this file, `bdecode('\xe7\xc9\xe0\b\xfbD-\xd8\xd6(\xe2\004>\x9c\xda\005Zar\x8c\xdfV\x88\022t\xe4գi]\xcf')`, I get

```
Error in bdecode("\xe7\xc9\xe0\b\xfbD-\xd8\xd6(\xe2\004>\x9c\xda\005Zar\x8c\xdfV\x88\022t\xe4գi]\xcf") :
  Wrong encoding '�'. Allowed values are i, l, d or a digit.
```

Maybe there are other ways to do it in R? Or perhaps I can embed code from another language in an R script? Thanks in advance!
2020/07/01
[ "https://Stackoverflow.com/questions/62676013", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12594996/" ]
Another way to go about this. Should be faster than any looping or udf solution.

```
from pyspark.sql import functions as F

def split(df,length,maxsplit):
    return df.withColumn('Names',F.split("Names","(?<=\\G{})".format('.'*length)))\
             .select(*((F.col("Names")[x]).alias("Col_"+str(x+1)) for x in range(0,maxsplit)))

split(df,3,2).show()
#+-----+-----+
#|Col_1|Col_2|
#+-----+-----+
#|  Rah|   ul|
#|  Rav|    i|
#|  Rag|   hu|
#|  Rom|   eo|
#+-----+-----+

split(df,2,3).show()
#+-----+-----+-----+
#|col_1|col_2|col_3|
#+-----+-----+-----+
#|   Ra|   hu|    l|
#|   Ra|   vi|     |
#|   Ra|   gh|    u|
#|   Ro|   me|    o|
#+-----+-----+-----+
```
Try this,

```
import pyspark.sql.functions as F

tst = sqlContext.createDataFrame([("Raghu",1),("Ravi",2),("Rahul",3)],schema=["Name","val"])

def fn(split,max_n,tst):
    for i in range(max_n):
        tst_loop = tst.withColumn("coln"+str(i),F.substring(F.col("Name"),(i*split)+1,split))
        tst = tst_loop
    return(tst)

tst_res = fn(3,2,tst)
```

The for loop can also be replaced by a list comprehension or reduce, but I felt that in your case a for loop looked neater; they have the same physical plan anyway. The results:

```
+-----+---+-----+-----+
| Name|val|coln0|coln1|
+-----+---+-----+-----+
|Raghu|  1|  Rag|   hu|
| Ravi|  2|  Rav|    i|
|Rahul|  3|  Rah|   ul|
+-----+---+-----+-----+
```
62,676,013
I wish I could parse torrent files automatically via R. I tried to use the [R-bencode](https://github.com/vspinu/R-bencode) package:

```
library('bencode')
test_torrent <- readLines('/home/user/Downloads/some_file.torrent', encoding = "UTF-8")
decoded_torrent <- bencode::bdecode(test_torrent)
```

but ran into the error:

```
Error in bencode::bdecode(test_torrent) : input string terminated unexpectedly
```

In addition, if I try to parse just part of this file, `bdecode('\xe7\xc9\xe0\b\xfbD-\xd8\xd6(\xe2\004>\x9c\xda\005Zar\x8c\xdfV\x88\022t\xe4գi]\xcf')`, I get

```
Error in bdecode("\xe7\xc9\xe0\b\xfbD-\xd8\xd6(\xe2\004>\x9c\xda\005Zar\x8c\xdfV\x88\022t\xe4գi]\xcf") :
  Wrong encoding '�'. Allowed values are i, l, d or a digit.
```

Maybe there are other ways to do it in R? Or perhaps I can embed code from another language in an R script? Thanks in advance!
2020/07/01
[ "https://Stackoverflow.com/questions/62676013", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12594996/" ]
Another way to go about this. Should be faster than any looping or udf solution.

```
from pyspark.sql import functions as F

def split(df,length,maxsplit):
    return df.withColumn('Names',F.split("Names","(?<=\\G{})".format('.'*length)))\
             .select(*((F.col("Names")[x]).alias("Col_"+str(x+1)) for x in range(0,maxsplit)))

split(df,3,2).show()
#+-----+-----+
#|Col_1|Col_2|
#+-----+-----+
#|  Rah|   ul|
#|  Rav|    i|
#|  Rag|   hu|
#|  Rom|   eo|
#+-----+-----+

split(df,2,3).show()
#+-----+-----+-----+
#|col_1|col_2|col_3|
#+-----+-----+-----+
#|   Ra|   hu|    l|
#|   Ra|   vi|     |
#|   Ra|   gh|    u|
#|   Ro|   me|    o|
#+-----+-----+-----+
```
Try this:

```
import pyspark.sql.functions as f

def split(data,length,maxSplit):
    start = 1
    for i in range(0,maxSplit):
        data = data.withColumn(f'col_{start}-{start+length-1}',f.substring('channel',start,length))
        start += length
    return data

df = split(data,3,2)
df.show()

+--------+----+-------+-------+
| channel|type|col_1-3|col_4-6|
+--------+----+-------+-------+
|     web|   0|    web|       |
|     web|   1|    web|       |
|     web|   2|    web|       |
| twitter|   0|    twi|    tte|
| twitter|   1|    twi|    tte|
|facebook|   0|    fac|    ebo|
|facebook|   1|    fac|    ebo|
|facebook|   2|    fac|    ebo|
+--------+----+-------+-------+
```
62,676,013
I wish I could parse torrent files automatically via R. I tried to use the [R-bencode](https://github.com/vspinu/R-bencode) package:

```
library('bencode')
test_torrent <- readLines('/home/user/Downloads/some_file.torrent', encoding = "UTF-8")
decoded_torrent <- bencode::bdecode(test_torrent)
```

but ran into the error:

```
Error in bencode::bdecode(test_torrent) : input string terminated unexpectedly
```

In addition, if I try to parse just part of this file, `bdecode('\xe7\xc9\xe0\b\xfbD-\xd8\xd6(\xe2\004>\x9c\xda\005Zar\x8c\xdfV\x88\022t\xe4գi]\xcf')`, I get

```
Error in bdecode("\xe7\xc9\xe0\b\xfbD-\xd8\xd6(\xe2\004>\x9c\xda\005Zar\x8c\xdfV\x88\022t\xe4գi]\xcf") :
  Wrong encoding '�'. Allowed values are i, l, d or a digit.
```

Maybe there are other ways to do it in R? Or perhaps I can embed code from another language in an R script? Thanks in advance!
2020/07/01
[ "https://Stackoverflow.com/questions/62676013", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12594996/" ]
Another way to go about this. Should be faster than any looping or udf solution.

```
from pyspark.sql import functions as F

def split(df,length,maxsplit):
    return df.withColumn('Names',F.split("Names","(?<=\\G{})".format('.'*length)))\
             .select(*((F.col("Names")[x]).alias("Col_"+str(x+1)) for x in range(0,maxsplit)))

split(df,3,2).show()
#+-----+-----+
#|Col_1|Col_2|
#+-----+-----+
#|  Rah|   ul|
#|  Rav|    i|
#|  Rag|   hu|
#|  Rom|   eo|
#+-----+-----+

split(df,2,3).show()
#+-----+-----+-----+
#|col_1|col_2|col_3|
#+-----+-----+-----+
#|   Ra|   hu|    l|
#|   Ra|   vi|     |
#|   Ra|   gh|    u|
#|   Ro|   me|    o|
#+-----+-----+-----+
```
Perhaps this is useful-

Load the test data
------------------

> Note: written in scala

```scala
val Length = 2
val Maxsplit = 3
val df = Seq("Rahul", "Ravi", "Raghu", "Romeo").toDF("Names")
df.show(false)
/**
  * +-----+
  * |Names|
  * +-----+
  * |Rahul|
  * |Ravi |
  * |Raghu|
  * |Romeo|
  * +-----+
  */
```

split the string col as per the length and offset
-------------------------------------------------

```scala
val schema = StructType(Range(1, Maxsplit + 1).map(f => StructField(s"Col_$f", StringType)))
val split = udf((str: String, length: Int, maxSplit: Int) => {
  val splits = str.toCharArray.grouped(length).map(_.mkString).toArray
  RowFactory.create(splits ++ Array.fill(maxSplit - splits.length)(null): _*)
}, schema)
val p = df
  .withColumn("x", split($"Names", lit(Length), lit(Maxsplit)))
  .selectExpr("x.*")
p.show(false)
p.printSchema()
/**
  * +-----+-----+-----+
  * |Col_1|Col_2|Col_3|
  * +-----+-----+-----+
  * |Ra   |hu   |l    |
  * |Ra   |vi   |null |
  * |Ra   |gh   |u    |
  * |Ro   |me   |o    |
  * +-----+-----+-----+
  *
  * root
  *  |-- Col_1: string (nullable = true)
  *  |-- Col_2: string (nullable = true)
  *  |-- Col_3: string (nullable = true)
  */
```

`Dataset[Row]` -> `Dataset[Array[String]]`
------------------------------------------

```scala
val x = df.map(r => {
  val splits = r.getString(0).toCharArray.grouped(Length).map(_.mkString).toArray
  splits ++ Array.fill(Maxsplit - splits.length)(null)
})
x.show(false)
x.printSchema()
/**
  * +-----------+
  * |value      |
  * +-----------+
  * |[Ra, hu, l]|
  * |[Ra, vi,]  |
  * |[Ra, gh, u]|
  * |[Ro, me, o]|
  * +-----------+
  *
  * root
  *  |-- value: array (nullable = true)
  *  |    |-- element: string (containsNull = true)
  */
```
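The torrent question above (qid 62,676,013) is about bencode decoding. As a language-neutral reference for what `bdecode` has to do, here is a minimal pure-Python bencode decoder sketch (my own illustration, not the R package's implementation — note that a .torrent file must be read as raw bytes, not as UTF-8 text lines, which is one reason `readLines(..., encoding = "UTF-8")` mangles the input in the question):

```python
def bdecode(data: bytes):
    """Decode one bencoded value from `data` (bytes), per the .torrent format."""
    def parse(i):
        c = data[i:i + 1]
        if c == b'i':                       # integer: i<digits>e
            end = data.index(b'e', i)
            return int(data[i + 1:end]), end + 1
        if c == b'l':                       # list: l<items>e
            i += 1
            items = []
            while data[i:i + 1] != b'e':
                item, i = parse(i)
                items.append(item)
            return items, i + 1
        if c == b'd':                       # dict: d<key><value>...e (keys are byte strings)
            i += 1
            d = {}
            while data[i:i + 1] != b'e':
                k, i = parse(i)
                v, i = parse(i)
                d[k] = v
            return d, i + 1
        if c.isdigit():                     # byte string: <length>:<bytes>
            colon = data.index(b':', i)
            n = int(data[i:colon])
            start = colon + 1
            return data[start:start + n], start + n
        raise ValueError(f"invalid bencode at offset {i}")

    value, end = parse(0)
    if end != len(data):
        raise ValueError("trailing data after bencoded value")
    return value

print(bdecode(b"d3:cow3:moo4:spam4:eggse"))
# {b'cow': b'moo', b'spam': b'eggs'}
```

In practice you would feed it `open(path, 'rb').read()`; the "Allowed values are i, l, d or a digit" error in the question corresponds exactly to the final `ValueError` branch here.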
62,676,053
I am developing a multi-user conference and it is almost done. Users register to the LiveSwitch gateway, create an MCU (Multipoint Control Unit) connection, and open or close it as needed. Several users can do this. Now, I want to make connecting and disconnecting to the LiveSwitch gateway a race between these users. The total number of users that can be registered is limited, so users who are trying to connect race like horses. I have tried this:

`var connection = channel.createMcuConnection(audioStream, videoStream); connection.open(); connection.close();`

I thought I could do it using these functions, but I don't know how. If someone has experience with this or has some examples, please let me know. Thank you.
2020/07/01
[ "https://Stackoverflow.com/questions/62676053", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13824353/" ]
It can be resolved by using a kind of signaling system together with the LiveSwitch MCU server. Basically, we can set the maximum number of clients, managed by a signaling server. Then, in the LiveSwitch SDK (client side), we can detect a client with low bandwidth and have that client unregister itself.
It's a kind of race, and I have found that it cannot be solved on the client side because I only get a promise from the process. The retrieval time is nondeterministic on the front end, so it should be handled on the server side.
62,676,055
```
<form id="pattern-form">
  <input type="search" name="search-input" pattern=".*[^2].*">
  <button formaction="lava.php">Submit</button>
</form>
```

The pattern is not working.

* `.*` = any character, any amount.
* `[^2]` = the digit 2 is not allowed.
* `.*` = any character, any amount.

What should happen is that you should be able to type any amount of any characters as long as the digit 2 is not present. But that's not happening. I can submit any amount of any characters, including the digit 2. I can type `"gfgdgdg2"` and it gets submitted. Why? Or I can type `"2gfgfg"` and it gets submitted. Or `"gdgdg 2 gdgdg"` and again it gets submitted. Why?

If my pattern is `[^2].*`, then if the first character is the digit 2, the form doesn't get submitted. But if you first type something and then end with `"2"`, it gets submitted, which is understandable; that's why I've used "any character, any amount" twice, at the front and back of the pattern. So why is it not working?
2020/07/01
[ "https://Stackoverflow.com/questions/62676055", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Try this regex:

```html
<form id="pattern-form">
  <input type="search" name="search-input" pattern="[\D013-9]*">
  <button formaction="lava.php">Submit</button>
</form>
```
`.*[^2].*` matches any string that contains at least one character that is not `2`. It will not match a string like `22222`, but it will match `12222`, because `1` is not `2`. Use `[^2]*` to match any character other than `2`, zero or more times:

```css
input:valid {
  color: #000000;
}

input:invalid {
  color: #FF0000;
}
```

```html
<form id="pattern-form">
  <input type="search" name="search-input" pattern="[^2]*">
  <button formaction="lava.php">Submit</button>
</form>
```

The HTML pattern always requires the entire string to match, so you do not need the `^` and `$` anchors, but you can still use them; `pattern="^[^2]*$"` will work, too.
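To see this anchoring behaviour concretely, here is a small Python `re` sketch (Python's `re.fullmatch` mirrors the implicit whole-value anchoring that the HTML `pattern` attribute applies; the helper name is my own illustration):

```python
import re

def html_pattern_accepts(pattern, value):
    # The HTML `pattern` attribute implicitly anchors the regex:
    # the WHOLE value must match. re.fullmatch mirrors that behaviour.
    return re.fullmatch(pattern, value) is not None

# .*[^2].* only demands ONE character that is not '2', so strings
# containing a 2 still pass as long as any other character exists:
print(html_pattern_accepts(r".*[^2].*", "gfgdgdg2"))  # True  -- submits
print(html_pattern_accepts(r".*[^2].*", "22222"))     # False -- only all-2 strings fail

# [^2]* rejects every string that contains a '2' anywhere:
print(html_pattern_accepts(r"[^2]*", "gfgdgdg2"))     # False
print(html_pattern_accepts(r"[^2]*", "gfgfg"))        # True
```

This is why `"gfgdgdg2"` was submitted in the question: any single non-`2` character satisfies `[^2]`, and the two `.*` pieces absorb everything else, including the `2`.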
62,676,056
How can I achieve this in react native?

[![enter image description here](https://i.stack.imgur.com/prOWT.jpg)](https://i.stack.imgur.com/prOWT.jpg)

So far I have this, and I want to implement the middle curve. I don't know whether to handle it with a transparent view or switch to SVG completely.

[![enter image description here](https://i.stack.imgur.com/SOrWr.jpg)](https://i.stack.imgur.com/SOrWr.jpg)

and this is the `tabBar` component:

```js
/* eslint-disable react/prop-types */
import React, { Component } from 'react'
import { TouchableOpacity, Text, StyleSheet, View } from 'react-native'
import { Colors } from 'App/Theme'

export default class TabBar extends Component {
  render() {
    let {
      renderIcon,
      getLabelText,
      activeTintColor,
      inactiveTintColor,
      onTabPress,
      onTabLongPress,
      getAccessibilityLabel,
      navigation,
      showLabel,
    } = this.props
    let { routes, index: activeRouteIndex } = navigation.state
    return (
      <View style={styles.tabBar}>
        {routes.map((route, routeIndex) => {
          let isRouteActive = routeIndex === activeRouteIndex
          let tintColor = isRouteActive ? activeTintColor : inactiveTintColor
          return (
            <TouchableOpacity
              key={routeIndex}
              style={styles.tab}
              onPress={() => {
                onTabPress({ route })
              }}
              onLongPress={() => {
                onTabLongPress({ route })
              }}
              accessibilityLabel={getAccessibilityLabel({ route })}
            >
              {renderIcon({ route, focused: isRouteActive, tintColor })}
              {showLabel ? <Text>{getLabelText({ route })}</Text> : null}
            </TouchableOpacity>
          )
        })}
      </View>
    )
  }
}

const styles = StyleSheet.create({
  tab: {
    alignItems: 'center',
    flex: 1,
    justifyContent: 'center',
  },
  tabBar: {
    alignSelf: 'center',
    backgroundColor: Colors.primary,
    borderRadius: 50,
    bottom: 10,
    elevation: 2,
    flexDirection: 'row',
    height: 65,
    position: 'absolute',
    width: '95%',
  },
  infinity: {
    width: 80,
    height: 100,
  },
  infinityBefore: {
    position: 'absolute',
    top: 0,
    left: 0,
    width: 0,
    height: 0,
    borderWidth: 20,
    borderColor: 'red',
    borderStyle: 'solid',
    borderTopLeftRadius: 50,
    borderTopRightRadius: 50,
    borderBottomRightRadius: 50,
    borderBottomLeftRadius: 0,
    transform: [{ rotate: '-135deg' }],
  },
  infinityAfter: {
    position: 'absolute',
    top: 0,
    right: 0,
    width: 0,
    height: 0,
    borderWidth: 20,
    borderColor: 'red',
    borderStyle: 'solid',
    borderTopLeftRadius: 50,
    borderTopRightRadius: 0,
    borderBottomRightRadius: 50,
    borderBottomLeftRadius: 50,
    transform: [{ rotate: '-135deg' }],
  },
})
```
2020/07/01
[ "https://Stackoverflow.com/questions/62676056", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11187413/" ]
here is a demo: <https://snack.expo.io/@nomi9995/cf371e>

you need to use [react-native-svg](https://www.npmjs.com/package/react-native-svg)

```
yarn add react-native-svg
```

```
import React, { Component } from "react";
import {
  Text,
  StyleSheet,
  View,
  Dimensions,
  TouchableHighlight,
} from "react-native";
import Svg, { Circle, Path } from "react-native-svg";

const tabs = [1, 2, 3, 4, 5];

export default class App extends Component {
  constructor(props) {
    super(props);
    this.state = {
      pathX: "357",
      pathY: "675",
      pathA: "689",
      pathB: "706",
    };
  }

  render() {
    return (
      <View style={[styles.container]}>
        <View style={[styles.content]}>
          <View style={styles.subContent}>
            {tabs.map((_tabs, i) => {
              return (
                <TouchableHighlight
                  key={i}
                  underlayColor={"transparent"}
                  onPress={() => console.log("onPress")}
                >
                  <View></View>
                </TouchableHighlight>
              );
            })}
          </View>
          <Svg
            version="1.1"
            id="bottom-bar"
            x="0px"
            y="0px"
            width="100%"
            height="100"
            viewBox="0 0 1092 260"
            space="preserve"
          >
            <Path
              fill={"#373A50"}
              stroke={"#373A50"}
              d={`M30,60h${this.state.pathX}.3c17.2,0,31,14.4,30,31.6c-0.2,2.7-0.3,5.5-0.3,8.2c0,71.2,58.1,129.6,129.4,130c72.1,0.3,130.6-58,130.6-130c0-2.7-0.1-5.4-0.2-8.1C${this.state.pathY}.7,74.5,${this.state.pathA}.5,60,${this.state.pathB}.7,60H1062c16.6,0,30,13.4,30,30v94c0,42-34,76-76,76H76c-42,0-76-34-76-76V90C0,73.4,13.4,60,30,60z`}
            />
            <Circle
              fill={"#7EE6D2"}
              stroke={"#7EE6D2"}
              cx="546"
              cy="100"
              r="100"
            />
          </Svg>
        </View>
      </View>
    );
  }
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    overflow: "hidden",
  },
  content: {
    flexDirection: "column",
    zIndex: 0,
    width: Dimensions.get("window").width - 30,
    marginBottom: "4%",
    left: "4%",
    right: "4%",
    position: "absolute",
    bottom: "1%",
  },
  subContent: {
    flexDirection: "row",
    marginLeft: 15,
    marginRight: 15,
    marginBottom: 10,
    zIndex: 1,
    position: "absolute",
    bottom: 5,
  }
});
```

**I hope this helps.**

[![enter image description here](https://i.stack.imgur.com/UVmkml.png)](https://i.stack.imgur.com/UVmkml.png)
It's not obvious that this can be done with only `<View/>` components. I would split the TabBar into a flex row container with three subviews, and create an SVG with the filled inverted radius to be used in the center subview. To render the SVG, use [react-native-svg](https://github.com/react-native-community/react-native-svg). See a rough layout below:

```
...
import { SvgXml } from 'react-native-svg';
import TabCenterSvg from ‘assets/my-svg.svg’

export default class TabBar extends Component {
  render() {
    return (
      <View style={styles.tabBar}>
        <View style={styles.leftContainer}>
          {/* Left Buttons */}
        </View>
        <View style={styles.centerContainer}>
          <View style={styles.centerInnerTopContainer}>
            {/* Add Button */}
          </View>
          <View style={styles.centerInnerBottomContainer}>
            <SvgXml xml={TabCenterSvg} />
          </View>
        </View>
        <View style={styles.rightContainer}>
          {/* Right Icons */}
        </View>
      </View>
    )
  }
}

const styles = StyleSheet.create({
  tabBar: {
    alignSelf: 'center',
    borderRadius: 50,
    bottom: 10,
    elevation: 2,
    flexDirection: 'row',
    height: 65,
    position: 'absolute',
    width: '95%',
  },
  leftContainer: {
    flex: 1,
    flexDirection: 'row',
    borderBottomLeftRadius: 50,
    borderTopLeftRadius: 50,
    borderTopRightRadius: 50,
    backgroundColor: Colors.primary,
  },
  centerContainer: {
    flex: 1,
    flexDirection: 'column',
  },
  centerInnerTopContainer: {
    flex: 1,
  },
  centerInnerBottomContainer: {
    flex: 1,
  },
  rightContainer: {
    flex: 1,
    flexDirection: 'row',
    borderTopLeftRadius: 50,
    borderTopRightRadius: 50,
    borderBottomRightRadius: 50,
    backgroundColor: Colors.primary,
  },
})
```
62,676,056
How can I achieve this in react native? [![enter image description here](https://i.stack.imgur.com/prOWT.jpg)](https://i.stack.imgur.com/prOWT.jpg) So far I have this and I want to implement the middle curve. I don't know whether to handle it with a transparent view or switch to SVG completely. [![enter image description here](https://i.stack.imgur.com/SOrWr.jpg)](https://i.stack.imgur.com/SOrWr.jpg) and this is the `tabBar` component ```js /* eslint-disable react/prop-types */ import React, { Component } from 'react' import { TouchableOpacity, Text, StyleSheet, View } from 'react-native' import { Colors } from 'App/Theme' export default class TabBar extends Component { render() { let { renderIcon, getLabelText, activeTintColor, inactiveTintColor, onTabPress, onTabLongPress, getAccessibilityLabel, navigation, showLabel, } = this.props let { routes, index: activeRouteIndex } = navigation.state return ( <View style={styles.tabBar}> {routes.map((route, routeIndex) => { let isRouteActive = routeIndex === activeRouteIndex let tintColor = isRouteActive ? activeTintColor : inactiveTintColor return ( <TouchableOpacity key={routeIndex} style={styles.tab} onPress={() => { onTabPress({ route }) }} onLongPress={() => { onTabLongPress({ route }) }} accessibilityLabel={getAccessibilityLabel({ route })} > {renderIcon({ route, focused: isRouteActive, tintColor })} {showLabel ? 
<Text>{getLabelText({ route })}</Text> : null} </TouchableOpacity> ) })} </View> ) } } const styles = StyleSheet.create({ tab: { alignItems: 'center', flex: 1, justifyContent: 'center', }, tabBar: { alignSelf: 'center', backgroundColor: Colors.primary, borderRadius: 50, bottom: 10, elevation: 2, flexDirection: 'row', height: 65, position: 'absolute', width: '95%', }, infinity: { width: 80, height: 100, }, infinityBefore: { position: 'absolute', top: 0, left: 0, width: 0, height: 0, borderWidth: 20, borderColor: 'red', borderStyle: 'solid', borderTopLeftRadius: 50, borderTopRightRadius: 50, borderBottomRightRadius: 50, borderBottomLeftRadius: 0, transform: [{ rotate: '-135deg' }], }, infinityAfter: { position: 'absolute', top: 0, right: 0, width: 0, height: 0, borderWidth: 20, borderColor: 'red', borderStyle: 'solid', borderTopLeftRadius: 50, borderTopRightRadius: 0, borderBottomRightRadius: 50, borderBottomLeftRadius: 50, transform: [{ rotate: '-135deg' }], }, }) ```
2020/07/01
[ "https://Stackoverflow.com/questions/62676056", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11187413/" ]
Here are 2 solutions according to your requirement. If you want this type of design without selection, then this code will help you: <https://github.com/alex-melnyk/clipped-tabbar> And if you need it on each tab selection, then here is another easy library for you: <https://github.com/Jm-Zion/rn-wave-bottom-bar>
It's not obvious that this can be done with only `<View/>` components. I would split the TabBar into a flex row container with three subviews, and create an SVG with the filled inverted radius to be used in the center subview. To render the SVG, use [react-native-svg](https://github.com/react-native-community/react-native-svg). See a rough layout below: ``` ... import { SvgXml } from 'react-native-svg'; import TabCenterSvg from 'assets/my-svg.svg' export default class TabBar extends Component { render() { return ( <View style={styles.tabBar}> <View style={styles.leftContainer}> {/* Left Buttons */} </View> <View style={styles.centerContainer}> <View style={styles.centerInnerTopContainer}> {/* Add Button */} </View> <View style={styles.centerInnerBottomContainer}> <SvgXml xml={TabCenterSvg} /> </View> </View> <View style={styles.rightContainer}> {/* Right Icons */} </View> </View> ) } } const styles = StyleSheet.create({ tabBar: { alignSelf: 'center', borderRadius: 50, bottom: 10, elevation: 2, flexDirection: 'row', height: 65, position: 'absolute', width: '95%', }, leftContainer: { flex: 1, flexDirection: 'row', borderBottomLeftRadius: 50, borderTopLeftRadius: 50, borderTopRightRadius: 50, backgroundColor: Colors.primary, }, centerContainer: { flex: 1, flexDirection: 'column', }, centerInnerTopContainer: { flex: 1, }, centerInnerBottomContainer: { flex: 1, }, rightContainer: { flex: 1, flexDirection: 'row', borderTopLeftRadius: 50, borderTopRightRadius: 50, borderBottomRightRadius: 50, backgroundColor: Colors.primary, }, }) ```
62,676,056
How can I achieve this in react native? [![enter image description here](https://i.stack.imgur.com/prOWT.jpg)](https://i.stack.imgur.com/prOWT.jpg) So far I have this and I want to implement the middle curve. I don't know whether to handle it with a transparent view or switch to SVG completely. [![enter image description here](https://i.stack.imgur.com/SOrWr.jpg)](https://i.stack.imgur.com/SOrWr.jpg) and this is the `tabBar` component ```js /* eslint-disable react/prop-types */ import React, { Component } from 'react' import { TouchableOpacity, Text, StyleSheet, View } from 'react-native' import { Colors } from 'App/Theme' export default class TabBar extends Component { render() { let { renderIcon, getLabelText, activeTintColor, inactiveTintColor, onTabPress, onTabLongPress, getAccessibilityLabel, navigation, showLabel, } = this.props let { routes, index: activeRouteIndex } = navigation.state return ( <View style={styles.tabBar}> {routes.map((route, routeIndex) => { let isRouteActive = routeIndex === activeRouteIndex let tintColor = isRouteActive ? activeTintColor : inactiveTintColor return ( <TouchableOpacity key={routeIndex} style={styles.tab} onPress={() => { onTabPress({ route }) }} onLongPress={() => { onTabLongPress({ route }) }} accessibilityLabel={getAccessibilityLabel({ route })} > {renderIcon({ route, focused: isRouteActive, tintColor })} {showLabel ? 
<Text>{getLabelText({ route })}</Text> : null} </TouchableOpacity> ) })} </View> ) } } const styles = StyleSheet.create({ tab: { alignItems: 'center', flex: 1, justifyContent: 'center', }, tabBar: { alignSelf: 'center', backgroundColor: Colors.primary, borderRadius: 50, bottom: 10, elevation: 2, flexDirection: 'row', height: 65, position: 'absolute', width: '95%', }, infinity: { width: 80, height: 100, }, infinityBefore: { position: 'absolute', top: 0, left: 0, width: 0, height: 0, borderWidth: 20, borderColor: 'red', borderStyle: 'solid', borderTopLeftRadius: 50, borderTopRightRadius: 50, borderBottomRightRadius: 50, borderBottomLeftRadius: 0, transform: [{ rotate: '-135deg' }], }, infinityAfter: { position: 'absolute', top: 0, right: 0, width: 0, height: 0, borderWidth: 20, borderColor: 'red', borderStyle: 'solid', borderTopLeftRadius: 50, borderTopRightRadius: 0, borderBottomRightRadius: 50, borderBottomLeftRadius: 50, transform: [{ rotate: '-135deg' }], }, }) ```
2020/07/01
[ "https://Stackoverflow.com/questions/62676056", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11187413/" ]
Here is a demo: <https://snack.expo.io/@nomi9995/cf371e> you need to use [react-native-svg](https://www.npmjs.com/package/react-native-svg) ``` yarn add react-native-svg ``` ``` import React, { Component } from "react"; import { Text, StyleSheet, View, Dimensions, TouchableHighlight, } from "react-native"; import Svg, { Circle, Path } from "react-native-svg"; const tabs = [1, 2, 3, 4, 5]; export default class App extends Component { constructor(props) { super(props); this.state = { pathX: "357", pathY: "675", pathA: "689", pathB: "706", }; } render() { return ( <View style={[styles.container]}> <View style={[styles.content]}> <View style={styles.subContent}> {tabs.map((_tabs, i) => { return ( <TouchableHighlight key={i} underlayColor={"transparent"} onPress={() => console.log("onPress")} > <View> </View> </TouchableHighlight> ); })} </View> <Svg version="1.1" id="bottom-bar" x="0px" y="0px" width="100%" height="100" viewBox="0 0 1092 260" space="preserve" > <Path fill={"#373A50"} stroke={"#373A50"} d={`M30,60h${this.state.pathX}.3c17.2,0,31,14.4,30,31.6c-0.2,2.7-0.3,5.5-0.3,8.2c0,71.2,58.1,129.6,129.4,130c72.1,0.3,130.6-58,130.6-130c0-2.7-0.1-5.4-0.2-8.1C${this.state.pathY}.7,74.5,${this.state.pathA}.5,60,${this.state.pathB}.7,60H1062c16.6,0,30,13.4,30,30v94c0,42-34,76-76,76H76c-42,0-76-34-76-76V90C0,73.4,13.4,60,30,60z`} /> <Circle fill={"#7EE6D2"} stroke={"#7EE6D2"} cx="546" cy="100" r="100" /> </Svg> </View> </View> ); } } const styles = StyleSheet.create({ container: { flex: 1, overflow: "hidden", }, content: { flexDirection: "column", zIndex: 0, width: Dimensions.get("window").width - 30, marginBottom: "4%", left: "4%", right: "4%", position: "absolute", bottom: "1%", }, subContent: { flexDirection: "row", marginLeft: 15, marginRight: 15, marginBottom: 10, zIndex: 1, position: "absolute", bottom: 5, } }); ``` **I hope this will help you.** [![enter image description here](https://i.stack.imgur.com/UVmkml.png)](https://i.stack.imgur.com/UVmkml.png)
Use this library's code and customize it according to your UI: <https://www.npmjs.com/package/curved-bottom-navigation-bar> Note: I wouldn't recommend this library as it has low weekly downloads. Rather than using the whole library, you can use its code.
62,676,056
How can I achieve this in react native? [![enter image description here](https://i.stack.imgur.com/prOWT.jpg)](https://i.stack.imgur.com/prOWT.jpg) So far I have this and I want to implement the middle curve. I don't know whether to handle it with a transparent view or switch to SVG completely. [![enter image description here](https://i.stack.imgur.com/SOrWr.jpg)](https://i.stack.imgur.com/SOrWr.jpg) and this is the `tabBar` component ```js /* eslint-disable react/prop-types */ import React, { Component } from 'react' import { TouchableOpacity, Text, StyleSheet, View } from 'react-native' import { Colors } from 'App/Theme' export default class TabBar extends Component { render() { let { renderIcon, getLabelText, activeTintColor, inactiveTintColor, onTabPress, onTabLongPress, getAccessibilityLabel, navigation, showLabel, } = this.props let { routes, index: activeRouteIndex } = navigation.state return ( <View style={styles.tabBar}> {routes.map((route, routeIndex) => { let isRouteActive = routeIndex === activeRouteIndex let tintColor = isRouteActive ? activeTintColor : inactiveTintColor return ( <TouchableOpacity key={routeIndex} style={styles.tab} onPress={() => { onTabPress({ route }) }} onLongPress={() => { onTabLongPress({ route }) }} accessibilityLabel={getAccessibilityLabel({ route })} > {renderIcon({ route, focused: isRouteActive, tintColor })} {showLabel ? 
<Text>{getLabelText({ route })}</Text> : null} </TouchableOpacity> ) })} </View> ) } } const styles = StyleSheet.create({ tab: { alignItems: 'center', flex: 1, justifyContent: 'center', }, tabBar: { alignSelf: 'center', backgroundColor: Colors.primary, borderRadius: 50, bottom: 10, elevation: 2, flexDirection: 'row', height: 65, position: 'absolute', width: '95%', }, infinity: { width: 80, height: 100, }, infinityBefore: { position: 'absolute', top: 0, left: 0, width: 0, height: 0, borderWidth: 20, borderColor: 'red', borderStyle: 'solid', borderTopLeftRadius: 50, borderTopRightRadius: 50, borderBottomRightRadius: 50, borderBottomLeftRadius: 0, transform: [{ rotate: '-135deg' }], }, infinityAfter: { position: 'absolute', top: 0, right: 0, width: 0, height: 0, borderWidth: 20, borderColor: 'red', borderStyle: 'solid', borderTopLeftRadius: 50, borderTopRightRadius: 0, borderBottomRightRadius: 50, borderBottomLeftRadius: 50, transform: [{ rotate: '-135deg' }], }, }) ```
2020/07/01
[ "https://Stackoverflow.com/questions/62676056", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11187413/" ]
Here are 2 solutions according to your requirement. If you want this type of design without selection, then this code will help you: <https://github.com/alex-melnyk/clipped-tabbar> And if you need it on each tab selection, then here is another easy library for you: <https://github.com/Jm-Zion/rn-wave-bottom-bar>
Use this library's code and customize it according to your UI: <https://www.npmjs.com/package/curved-bottom-navigation-bar> Note: I wouldn't recommend this library as it has low weekly downloads. Rather than using the whole library, you can use its code.
62,676,056
How can I achieve this in react native? [![enter image description here](https://i.stack.imgur.com/prOWT.jpg)](https://i.stack.imgur.com/prOWT.jpg) So far I have this and I want to implement the middle curve. I don't know whether to handle it with a transparent view or switch to SVG completely. [![enter image description here](https://i.stack.imgur.com/SOrWr.jpg)](https://i.stack.imgur.com/SOrWr.jpg) and this is the `tabBar` component ```js /* eslint-disable react/prop-types */ import React, { Component } from 'react' import { TouchableOpacity, Text, StyleSheet, View } from 'react-native' import { Colors } from 'App/Theme' export default class TabBar extends Component { render() { let { renderIcon, getLabelText, activeTintColor, inactiveTintColor, onTabPress, onTabLongPress, getAccessibilityLabel, navigation, showLabel, } = this.props let { routes, index: activeRouteIndex } = navigation.state return ( <View style={styles.tabBar}> {routes.map((route, routeIndex) => { let isRouteActive = routeIndex === activeRouteIndex let tintColor = isRouteActive ? activeTintColor : inactiveTintColor return ( <TouchableOpacity key={routeIndex} style={styles.tab} onPress={() => { onTabPress({ route }) }} onLongPress={() => { onTabLongPress({ route }) }} accessibilityLabel={getAccessibilityLabel({ route })} > {renderIcon({ route, focused: isRouteActive, tintColor })} {showLabel ? 
<Text>{getLabelText({ route })}</Text> : null} </TouchableOpacity> ) })} </View> ) } } const styles = StyleSheet.create({ tab: { alignItems: 'center', flex: 1, justifyContent: 'center', }, tabBar: { alignSelf: 'center', backgroundColor: Colors.primary, borderRadius: 50, bottom: 10, elevation: 2, flexDirection: 'row', height: 65, position: 'absolute', width: '95%', }, infinity: { width: 80, height: 100, }, infinityBefore: { position: 'absolute', top: 0, left: 0, width: 0, height: 0, borderWidth: 20, borderColor: 'red', borderStyle: 'solid', borderTopLeftRadius: 50, borderTopRightRadius: 50, borderBottomRightRadius: 50, borderBottomLeftRadius: 0, transform: [{ rotate: '-135deg' }], }, infinityAfter: { position: 'absolute', top: 0, right: 0, width: 0, height: 0, borderWidth: 20, borderColor: 'red', borderStyle: 'solid', borderTopLeftRadius: 50, borderTopRightRadius: 0, borderBottomRightRadius: 50, borderBottomLeftRadius: 50, transform: [{ rotate: '-135deg' }], }, }) ```
2020/07/01
[ "https://Stackoverflow.com/questions/62676056", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11187413/" ]
Here is a demo: <https://snack.expo.io/@nomi9995/cf371e> you need to use [react-native-svg](https://www.npmjs.com/package/react-native-svg) ``` yarn add react-native-svg ``` ``` import React, { Component } from "react"; import { Text, StyleSheet, View, Dimensions, TouchableHighlight, } from "react-native"; import Svg, { Circle, Path } from "react-native-svg"; const tabs = [1, 2, 3, 4, 5]; export default class App extends Component { constructor(props) { super(props); this.state = { pathX: "357", pathY: "675", pathA: "689", pathB: "706", }; } render() { return ( <View style={[styles.container]}> <View style={[styles.content]}> <View style={styles.subContent}> {tabs.map((_tabs, i) => { return ( <TouchableHighlight key={i} underlayColor={"transparent"} onPress={() => console.log("onPress")} > <View> </View> </TouchableHighlight> ); })} </View> <Svg version="1.1" id="bottom-bar" x="0px" y="0px" width="100%" height="100" viewBox="0 0 1092 260" space="preserve" > <Path fill={"#373A50"} stroke={"#373A50"} d={`M30,60h${this.state.pathX}.3c17.2,0,31,14.4,30,31.6c-0.2,2.7-0.3,5.5-0.3,8.2c0,71.2,58.1,129.6,129.4,130c72.1,0.3,130.6-58,130.6-130c0-2.7-0.1-5.4-0.2-8.1C${this.state.pathY}.7,74.5,${this.state.pathA}.5,60,${this.state.pathB}.7,60H1062c16.6,0,30,13.4,30,30v94c0,42-34,76-76,76H76c-42,0-76-34-76-76V90C0,73.4,13.4,60,30,60z`} /> <Circle fill={"#7EE6D2"} stroke={"#7EE6D2"} cx="546" cy="100" r="100" /> </Svg> </View> </View> ); } } const styles = StyleSheet.create({ container: { flex: 1, overflow: "hidden", }, content: { flexDirection: "column", zIndex: 0, width: Dimensions.get("window").width - 30, marginBottom: "4%", left: "4%", right: "4%", position: "absolute", bottom: "1%", }, subContent: { flexDirection: "row", marginLeft: 15, marginRight: 15, marginBottom: 10, zIndex: 1, position: "absolute", bottom: 5, } }); ``` **I hope this will help you.** [![enter image description here](https://i.stack.imgur.com/UVmkml.png)](https://i.stack.imgur.com/UVmkml.png)
Here are 2 solutions according to your requirement. If you want this type of design without selection, then this code will help you: <https://github.com/alex-melnyk/clipped-tabbar> And if you need it on each tab selection, then here is another easy library for you: <https://github.com/Jm-Zion/rn-wave-bottom-bar>
62,676,062
I have this in my `application_controller`: ``` class ApplicationController < ActionController::Base before_action :login_required, :only => 'users/login' protect_from_forgery with: :exception protected def login_required return true if User.find_by_id(session[:user_id]) access_denied return false end def access_denied flash[:error] = 'Oops. You need to login before you can view that page.' redirect_to users_login_path end end ``` I want to use **login\_required** for each controller method. Is there a better way instead of this? ``` class UsersController < ApplicationController before_action :set_user, :login_required, :only => 'users/login' #before_action only: [:show, :edit, :update, :destroy, :new] def index login_required @users = User.all end def new login_required @user = User.new end end ``` Is there a better way to include `login_required` for all controller methods, since `before_action` doesn't seem to work?
2020/07/01
[ "https://Stackoverflow.com/questions/62676062", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12265752/" ]
I don't know the motivation of your logic, so I'll just focus on how you can solve this particular problem. You can do something like this: In your application controller: ```rb class ApplicationController < ActionController::Base before_action :login_required private def login_required current_params = params["controller"] + "/" + params["action"] if current_params == "users/new" || current_params == "users/index" return true if User.find_by(id: session[:user_id]) access_denied return false end end def access_denied flash[:error] = 'Oops. You need to login before you can view that page.' redirect_to users_login_path end end ``` The `login_required` method will run only for the `users` controller's `index` and `new` actions; for the rest, it does nothing. Note that `User.find` raises `ActiveRecord::RecordNotFound` when no row matches, so `User.find_by(id: ...)` (which returns `nil` instead) is the safer lookup here. Now, in your `users_controller.rb`, you don't need to mention anything about `login_required`; everything already happens in `application_controller` before coming here. ```rb class UsersController < ApplicationController before_action :set_user, :only => 'users/login' #before_action only: [:show, :edit, :update, :destroy, :new] def index @users = User.all end def new @user = User.new end end ```
Firstly, I'm going to suggest that you use [devise](https://github.com/heartcombo/devise) for authentication, it's a lot more secure and should deal with this for you. As for your problem, you should be able to specify the before\_action like this: `before_action :set_user, :login_required, only: [:new]` Which you can put in your UserController. However if you want this globally, just put it in the ApplicationController, without the `only:` key.
62,676,062
I have this in my `application_controller`: ``` class ApplicationController < ActionController::Base before_action :login_required, :only => 'users/login' protect_from_forgery with: :exception protected def login_required return true if User.find_by_id(session[:user_id]) access_denied return false end def access_denied flash[:error] = 'Oops. You need to login before you can view that page.' redirect_to users_login_path end end ``` I want to use **login\_required** for each controller method. Is there a better way instead of this? ``` class UsersController < ApplicationController before_action :set_user, :login_required, :only => 'users/login' #before_action only: [:show, :edit, :update, :destroy, :new] def index login_required @users = User.all end def new login_required @user = User.new end end ``` Is there a better way to include `login_required` for all controller methods, since `before_action` doesn't seem to work?
2020/07/01
[ "https://Stackoverflow.com/questions/62676062", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12265752/" ]
I don't know the motivation of your logic, so I'll just focus on how you can solve this particular problem. You can do something like this: In your application controller: ```rb class ApplicationController < ActionController::Base before_action :login_required private def login_required current_params = params["controller"] + "/" + params["action"] if current_params == "users/new" || current_params == "users/index" return true if User.find_by(id: session[:user_id]) access_denied return false end end def access_denied flash[:error] = 'Oops. You need to login before you can view that page.' redirect_to users_login_path end end ``` The `login_required` method will run only for the `users` controller's `index` and `new` actions; for the rest, it does nothing. Note that `User.find` raises `ActiveRecord::RecordNotFound` when no row matches, so `User.find_by(id: ...)` (which returns `nil` instead) is the safer lookup here. Now, in your `users_controller.rb`, you don't need to mention anything about `login_required`; everything already happens in `application_controller` before coming here. ```rb class UsersController < ApplicationController before_action :set_user, :only => 'users/login' #before_action only: [:show, :edit, :update, :destroy, :new] def index @users = User.all end def new @user = User.new end end ```
If you want to require login for all pages except `/users/login`, then you almost have it right except you are specifying `only:` when you should be using `except:`: ``` class ApplicationController < ActionController::Base before_action :login_required, except: 'users/login' ... end ``` This configuration will be applied to all sub-classes of `ApplicationController` as well.
62,676,069
I have two databases - an old one and an update one. Both have the same structure, with a unique ID. If a record changes, there's a new record with the same ID and new data. So after `rbind(m1,m2)` I have duplicated records. I can't just remove duplicated IDs, since the data could be updated. There's no way to tell which record is newer, besides it being in the old file or the update file. How can I merge two tables and, if there's a row with a duplicated ID, keep the one from the newer file? I know I could add a column to both and just `ifelse()` this, but I'm looking for something more elegant, preferably a one-liner.
2020/07/01
[ "https://Stackoverflow.com/questions/62676069", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10908320/" ]
It's hard to give the correct answer without sample data, but here is an approach that you can adjust to your data. ``` #sample data library( data.table ) dt1 <- data.table( id = 2:3, value = c(2,4)) dt2 <- data.table( id = 1:2, value = c(2,6)) #dt1 # id value # 1: 2 2 # 2: 3 4 #dt2 # id value # 1: 1 2 # 2: 2 6 #rowbind... DT <- rbindlist( list(dt1,dt2), use.names = TRUE ) # id value # 1: 2 2 # 2: 3 4 # 3: 1 2 # 4: 2 6 #deselect duplicated id from the bottom up # assuming the last file in the list contains the updated values DT[ !duplicated(id, fromLast = TRUE), ] # id value # 1: 3 4 # 2: 1 2 # 3: 2 6 ```
You could use `dplyr`: ```r df_new %>% full_join(df_old, by="id") %>% transmute(id = id, value = coalesce(value.x, value.y)) ``` returns ```r id value 1 1 0.03432355 2 2 0.28396359 3 3 0.01121692 4 4 0.57214035 5 5 0.67337745 6 6 0.67637187 7 7 0.69178855 8 8 0.83953140 9 9 0.55350251 10 10 0.27050363 11 11 0.28181032 12 12 0.84292569 ``` given ```r df_new <- structure(list(id = 1:10, value = c(0.0343235526233912, 0.283963593421504, 0.011216921498999, 0.572140350239351, 0.673377452883869, 0.676371874753386, 0.691788548836485, 0.839531400706619, 0.553502510068938, 0.270503633422777 )), class = "data.frame", row.names = c(NA, -10L)) df_old <- structure(list(id = c(1, 4, 5, 3, 7, 9, 11, 12), value = c(0.111697669373825, 0.389851713553071, 0.252179590053856, 0.91874519130215, 0.504653975600377, 0.616259852424264, 0.281810319051147, 0.842925694771111)), class = "data.frame", row.names = c(NA, -8L)) ```
62,676,069
I have two databases - an old one and an update one. Both have the same structure, with a unique ID. If a record changes, there's a new record with the same ID and new data. So after `rbind(m1,m2)` I have duplicated records. I can't just remove duplicated IDs, since the data could be updated. There's no way to tell which record is newer, besides it being in the old file or the update file. How can I merge two tables and, if there's a row with a duplicated ID, keep the one from the newer file? I know I could add a column to both and just `ifelse()` this, but I'm looking for something more elegant, preferably a one-liner.
2020/07/01
[ "https://Stackoverflow.com/questions/62676069", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10908320/" ]
Say you have: ``` old <- data.frame(id = c(1,2,3,4,5), val = c(21,22,23,24,25)) new <- data.frame(id = c(1,4), val = c(21,27)) ``` so the record with id 4 has changed in the new dataset and 1 is a pure duplicate. You can use `dplyr::anti_join` to find old records not in the new dataset and then just use `rbind` to add the new ones on. ``` joined <- rbind(anti_join(old,new, by = "id"),new) ```
You could use `dplyr`: ```r df_new %>% full_join(df_old, by="id") %>% transmute(id = id, value = coalesce(value.x, value.y)) ``` returns ```r id value 1 1 0.03432355 2 2 0.28396359 3 3 0.01121692 4 4 0.57214035 5 5 0.67337745 6 6 0.67637187 7 7 0.69178855 8 8 0.83953140 9 9 0.55350251 10 10 0.27050363 11 11 0.28181032 12 12 0.84292569 ``` given ```r df_new <- structure(list(id = 1:10, value = c(0.0343235526233912, 0.283963593421504, 0.011216921498999, 0.572140350239351, 0.673377452883869, 0.676371874753386, 0.691788548836485, 0.839531400706619, 0.553502510068938, 0.270503633422777 )), class = "data.frame", row.names = c(NA, -10L)) df_old <- structure(list(id = c(1, 4, 5, 3, 7, 9, 11, 12), value = c(0.111697669373825, 0.389851713553071, 0.252179590053856, 0.91874519130215, 0.504653975600377, 0.616259852424264, 0.281810319051147, 0.842925694771111)), class = "data.frame", row.names = c(NA, -8L)) ```
62,676,085
I have this type of DF ``` DF ID V1 1 A 2 V 3 C 4 B 5 L 6 L ``` I would like to get ``` ID V1 V2 1 A AA 2 V AV 3 C AC 4 B BB 5 L BL 6 L BL ``` I would like to concatenate A, B in V1 with other characters in V1. I used something like this ``` DF%>% mutate(V2 = ifelse ((V1 == "A" ), paste ("A", ID), ifelse ((V1 == "B")), paste ("B",V1), "")%>% V2 = na_if (V2, ""))%>% fill (V2) ```
2020/07/01
[ "https://Stackoverflow.com/questions/62676085", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13647125/" ]
Here is a `dplyr` solution. ``` library(dplyr) DF %>% mutate(flag = cumsum(V1 %in% c("A", "B"))) %>% group_by(flag) %>% mutate(V2 = paste0(first(V1), V1)) %>% ungroup() %>% select(-flag) ## A tibble: 6 x 3 # ID V1 V2 # <int> <chr> <chr> #1 1 A AA #2 2 V AV #3 3 C AC #4 4 B BB #5 5 L BL #6 6 L BL ```
Here is a way using base R ``` df <- transform(df, V2 = ave(x = V1, cumsum(V1 %in% c("A", "B")), #grouping variable FUN = function(x) paste0(x[1], x))) ``` Gives ``` df # ID V1 V2 #1 1 A AA #2 2 V AV #3 3 C AC #4 4 B BB #5 5 L BL #6 6 L BL ```
62,676,085
I have this type of DF ``` DF ID V1 1 A 2 V 3 C 4 B 5 L 6 L ``` I would like to get ``` ID V1 V2 1 A AA 2 V AV 3 C AC 4 B BB 5 L BL 6 L BL ``` I would like to concatenate A, B in V1 with other characters in V1. I used something like this ``` DF%>% mutate(V2 = ifelse ((V1 == "A" ), paste ("A", ID), ifelse ((V1 == "B")), paste ("B",V1), "")%>% V2 = na_if (V2, ""))%>% fill (V2) ```
2020/07/01
[ "https://Stackoverflow.com/questions/62676085", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13647125/" ]
Here is a `dplyr` solution. ``` library(dplyr) DF %>% mutate(flag = cumsum(V1 %in% c("A", "B"))) %>% group_by(flag) %>% mutate(V2 = paste0(first(V1), V1)) %>% ungroup() %>% select(-flag) ## A tibble: 6 x 3 # ID V1 V2 # <int> <chr> <chr> #1 1 A AA #2 2 V AV #3 3 C AC #4 4 B BB #5 5 L BL #6 6 L BL ```
You can use `%in%` to find where *A* and *B* is. Use `unsplit` to replicate them and `paste0` to make the new string. ``` i <- DF$V1 %in% c("A", "B") DF$V2 <- paste0(unsplit(DF$V1[i], cumsum(i)), DF$V1) #DF$V2 <- paste0(rep(DF$V1[i], diff(c(which(i), length(i)))), DF$V1) #Alternative DF # ID V1 V2 #1 1 A AA #2 2 V AV #3 3 C AC #4 4 B BB #5 5 L BL #6 6 L BL ```
62,676,107
I am looking to create something like below using plotly; I have just started playing with the library. I am able to create figures using the code below; however, I cannot bring them under one figure as in the image. [![enter image description here](https://i.stack.imgur.com/EhzCY.png)](https://i.stack.imgur.com/EhzCY.png) ``` from sklearn.datasets import load_iris from sklearn import tree import pandas as pd import numpy as np import plotly.graph_objects as go from plotly.subplots import make_subplots import pdb #n_classes = 3 #plot_colors = "ryb" plot_step = 0.02 #pair = [0, 1] iris = load_iris() for index, pair in enumerate([[0, 1], [0, 2], [0, 3],[1, 2], [1, 3], [2, 3]]): fig = make_subplots(rows=2,cols = 3) i = (index//3)+1 #indexing for rows k =(index//2)+1 #indexing for cols #pdb.set_trace() X = iris.data[:, pair] y = iris.target clf = tree.DecisionTreeClassifier() clf = clf.fit(X, y) x_min, x_max = X[:, 0].min(), X[:, 0].max() y_min, y_max = X[:, 1].min(), X[:, 1].max() xx, yy = np.meshgrid(np.arange(x_min, x_max, plot_step), np.arange(y_min, y_max, plot_step)) Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]) Z = Z.reshape(xx.shape) #pdb.set_trace() fig.add_trace(go.Contour(z = Z, x = np.linspace(x_min,x_max,num = Z.shape[1]), y = np.linspace(y_min,y_max,num = Z.shape[0]) ),i,k) fig.update_layout( autosize=False, width=1000, height=800) for cl in np.unique(y): idx = np.where(y == cl) fig.add_trace(go.Scatter(x=X[idx, 0].ravel(), y=X[idx, 1].ravel(), mode = 'markers'),i,k) fig.show() ```
2020/07/01
[ "https://Stackoverflow.com/questions/62676107", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1885727/" ]
Here is a `dplyr` solution. ``` library(dplyr) DF %>% mutate(flag = cumsum(V1 %in% c("A", "B"))) %>% group_by(flag) %>% mutate(V2 = paste0(first(V1), V1)) %>% ungroup() %>% select(-flag) ## A tibble: 6 x 3 # ID V1 V2 # <int> <chr> <chr> #1 1 A AA #2 2 V AV #3 3 C AC #4 4 B BB #5 5 L BL #6 6 L BL ```
Here is a way using base R ``` df <- transform(df, V2 = ave(x = V1, cumsum(V1 %in% c("A", "B")), #grouping variable FUN = function(x) paste0(x[1], x))) ``` Gives ``` df # ID V1 V2 #1 1 A AA #2 2 V AV #3 3 C AC #4 4 B BB #5 5 L BL #6 6 L BL ```
62,676,107
I am looking to create something like below using plotly; I have just started playing with the library. I am able to create figures using the code below; however, I cannot bring them under one figure as in the image. [![enter image description here](https://i.stack.imgur.com/EhzCY.png)](https://i.stack.imgur.com/EhzCY.png) ``` from sklearn.datasets import load_iris from sklearn import tree import pandas as pd import numpy as np import plotly.graph_objects as go from plotly.subplots import make_subplots import pdb #n_classes = 3 #plot_colors = "ryb" plot_step = 0.02 #pair = [0, 1] iris = load_iris() for index, pair in enumerate([[0, 1], [0, 2], [0, 3],[1, 2], [1, 3], [2, 3]]): fig = make_subplots(rows=2,cols = 3) i = (index//3)+1 #indexing for rows k =(index//2)+1 #indexing for cols #pdb.set_trace() X = iris.data[:, pair] y = iris.target clf = tree.DecisionTreeClassifier() clf = clf.fit(X, y) x_min, x_max = X[:, 0].min(), X[:, 0].max() y_min, y_max = X[:, 1].min(), X[:, 1].max() xx, yy = np.meshgrid(np.arange(x_min, x_max, plot_step), np.arange(y_min, y_max, plot_step)) Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]) Z = Z.reshape(xx.shape) #pdb.set_trace() fig.add_trace(go.Contour(z = Z, x = np.linspace(x_min,x_max,num = Z.shape[1]), y = np.linspace(y_min,y_max,num = Z.shape[0]) ),i,k) fig.update_layout( autosize=False, width=1000, height=800) for cl in np.unique(y): idx = np.where(y == cl) fig.add_trace(go.Scatter(x=X[idx, 0].ravel(), y=X[idx, 1].ravel(), mode = 'markers'),i,k) fig.show() ```
2020/07/01
[ "https://Stackoverflow.com/questions/62676107", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1885727/" ]
Here is a `dplyr` solution. ``` library(dplyr) DF %>% mutate(flag = cumsum(V1 %in% c("A", "B"))) %>% group_by(flag) %>% mutate(V2 = paste0(first(V1), V1)) %>% ungroup() %>% select(-flag) ## A tibble: 6 x 3 # ID V1 V2 # <int> <chr> <chr> #1 1 A AA #2 2 V AV #3 3 C AC #4 4 B BB #5 5 L BL #6 6 L BL ```
You can use `%in%` to find where *A* and *B* is. Use `unsplit` to replicate them and `paste0` to make the new string. ``` i <- DF$V1 %in% c("A", "B") DF$V2 <- paste0(unsplit(DF$V1[i], cumsum(i)), DF$V1) #DF$V2 <- paste0(rep(DF$V1[i], diff(c(which(i), length(i)))), DF$V1) #Alternative DF # ID V1 V2 #1 1 A AA #2 2 V AV #3 3 C AC #4 4 B BB #5 5 L BL #6 6 L BL ```
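Note that both responses attached to this last record answer an R question; for the plotly question above, the core fix is to create the figure once with `make_subplots` *before* the loop and derive each subplot's 1-based `(row, col)` from the flat loop index — the question's column index `k = (index//2)+1` is likely the bug. A minimal sketch of the indexing (pure Python, no plotly dependency; the helper name `subplot_pos` is my own, not part of any API):

```python
def subplot_pos(index, ncols):
    """Map a flat subplot index to 1-based (row, col) for a grid with
    `ncols` columns, as expected by fig.add_trace(..., row=..., col=...)."""
    row, col = divmod(index, ncols)
    return row + 1, col + 1

# The figure itself must be created once, outside the loop, e.g.:
#   fig = make_subplots(rows=2, cols=3)
#   for index, pair in enumerate(pairs):
#       r, c = subplot_pos(index, 3)
#       fig.add_trace(go.Contour(...), row=r, col=c)
#   fig.show()  # called once, after the loop

if __name__ == "__main__":
    print([subplot_pos(i, 3) for i in range(6)])
```

With this mapping, indices 0–5 fill row 1, columns 1–3 and then row 2, columns 1–3, matching the 2×3 grid the question is trying to build.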