id | prompt | response_j
---|---|---
242 | **Goal**: I aim to use t-SNE (t-distributed Stochastic Neighbor Embedding) in R for dimensionality reduction of my training data (with *N* observations and *K* variables, where *K>>N*) and subsequently aim to come up with the t-SNE representation for my test data.
**Example**: Suppose I aim to reduce the K variables to *D=2* dimensions (often, *D=2* or *D=3* for t-SNE). There are two R packages for this, `Rtsne` and `tsne`; I use the former here.
```
# load packages
library(Rtsne)
# Generate Training Data: random standard normal matrix with K=400 variables and N=100 observations
x.train <- matrix(rnorm(n=40000, mean=0, sd=1), nrow=100, ncol=400)
# Generate Test Data: random standard normal vector with N=1 observation of the K=400 variables
x.test <- rnorm(n=400, mean=0, sd=1)
# perform t-SNE
set.seed(1)
fit.tsne <- Rtsne(X=x.train, dims=2)
```
where `fit.tsne$Y` returns the (100x2)-dimensional object containing the t-SNE representation of the training data; it can also be plotted via `plot(fit.tsne$Y)`.
**Problem**: Now, what I am looking for is a function that returns a prediction `pred` of dimension (1x2) for my test data based on the trained t-SNE model. Something like,
```
# The function I am looking for (but doesn't exist yet):
pred <- predict(object=fit.tsne, newdata=x.test)
```
(How) Is this possible? Can you help me out with this? | From the author himself (<https://lvdmaaten.github.io/tsne/>):
>
> Once I have a t-SNE map, how can I embed incoming test points in that
> map?
>
>
> t-SNE learns a non-parametric mapping, which means that it does not
> learn an explicit function that maps data from the input space to the
> map. Therefore, it is not possible to embed test points in an existing
> map (although you could re-run t-SNE on the full dataset). A potential
> approach to deal with this would be to train a multivariate regressor
> to predict the map location from the input data. Alternatively, you
> could also make such a regressor minimize the t-SNE loss directly,
> which is what I did in this paper (<https://lvdmaaten.github.io/publications/papers/AISTATS_2009.pdf>).
>
>
>
So you can't directly embed new data points in an existing map. However, you can fit a multivariate regression model between your data and the embedded dimensions. The author acknowledges that this is a limitation of the method and suggests this way to get around it. |
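A minimal Python/scikit-learn sketch of that regressor workaround (the thread itself uses R; the Python translation, the k-NN regressor choice, and all variable names here are illustrative assumptions, not part of the original answer):

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(1)
x_train = rng.normal(size=(100, 400))   # N=100 observations, K=400 variables
x_test = rng.normal(size=(1, 400))      # one new observation

# Fit t-SNE on the training data only.
embedding = TSNE(n_components=2, random_state=1).fit_transform(x_train)

# Multivariate regressor from the input space to the 2-D map.
regressor = KNeighborsRegressor(n_neighbors=5).fit(x_train, embedding)
pred = regressor.predict(x_test)        # (1x2) approximate map location for the test point
print(pred)
```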
754 | Is $S$ linearly dependent in $\textsf V = \mathcal{F}(\Bbb R,\Bbb R)$, where $S = \{t, e^t ,\sin(t)\}$?
How to prove that a set of functions are linearly dependent? | Suppose we have some scalars $a\_0,a\_1,a\_2$ in $\Bbb R$ such that
$$a\_0t+a\_1e^t+a\_2\sin t =0 \tag{1}$$
for all real number $t$. Making $t=0$ this gives us $a\_1=0$. Returning to $(1)$, we have
$$a\_0t+a\_2\sin t =0 \tag{2}$$
Now, make $t=\pi$ and then $a\_0\pi=0$, which means $a\_0=0$. Finally, take $t=\pi/2$ in $(2)$ to get $a\_2=0$, so all the scalars must be zero and the set $S$ is linearly independent. |
761 | I'm trying to implement simple ScopedExit class. Here's the code:
```
#include <iostream>
#include <functional>
template<class R, class... Args>
class ScopedExit
{
public:
ScopedExit(std::function<R(Args...)> exitFunction)
{
exitFunc_ = exitFunction;
}
~ScopedExit()
{
exitFunc_();
}
private:
std::function<R(Args...)> exitFunc_;
};
template<>
class ScopedExit<void>
{
public:
ScopedExit(std::function<void ()> exitFunction)
{
exitFunc_ = exitFunction;
}
~ScopedExit()
{
exitFunc_();
}
private:
std::function<void ()> exitFunc_;
};
void foo()
{
std::cout << "foo() called\n";
}
class Bar
{
public:
void BarExitFunc(int x, int y)
{
std::cout << "BarExitFunc called with x =" << x << "y = " << y << "\n";
}
};
int main()
{
Bar b;
std::cout << "Register scoped exit func\n";
{
ScopedExit<void, int, int> exitGuardInner(std::bind(&Bar::BarExitFunc, &b, 18, 11));
}
ScopedExit exitGuardOutter(foo);
std::cout << "About to exit from the scope\n";
return 0;
}
```
So, there are a couple of questions:
1. How to pass exit's function arguments to it? For example, I bind BarExitFunc with two integer arguments: 18 and 11. So how can I pass it to the exitFunc\_ in the destructor? I think I need something like invoke function with std::forward<>.
2. gcc 4.7.2 (from ideone.com) complains about exitGuardOutter. It says:
>
> prog.cpp:60:16: error: missing template arguments before ‘exitGuardOutter’
>
>
> prog.cpp:60:16: error: expected ‘;’ before ‘exitGuardOutter’
>
>
>
Thanks in advance. | >
> How to pass exit's function arguments to it? For example, I bind BarExitFunc with two integer arguments: 18 and 11. So how can I pass it to the exitFunc\_ in the destructor?
>
>
>
I can see no reason whatsoever to pass arguments to `exitFunc_` at call time in the destructor. Whatever you do, you'll have to provide those arguments upfront in the `ScopedExit` constructor anyway.
The most straightforward way is simply to use a `function<R()>` and `bind` any required arguments at the definition site like you're already doing:
```
ScopedExit<R> guard(std::bind(someFunction, someArg, otherArg));
```
This allows you to get rid of the variadic template arguments altogether and simplifies your template *a lot*.
---
Now, if what is bothering you is that you have to type `std::bind` and you would rather use such a syntax:
```
ScopedExit<R> guard(someFunction, someArg, otherArg);
```
Really, I don't see the point since it makes the template more complicated, but why not... Just bind/forward the arguments in the constructor itself and still store a `function<R()>`:
```
template<typename... Args>
ScopedExit(std::function<R(Args...)> exitFunction, Args&&... args)
{
exitFunc_ = std::bind(exitFunction, std::forward<Args>(args)...);
}
```
Now you systematically `bind` the function even if there are no arguments to bind, so you may want to specialize your class to avoid this useless `bind` when there are no arguments. This is left as an exercise.
---
>
> gcc 4.7.2 (from ideone.com) complains about exitGuardOutter
>
>
>
This is because `foo` isn't a `std::function` and the compiler can't deduce the correct template arguments. As already mentioned by @ForEveR you could just define your guard variable as `ScopedExit<void> guard(foo);`.
Or, **wrapping it all up** and keeping in mind what I first said (`bind` is best left out of your template and used at the definition site of your guard) you could just get rid of `std::function` in the constructor and generalize for any functor (which, BTW, is how the Standard library does whenever it needs a functor/callback). For storage you can just use `std::function<void()>` since it accepts non-void return types too:
```
// includes added so this final example is self-contained
#include <iostream>
#include <functional>
#include <string>

class ScopedExit
{
public:
template<typename Functor>
ScopedExit(Functor exitFunction)
{
exitFunc_ = exitFunction;
}
~ScopedExit()
{
exitFunc_();
}
private:
std::function<void()> exitFunc_;
};
int foo() { return 0; }
struct Bar {
void bye(int, int) {}
};
struct Baz {
void operator ()() {}
};
int main() {
const std::string what = "lambda!";
ScopedExit guard1([&]() { std::cout << "yay a " << what << std::endl; });
ScopedExit guard2(foo); // note how std::function<void()> accepts non-void return types
Bar b;
ScopedExit guard3(std::bind(&Bar::bye, &b, 1, 2));
ScopedExit guard4(Baz());
}
```
Note how your original variadic template class has now become a flexible non-template class with just a templated constructor whose template argument is deduced automatically, and which accepts almost[see note below] any kind of functor you can think about.
---
**Note:** I said *almost* any functor because this doesn't work with default arguments:
```
void foobar(int = 0) {}
ScopedExit guard5(foobar); // error: too few arguments to function
```
Even if you stored a `Functor` directly instead of a `std::function<void()>` you wouldn't be able to make use of the default arguments anyway (the signature of `foobar` is still `void(int)` even with a default argument) so one always has to handle this corner-case at the definition site with something like:
```
void foobar(int = 0) {}
ScopedExit guard5([]() { foobar(); });
``` |
1282 | Below is a PHP script.
I tried to implement the Observer pattern (without MVC structure)... only basic.
The error which is encountered has been specified in a comment.
First I tried to add User objects to the UsersLibrary repository. There was an error such as "User::update() does not exist" or something similar.
Why is that error encountered? What fix should be applied and how?
```
interface IObserver {
public function update(IObservable $sender);
}
interface IObservable {
public function addObserver(IObserver $obj);
public function notify();
}
class UsersLibrary implements IObservable {
private $container;
private $contor;
//private $z;
public function __construct() {//IObserver $a) {
$this->container = array();
$this->contor = 0;
echo "<div>[constructing UsersLibrary...]</div>";
$this->addObserver(new Logger());
//$this->z = $a;
}
public function add($obj) {
echo "<div>[adding a new user...]</div>";
$this->container[$this->contor] = $obj;
$this->contor++;
$this->notify();
}
public function get($index) {
return $this->container[$index];
}
public function addObserver(IObserver $obj) {
$this->container[] = $obj;
}
public function notify() {
echo "<div>[notification in progress...]</div>";
foreach($this->container as $temp) {
//echo $temp;
#################################################################
$temp->update(); //--------ERROR
//Fatal Error: Call to a member function update() on a non-object.
#################################################################
}
//$this->container[0]->update();
//$this->z->update($this);
}
}
class User {
private $id;
private $name;
public function __construct($id, $name) {
$this->id = $id;
$this->name = $name;
}
public function getId() {
return $this->id;
}
public function getName() {
return $this->name;
}
}
class Logger implements IObserver {
public function __construct() {
echo "<div>[constructing Logger...]</div>";
}
public function update(IObservable $sender) {
echo "<div>A new user has been added.</div>";
}
}
$a = new UsersLibrary(); //new Logger());
//$a->add(new User(1, "DemoUser1"));
//$a->add(new User(2, "DemoUser2"));
$a->add("Demo");
echo $a->get(0);
//echo $a->get(0)->getName();
``` | Your `User` class is not implementing `interface IObserver` and therefore is not forced to have the method `update()`.
You have to instantiate a `new User()` in order to add it to the `UsersLibrary`:
```
$library = new UsersLibrary();
$user = new User(1, "Demo");
$library->add($user);
```
Also, you are mixing **Users** and **Loggers** into your UsersLibrary container. Maybe think about separating the containers for them? |
1347 | Brand new to Python and could use some help importing multiple Excel files to separate Pandas dataframes. I have successfully implemented the following code, but of course it imports everything into one frame. I would like to import them into df1, df2, df3, df4, df5, etc.
Anything helps, thank you!
```
import pandas as pd
import glob
def get_files():
directory_path = input('Enter directory path: ')
filenames = glob.glob(directory_path + '/*.xlsx')
number_of_files = len(filenames)
df = pd.DataFrame()
for f in filenames:
data = pd.read_excel(f, 'Sheet1')
df = df.append(data)
print(df)
print(number_of_files)
get_files()
``` | The easiest way to do that is to use a list; each element of the list is a dataframe:
```
def get_files():
directory_path = input('Enter directory path: ')
filenames = glob.glob(directory_path + '/*.xlsx')
number_of_files = len(filenames)
df_list = []
for f in filenames:
data = pd.read_excel(f, 'Sheet1')
df_list.append(data)
print(df_list)
print(number_of_files)
return df_list
get_files()
```
You can then access your dataframes with `df_list[0]`, `df_list[1]`... |
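If you would rather look the dataframes up by file name than by position, a dictionary works the same way. A small sketch assuming the same `Sheet1` layout as the question (the directory name is made up):

```python
import glob
import pandas as pd

def get_files_as_dict(directory_path):
    # One dataframe per file, keyed by the file's path.
    df_dict = {}
    for f in glob.glob(directory_path + '/*.xlsx'):
        df_dict[f] = pd.read_excel(f, 'Sheet1')
    return df_dict

dfs = get_files_as_dict('some_directory')      # hypothetical directory name
# dfs['some_directory/report1.xlsx'] would then be that file's dataframe
```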
1668 | I want to find (or make) a python script that reads a different python script line by line and prints the commands executed and the output right thereafter.
Suppose you have a python script, `testfile.py` as such:
```
print("Hello world")
for i in range(3):
print(f"i is: {i}")
```
Now, I want a different python script that parses the `testfile.py` and outputs the following:
```
print("Hello world")
## Hello world
for i in range(3):
print(f"i is: {i}")
## i is: 0
## i is: 1
## i is: 2
```
Any suggestions on existing software or new code on how to achieve this is greatly appreciated!
---
Attempts / concept code:
========================
### Running `ipython` from python:
One of the first thoughts were to run ipython from python using `subprocess`:
```
import subprocess
import re
try:
proc = subprocess.Popen(args=["ipython", "-i"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, universal_newlines=True)
# Delimiter to know when to stop reading
OUTPUT_DELIMITER = ":::EOL:::"
# Variable to contain the entire interaction:
output_string = ""
# Open testfile.py
with open("testfile.py") as file_:
for line in file_:
# Read command
cmd = line.rstrip()
# Add the command to the output string
output_string += cmd + "\n"
proc.stdin.write(f"{cmd}\n")
# Print the delimiter so we know when to end:
proc.stdin.write('print("{}")\n'.format(OUTPUT_DELIMITER))
proc.stdin.flush()
# Start reading output from ipython
while True:
thisoutput = proc.stdout.readline()
thisoutput = thisoutput.rstrip()
# Now check if it's the delimiter
if thisoutput.find(OUTPUT_DELIMITER) >= 0:
break
output_string += thisoutput + "\n"
except Exception as e:
proc.stdout.close()
proc.stdin.close()
raise
proc.stdout.close()
proc.stdin.close()
print("-" * 4 + "START OUTPUT" + "-" * 4)
print(output_string)
print("-" * 4 + "END OUTPUT" + "-" * 4)
```
In this approach, the problem becomes indented blocks, like the `for` loop.
Ideally something like this would work using just plain `python` (and not `ipython`). | [`code.InteractiveConsole.interact`](https://docs.python.org/3/library/code.html#code.InteractiveConsole.interact) does exactly what is asked. |
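A minimal sketch of one way to drive `code.InteractiveConsole` line by line and echo output with a `##` prefix; it uses `InteractiveConsole.push` rather than `interact` so output can be captured per line, and the helper name and the blank-line flush at the end are my own assumptions:

```python
import code
import io
from contextlib import redirect_stdout

def run_and_echo(path):
    console = code.InteractiveConsole()

    def feed(source_line):
        # Echo the source line, run it, then print any captured stdout with a "##" prefix.
        print(source_line)
        buf = io.StringIO()
        with redirect_stdout(buf):
            console.push(source_line)
        for out in buf.getvalue().splitlines():
            print("##", out)

    with open(path) as fh:
        for line in fh:
            feed(line.rstrip("\n"))
    feed("")  # a blank line flushes a pending compound statement such as the for loop

run_and_echo("testfile.py")
```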
1743 | We received a note from the security review team highlighting a CRUD/FLS vulnerability in our package and in the note it says there is a "Instances of SELECT vulnerability found across the application".
An example provided is shown below in a "with sharing" class:
>
> myAttachments = [SELECT name, id, parentid, CreatedDate,CreatedById FROM Attachment WHERE parentid=:myAccount.id];
>
>
>
If the user does not have access to the object or the field, the result will be null. So the FLS must be enforced.
The documentation [here](https://developer.salesforce.com/page/Enforcing_CRUD_and_FLS) does not specify the issue. How can we resolve this issue? | I contacted Salesforce's Support team and they provided [this article](https://support.microsoft.com/en-us/kb/2994633) as a temporary fix.
They also mentioned:
>
> "Our R&D team has investigated this matter and logged a New Issue for it to be repaired. Unfortunately, I cannot provide a timeline as to when this repair will be implemented due to Safe Harbor constraints."
>
>
>
[Salesforce Known Issue](https://success.salesforce.com/issues_view?id=a1p3A000000IZSEQA4) |
1859 | Haskell does not have loops like many other languages. I understand the reasoning behind it and some of the different approaches used to solve problems without them. However, when a loop structure is necessary, I am not sure if the way I'm creating the loop is correct/good.
For example (trivial function):
```
dumdum = do
putStrLn "Enter something"
num <- getLine
putStrLn $ "You entered: " ++ num
dumdum
```
This works fine, but is there a potential problem in the code?
A different example:
```
a = do
putStrLn "1"
putStrLn "2"
a
```
If implemented in an imperative language like Python, it would look like:
```python
def a():
print ("1")
print ("2")
a()
```
This eventually causes a maximum recursion depth error. This does not seem to be the case in Haskell, but I'm not sure if it might cause potential problems.
I know there are other options for creating loops such as `Control.Monad.LoopWhile` and `Control.Monad.forever` -- should I be using those instead? (I am still very new to Haskell and do not understand monads yet.) | For general iteration, having a recursive function call itself is definitely the way to go. If your calls are in [tail position](https://en.wikipedia.org/wiki/Tail_call), they don't use any extra stack space and behave more like `goto`1. For example, here is a function to sum the first n integers using constant stack space2:
```
{-# LANGUAGE BangPatterns #-}   -- enables the strictness annotations (!) used below
import Prelude hiding (sum)     -- avoid clashing with Prelude.sum

sum :: Int -> Int
sum n = sum' 0 n
sum' !s 0 = s
sum' !s n = sum' (s+n) (n-1)
```
It is roughly equivalent to the following pseudocode:
```
function sum(N)
var s, n = 0, N
loop:
if n == 0 then
return s
else
s,n = (s+n, n-1)
goto loop
```
Notice how in the Haskell version we used function parameters for the sum accumulator instead of a mutable variable. This is a very common pattern for tail-recursive code.
So far, general recursion with tail-call-optimization should give you all the looping power of gotos. The only problem is that manual recursion (kind of like gotos, but a little better) is relatively unstructured and we often need to carefully read code that uses it to see what is going on. Just like how imperative languages have looping mechanisms (for, while, etc.) to describe most common iteration patterns, in Haskell we can use higher-order functions to do a similar job. For example, many of the list processing functions like `map` or `foldl'`3 are analogous to straightforward for-loops in pure code, and when dealing with monadic code there are functions in Control.Monad or in the [monad-loops](http://hackage.haskell.org/package/monad-loops) package that you can use. In the end, it's a matter of style but I would err towards using the higher-order looping functions.
---
1 You might want to check out ["Lambda the ultimate GOTO"](http://library.readscheme.org/page1.html), a classical article about how tail recursion can be as efficient as traditional iteration. Additionally, since Haskell is a lazy language, there are also some situations where recursion at non-tail positions can still run in O(1) space (search for "Tail recursion modulo cons").
2 Those exclamation marks are there to make the accumulator parameter be eagerly evaluated, so the addition happens at the same time as the recursive call (Haskell is lazy by default). You can omit the "!"s if you want but then you run the risk of running into a [space leak](http://www.haskell.org/haskellwiki/Foldr_Foldl_Foldl%27).
3 Always use `foldl'` instead of `foldl`, due to the previously mentioned space leak issue. |
1893 | I would like to have the SQL Server PowerShell extensions available to me whenever I start PowerShell by loading the snap-ins in my profile.ps1 script. I found an article [here](http://blogs.msdn.com/b/mwories/archive/2008/06/14/sql2008_5f00_powershell.aspx) with a script example that shows how to do this, and this works fine on my 32-bit Windows XP box.
Unfortunately, on my 64-bit Windows 7 machine, this blows up. If I try to launch this script with the 64-bit PowerShell, I get:
```
Add-PSSnapin : No snap-ins have been registered for Windows PowerShell version 2.
At C:\Users\xxxx\Documents\WindowsPowerShell\profile.ps1:84 char:13
+ Add-PSSnapin <<<< SqlServerCmdletSnapin100
+ CategoryInfo : InvalidArgument: (SqlServerCmdletSnapin100:String
[Add-PSSnapin], PSArgumentException
+ FullyQualifiedErrorId : AddPSSnapInRead,Microsoft.PowerShell.Commands.AddPSSnapinCommand
```
If I run this instead in a 32-bit PowerShell, I get:
```
Get-ItemProperty : Cannot find path 'HKLM:\SOFTWARE\Microsoft\PowerShell\1\ShellIds \Microsoft.SqlServer.Management.PowerShell.sqlps' because it does not exist.
At C:\Users\xxxx\Documents\WindowsPowerShell\profile.ps1:39 char:29
+ $item = Get-ItemProperty <<<< $sqlpsreg
+ CategoryInfo : ObjectNotFound: (HKLM:\SOFTWARE\...owerShell.sqlps:String) [Get-ItemProperty], ItemNotFoundException
+ FullyQualifiedErrorId : PathNotFound,Microsoft.PowerShell.Commands.GetItemPropertyCommand
```
I'd like to be able to run this in a 64-bit PowerShell if possible. To this end, I tracked down what I thought was the Powershell extension dlls and in a 64-bit Administrator elevated PowerShell I ran:
```
cd "C:\Program Files (x86)\Microsoft SQL Server\100\Tools\Binn"
installutil Microsoft.SqlServer.Management.PSProvider.dll
installutil Microsoft.SqlServer.Management.PSSnapins.dll
```
No dice. Although installutil seemed to indicate success, I still get the "No snap-ins have been registered for Windows PowerShell version 2" error message when I run the script.
Anyone have any suggestions as to where I go from here? | I've used this script without issue on x64 machines. The problem with the x86 invocation is that the script looks for registry keys which on an x64 instance are only accessible from x64 PowerShell. For the x64 invocation you could try registering the snapins since that is the error message you're receiving. Run as administrator...
Change this:
```
cd $sqlpsPath
Add-PSSnapin SqlServerCmdletSnapin100
Add-PSSnapin SqlServerProviderSnapin100
```
to this:
```
cd $sqlpsPath
$framework=$([System.Runtime.InteropServices.RuntimeEnvironment]::GetRuntimeDirectory())
Set-Alias installutil "$($framework)installutil.exe"
installutil Microsoft.SqlServer.Management.PSSnapins.dll
installutil Microsoft.SqlServer.Management.PSProvider.dll
Add-PSSnapin SqlServerCmdletSnapin100
Add-PSSnapin SqlServerProviderSnapin100
```
An even better solution is to not use Add-PSSnapin and instead turn sqlps into a module. I have a blog post on that here:
<http://sev17.com/2010/07/10/making-a-sqlps-module>
Update for SQL Server 2012 - now ships a sqlps module you can install instead of the above blog: <http://www.microsoft.com/en-us/download/details.aspx?id=35580> |
2237 | I have JSON data which I'm reading like this:
```
fun getFact(context: Context) = viewModelScope.launch{
try {
val format = Json {
ignoreUnknownKeys = true
prettyPrint = true
isLenient = true
}
val factJson = context.assets.open("Facts.json").bufferedReader().use {
it.readText()
}
val factList = format.decodeFromString<List<FootballFact>>(factJson)
_uiState.value = ViewState.Success(factList)
} catch (e: Exception) {
_uiState.value = ViewState.Error(exception = e)
}
}
```
This is the way I'm getting my data from the viewModel in the UI screen:
```
viewModel.getFact(context)
when (val result =
viewModel.uiState.collectAsState().value) {
is ViewState.Error -> {
Toast.makeText(
context,
"Error ${result.exception}",
Toast.LENGTH_SHORT
).show()
}
is ViewState.Success -> {
val factsLists = mutableStateOf(result.fact)
val randomFact = factsLists.value[0]
FactCard(quote = randomFact.toString()) {
factsLists.value.shuffled()
}
}
}
```
I have a FactCard where I want to show that fact. It also has a lambda for clicks, where I want my fact list to refresh every time the card is clicked.
```
@Composable
fun FactCard(quote: String , onClick : ()-> Unit) {
val fact = remember { mutableStateOf(quote)}
Box(
contentAlignment = Alignment.Center,
modifier = Modifier
    .clickable { onClick() }
) {
Text(.. )
}
}
```
I don't know how to approach this; I think there is something silly I'm doing. | Composables can only recompose when you update state data. You aren't doing that. Your click event should return the new quote that you want to display. You then set `fact.value` to the new quote. Setting `fact.value` to a new value is what triggers a recompose:
```
when (val result = viewModel.uiState.collectAsState().value) {
is ViewState.Error -> {
Toast.makeText(
context,
"Error ${result.exception}",
Toast.LENGTH_SHORT
).show()
}
is ViewState.Success -> {
val factsLists = mutableStateOf(result.fact)
val randomFact = factsLists.value[0]
FactCard(quote = randomFact.toString()) {
// the last expression is the lambda's return value (a bare `return` is not allowed inside a lambda)
factsLists.value.shuffled()[0].toString()
}
}
}
@Composable
fun FactCard(quote: String , onClick : ()-> String) {
var fact = remember { mutableStateOf(quote)}
Box(
contentAlignment = Alignment.Center,
modifier = Modifier
    .clickable {
fact.value = onClick()
}
) {
Text(.. )
}
}
``` |
2329 | Using Jquery How can I select an option by either its value or text in 1 statement?
```
<select>
<option value="0">One</option>
<option value="1">Two</option>
</select>
```
I can do these 2 statements individually but how can I combine the following 2 statements into 1 select OR statement?
```
$('select option[text="Two"]'); //This selects by text
$('select option[value="4"]'); //This selects by value
``` | It's pretty simple to respond to orientation changes in React Native. Every view in React Native has a listener called **onLayout** which gets invoked upon orientation change. We just need to implement this. It's better to store the dimensions in a state variable and update them on each orientation change so that re-rendering happens after the change. Otherwise we need to reload the view to respond to the orientation change.
[![enter image description here](https://i.stack.imgur.com/omHkY.gif)](https://i.stack.imgur.com/omHkY.gif)
```
import React, { Component } from "react";
import { StyleSheet, Text, View, Image, Dimensions } from "react-native";
var { height, width } = Dimensions.get("window");
export default class Com extends Component {
constructor() {
console.log("constructor");
super();
this.state = {
layout: {
height: height,
width: width
}
};
}
_onLayout = event => {
console.log(
"------------------------------------------------" +
JSON.stringify(event.nativeEvent.layout)
);
this.setState({
layout: {
height: event.nativeEvent.layout.height,
width: event.nativeEvent.layout.width
}
});
};
render() {
console.log(JSON.stringify(this.props));
return (
<View
style={{ backgroundColor: "red", flex: 1 }}
onLayout={this._onLayout}
>
<View
style={{
backgroundColor: "green",
height: this.state.layout.height - 10,
width: this.state.layout.width - 10,
margin: 5
}}
/>
</View>
);
}
}
``` |
2575 | I adapted the following code found [here](http://pythonexcels.com/automating-pivot-tables-with-python/) to create a pivot table in my existing excel sheet:
```
import win32com.client as win32
win32c = win32.constants
import sys
import itertools
tablecount = itertools.count(1)
def addpivot(wb,sourcedata,title,filters=(),columns=(),
rows=(),sumvalue=(),sortfield=""):
newsheet = wb.Sheets.Add()
newsheet.Cells(1,1).Value = title
newsheet.Cells(1,1).Font.Size = 16
tname = "PivotTable%d"%tablecount.next()
pc = wb.PivotCaches().Add(SourceType=win32c.xlDatabase,
SourceData=sourcedata)
pt = pc.CreatePivotTable(TableDestination="%s!R4C1"%newsheet.Name,
TableName=tname,
DefaultVersion=win32c.xlPivotTableVersion10)
for fieldlist,fieldc in ((filters,win32c.xlPageField),
(columns,win32c.xlColumnField),
(rows,win32c.xlRowField)):
for i,val in enumerate(fieldlist):
wb.ActiveSheet.PivotTables(tname).PivotFields(val).Orientation = fieldc
wb.ActiveSheet.PivotTables(tname).PivotFields(val).Position = i+1
wb.ActiveSheet.PivotTables(tname).AddDataField(wb.ActiveSheet.PivotTables(tname).
PivotFields(sumvalue),sumvalue,win32c.xlSum)
def runexcel():
excel = win32.gencache.EnsureDispatch('Excel.Application')
#excel.Visible = True
try:
wb = excel.Workbooks.Open('18.03.14.xls')
except:
print "Failed to open spreadsheet 18.03.14.xls"
sys.exit(1)
ws = wb.Sheets('defaulters')
xldata = ws.UsedRange.Value
newdata = []
for row in xldata:
if len(row) == 4 and row[-1] is not None:
newdata.append(list(row))
rowcnt = len(newdata)
colcnt = len(newdata[0])
wsnew = wb.Sheets.Add()
wsnew.Range(wsnew.Cells(1,1),wsnew.Cells(rowcnt,colcnt)).Value = newdata
wsnew.Columns.AutoFit()
src = "%s!R1C1:R%dC%d"%(wsnew.Name,rowcnt,colcnt)
addpivot(wb,src,
title="Employees by leads",
filters=("Leads",),
columns=(),
rows=("Name",),
sumvalue="Actual hours",
sortfield=())
if int(float(excel.Version)) >= 12:
wb.SaveAs('new18.03.14.xlsx',win32c.xlOpenXMLWorkbook)
else:
wb.SaveAs('new18.03.14.xls')
excel.Application.Quit()
if __name__ == "__main__":
runexcel()
```
This line of code,
`wb.ActiveSheet.PivotTables(tname).AddDataField(wb.ActiveSheet.PivotTables(tname).PivotFields(sumvalue),sumvalue,win32c.xlSum)`
returns the following error:
`pywintypes.com_error: (-2147352567, 'Exception occurred.', (0, u'Microsoft Excel', u'PivotFields method of PivotTable class failed', u'xlmain11.chm', 0, -2146827284), None)`.
When I remove that line, the pivot table is generated without any data fields. Is there something I'm doing wrong? | As this is the one of the first Google hits when searching for Excel pivot tables from Python, I post my example code. This code generates a simple pivot table in Excel through a COM server, with some basic filters, columns, rows, and some number formatting applied.
I hope this helps someone not to waste half a day on it (like I did...)
```
import win32com.client
Excel = win32com.client.gencache.EnsureDispatch('Excel.Application') # Excel = win32com.client.Dispatch('Excel.Application')
win32c = win32com.client.constants
wb = Excel.Workbooks.Add()
Sheet1 = wb.Worksheets("Sheet1")
TestData = [['Country','Name','Gender','Sign','Amount'],
['CH','Max' ,'M','Plus',123.4567],
['CH','Max' ,'M','Minus',-23.4567],
['CH','Max' ,'M','Plus',12.2314],
['CH','Max' ,'M','Minus',-2.2314],
['CH','Sam' ,'M','Plus',453.7685],
['CH','Sam' ,'M','Minus',-53.7685],
['CH','Sara','F','Plus',777.666],
['CH','Sara','F','Minus',-77.666],
['DE','Hans','M','Plus',345.088],
['DE','Hans','M','Minus',-45.088],
['DE','Paul','M','Plus',222.455],
['DE','Paul','M','Minus',-22.455]]
for i, TestDataRow in enumerate(TestData):
for j, TestDataItem in enumerate(TestDataRow):
Sheet1.Cells(i+2,j+4).Value = TestDataItem
cl1 = Sheet1.Cells(2,4)
cl2 = Sheet1.Cells(2+len(TestData)-1,4+len(TestData[0])-1)
PivotSourceRange = Sheet1.Range(cl1,cl2)
PivotSourceRange.Select()
Sheet2 = wb.Worksheets(2)
cl3=Sheet2.Cells(4,1)
PivotTargetRange= Sheet2.Range(cl3,cl3)
PivotTableName = 'ReportPivotTable'
PivotCache = wb.PivotCaches().Create(SourceType=win32c.xlDatabase, SourceData=PivotSourceRange, Version=win32c.xlPivotTableVersion14)
PivotTable = PivotCache.CreatePivotTable(TableDestination=PivotTargetRange, TableName=PivotTableName, DefaultVersion=win32c.xlPivotTableVersion14)
PivotTable.PivotFields('Name').Orientation = win32c.xlRowField
PivotTable.PivotFields('Name').Position = 1
PivotTable.PivotFields('Gender').Orientation = win32c.xlPageField
PivotTable.PivotFields('Gender').Position = 1
PivotTable.PivotFields('Gender').CurrentPage = 'M'
PivotTable.PivotFields('Country').Orientation = win32c.xlColumnField
PivotTable.PivotFields('Country').Position = 1
PivotTable.PivotFields('Country').Subtotals = [False, False, False, False, False, False, False, False, False, False, False, False]
PivotTable.PivotFields('Sign').Orientation = win32c.xlColumnField
PivotTable.PivotFields('Sign').Position = 2
DataField = PivotTable.AddDataField(PivotTable.PivotFields('Amount'))
DataField.NumberFormat = '#\'##0.00'
Excel.Visible = 1
wb.SaveAs('ranges_and_offsets.xlsx')
Excel.Application.Quit()
``` |
3186 | I use an `entity` form type to provide a list of `Position` entities in a form. I use it often enough (each with the same "setup" code to customize it) that I've decided to make a custom form type from it for better re-use.
Here's the current form type:
```
class PositionType extends AbstractType
{
private $om;
public function __construct(ObjectManager $om, $mode)
{
$this->om = $om;
}
public function buildForm(FormBuilderInterface $builder, array $options)
{
}
public function setDefaultOptions(OptionsResolverInterface $resolver)
{
// I need to pass "mode" as an option when building the form.
$mode = ???
$query_builder = function (EntityRepository $em) use ($mode) {
// Limit the positions returned based on the editing mode
return $em
->createQueryBuilder('Position')
->orderBy('Position.name')
->leftJoin('Position.type', 'Type')
->andWhere('Type.id IN (:ids)')
->setParameter('ids', Type::typesForMode($mode))
;
};
$resolver
->setRequired(array('mode'))
->setDefaults(array(
'label' => 'Position',
'class' => 'AcmeBundle:Position',
'property' => 'name',
'query_builder' => $query_builder,
'empty_value' => '',
'empty_data' => null,
'constraints' => array(
new NotBlank(),
),
))
;
}
public function getParent()
{
return 'entity';
}
public function getName()
{
return 'position';
}
}
```
Don't worry about the specifics in the query builder, that doesn't matter. The part that does matter is I'm trying to use a form type option in the query builder.
How can I do this? The problem is I can't use `$mode` (the option I want to pass to alter the query builder) in `setDefaultOptions`.
I was beginning to look for a way to set the query builder from inside `buildForm` but I'm not sure I can do that. | This is fairly easy to achieve. You can build an option that depends on another option.
[OptionResolver Component - Default Values that Depend on another Option](http://symfony.com/doc/current/components/options_resolver.html#default-values-that-depend-on-another-option)
Basically you will do:
```
$resolver
->setRequired(array('mode', 'em')) // "em" for EntityManager as well
->setDefaults(array(
'label' => 'Position',
'class' => 'AcmeBundle:Position',
'property' => 'name',
#####################################################
'query_builder' => function(Options $options){
// Obviously you will need to pass the EntityManager
$em = $options['em'];
// Limit the positions returned based on the editing mode
return $em
->createQueryBuilder('Position')
->orderBy('Position.name')
->leftJoin('Position.type', 'Type')
->andWhere('Type.id IN (:ids)')
->setParameter('ids', Type::typesForMode($options['mode'])) //
;
},
####################################
'empty_value' => '',
'empty_data' => null,
'constraints' => array(
new NotBlank(),
),
))
;
```
This is just a rough representation of what `OptionsResolver` can do. Hope it helps :) |
3266 | Suppose that a Tor client wants to access a certain hidden service. According to the protocol, instead of submitting a request directly to the server IP (which is hidden[1][2]), this client submits a request via a series of relays.
However, at some point, there will be a final relay in charge of delivering the client's message specifically to the server running the hidden service. In order to do so, this final relay must know the IP of this hidden server, otherwise the current internet infrastructure cannot deliver the message.
If the aforementioned steps are indeed correct, this means that in order to host a website using TOR Hidden Service you must reveal the IP address to a final relay. Therefore, Tor network does not hide the IP address of hidden services.
How to reconcile that? Am I missing something?
---
[1]: "TOR Hidden Service allows you to host a website, without revealing where the website is, and hence protects the identity of the publisher/webmaster.", [WikiBooks](https://en.wikibooks.org/wiki/How_to_Protect_your_Internet_Anonymity_and_Privacy/TOR_Hidden_Service_for_Anonymous_Websites)
[2]: "The Tor network hides the IP address of hidden services, instead using onion addresses and public keys to keep the real location hidden.", [Privay.net](https://privacy.net/make-site-visible-dark-web-tor-hidden-services/) | Tor uses TCP tunnels, so - regardless of the previous answer - no need to use it. The hidden service is reached from the Tor node that is hosting it, usually through a localhost. The scenario you've described about IP revealing - yes, it *can* be a privacy problem. The design doc states clear - the system is anonymizing mostly the client, not the server - it's only rudimentary in a *standard* setup. To conceal your server IP use bridges-only for your hosting server(s) - that will elevate the privacy, but if you want both client and server to be equally anonymized - use I2P, not Tor - it's designed for exactly that purpose |
3284 | Here is my problem:
I have a task running a Docker image on Amazon ECS but I would like to make a new Docker image from the running instance of the container.
I see the id of the instance on Amazon ECS; I have made an AMI but I would like to make a new docker image that I can pull from Amazon.
Any ideas?
Regards and thanks. | To create an image from a container, execute the command below:
>
> `docker commit container_id imagename`
>
>
> |
3444 | Can anyone please convert the following Arduino code to embedded C code? I am very thankful to whoever converts this to embedded C code. (This code is for Arduino LCD interfacing with an ultrasonic sensor.)
```
#include <LiquidCrystal.h>
int inches = 0;
int cm = 0;
// initialize the library with the numbers of the interface pins
LiquidCrystal lcd(12, 11, 5, 4, 3, 2);
void setup() {
// set up the LCD's number of columns and rows:
lcd.begin(16, 2);
// Print a message to the LCD.
pinMode(7, INPUT);
}
void loop() {
lcd.clear();
cm = 0.01723 * readUltrasonicDistance(7);
inches = (cm / 2.54);
if (cm<40){
lcd.setCursor(0, 0);
// print the number of seconds since reset:
lcd.print("Caution: ");
lcd.setCursor(0,1);
lcd.print("Objects Nearby");
delay(1000);
}
}
long readUltrasonicDistance(int pin)
{
pinMode(pin, OUTPUT); // Clear the trigger
digitalWrite(pin, LOW);
delayMicroseconds(2);
// Sets the pin on HIGH state for 10 micro seconds
digitalWrite(pin, HIGH);
delayMicroseconds(10);
digitalWrite(pin, LOW);
pinMode(pin, INPUT);
// Reads the pin, and returns the sound wave travel time in microseconds
return pulseIn(pin, HIGH);
}
``` | Accepted answer is not an accessible solution.
----------------------------------------------
I have made some corrections and some observations here. Do not use the accepted answer in production if you stumble across this question in the future. It is an awful experience with a keyboard.
The answer below fixes some of the CSS issues to make it more accessible.
However **I would recommend you reconsider the no JavaScript requirement.**
I can understand having a good fall-back (which the example I give below with the fixes is) but there is no way you can make a fully accessible set of CSS only tabs.
Firstly you should use WAI-ARIA to complement your HTML to make things even more clear for screen readers. See the [tabs examples on W3C](https://www.w3.org/TR/wai-aria-practices/examples/tabs/tabs-1/tabs.html) to see what WAI-ARIA roles you should be using. This is NOT possible without JavaScript as states need to change (`aria-hidden` for example should change).
Secondly, you should be able to use certain shortcut keys. Press the `home` key for example in order to return to the first tab, something you can only do with a little JS help.
With that being said here are a few things I fixed with the accepted answer to at least give you a good starting point as your 'no JavaScript **fallback**'.
### Problem 1 - `tabindex` on the label.
By adding this you are creating a focusable element that cannot be activated via keyboard (you cannot press `space` or `Enter` on the label to change selection, unless you use JavaScript).
In order to fix this I simply removed the `tabindex` from the labels.
### Problem 2 - no focus indicators when navigating via keyboard.
In the example the tabs only work when you are focused on the radio buttons (which are hidden). However at this point there is no focus indicator as the styling is applying styling to the checkbox when it is focused and not to its label.
In order to fix this I adjusted the CSS with the following
```
/*make it so when the checkbox is focused we add a focus indicator to the label.*/
.tabs__input:focus + label {
outline: 2px solid #333;
}
```
### Problem 3 - using the same state for `:hover` and `:focus` states.
This is another bad practice that needs to go away, always have a different way of showing hover and focus states. Some screen reader and screen magnifier users will use their mouse to check they have the correct item focused and orientate themselves on a page. Without a separate hover state it is difficult to check you are hovered over a focused item.
```
/*use a different colour background on hover, you should not use the same styling for hover and focus states*/
.tabs__label:hover{
background-color: #ccc;
}
```
Example
-------
In the example I have added a hyperlink at the top so you can see where your focus indicator is when using a keyboard.
When your focus indicator is on one of the two tabs you can press the arrow keys to change tab (which is expected behaviour) and the focus indicator will adjust accordingly to make it clear which tab was selected.
```css
.tabs {
background-color: #eee;
min-height: 400px;
}
.tabs__list {
border-bottom: 1px solid black;
display: flex;
flex-direction: row;
list-style: none;
margin: 0;
padding: 0;
position: relative;
}
.tabs__tab {
padding: 0.5rem;
}
.tabs__content {
display: none;
left: 0;
padding: 0.5rem;
position: absolute;
top: 100%;
}
.tabs__input {
position: fixed;
top:-100px;
}
.tabs__input+label {
cursor: pointer;
}
.tabs__label:hover{
background-color: #ccc;
}
.tabs__input:focus + label {
outline: 2px solid #333;
}
.tabs__input:checked+label {
color: red;
}
.tabs__input:checked~.tabs__content {
display: block;
}
```
```html
<a href="#">A link so you can see where your focus indicator is</a>
<div class="tabs">
<ul class="tabs__list">
<li class="tabs__tab">
<input class="tabs__input" type="radio" id="tab-0" name="tab-group" checked>
<label for="tab-0" class="tabs__label" role="button">Tab 0</label>
<div class="tabs__content">
Tab 0 content
</div>
</li>
<li class="tabs__tab">
<input class="tabs__input" type="radio" id="tab-1" name="tab-group">
<label for="tab-1" class="tabs__label" role="button">Tab 1</label>
<div class="tabs__content">
Tab 1 content
</div>
</li>
</ul>
</div>
``` |
3578 | Can anybody help me find out whether IFC entity type names are case sensitive or case insensitive?
For example: Can we replace `IFCPERSON` with `IfcPerson` (camel case) or `ifcperson` (small) in an \*.ifc file? | How about applying the following convention in every single context:
Simply assume that they are case sensitive and work accordingly.
If you always do that, you will never have a problem.
If you see different casing examples, and all of them work, you can assume it is not case sensitive.
Otherwise, you will always be on the safe side if you simply follow the case conventions that you see and are proven.
Furthermore, you should always implement unit tests for every piece of functionality.
If you have questions about case sensitivity, implement unit tests to prove your assumptions right. |
3850 | I'm trying to use a plotly example in Python 3, but getting a syntax error in this line:
```
return map(lambda (x, y, an): (x, y), cornersWithAngles)
```
I already read that using parentheses to unpack the arguments in a lambda is not allowed in Python 3, but I don't know how exactly to adjust my code to solve that problem.
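For reference, a small sketch of the usual Python 3 rewrite (tuple parameter unpacking was removed by PEP 3113); the sample data here is made up:

```python
corners_with_angles = [(0, 0, 1.2), (3, 0, 2.3)]  # made-up (x, y, angle) tuples

# Python 2:  map(lambda (x, y, an): (x, y), corners_with_angles)
# Python 3 equivalents:
pairs = list(map(lambda t: (t[0], t[1]), corners_with_angles))
pairs = [(x, y) for x, y, an in corners_with_angles]   # often the clearer option
print(pairs)
```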
Here is the complete code (error is on line 16):
```
import plotly.plotly as py
import plotly.graph_objs as go
from plotly.tools import FigureFactory as FF
import scipy
import numpy as np  # needed for np.arctan2 and np.pi used below
def PolygonSort(corners):
n = len(corners)
cx = float(sum(x for x, y in corners)) / n
cy = float(sum(y for x, y in corners)) / n
cornersWithAngles = []
for x, y in corners:
an = (np.arctan2(y - cy, x - cx) + 2.0 * np.pi) % (2.0 * np.pi)
cornersWithAngles.append((x, y, an))
cornersWithAngles.sort(key = lambda tup: tup[2])
return map(lambda (x, y, an): (x, y), cornersWithAngles)
def PolygonArea(corners):
n = len(corners)
area = 0.0
for i in range(n):
j = (i + 1) % n
area += corners[i][0] * corners[j][1]
area -= corners[j][0] * corners[i][1]
area = abs(area) / 2.0
return area
corners = [(0, 0), (3, 0), (2, 10), (3, 4), (1, 5.5)]
corners_sorted = PolygonSort(corners)
area = PolygonArea(corners_sorted)
x = [corner[0] for corner in corners_sorted]
y = [corner[1] for corner in corners_sorted]
annotation = go.Annotation(
x=5.5,
y=8.0,
text='The area of the polygon is approximately %s' % (area),
showarrow=False
)
trace1 = go.Scatter(
x=x,
y=y,
mode='markers',
fill='tozeroy',
)
layout = go.Layout(
annotations=[annotation],
xaxis=dict(
range=[-1, 9]
),
yaxis=dict(
range=[-1, 12]
)
)
trace_data = [trace1]
fig = go.Figure(data=trace_data, layout=layout)
py.iplot(fig, filename='polygon-area')
``` | This is an oversight in the SystemVerilog LRM. There's no syntax to specify a required set of parameters for an interface in a module header.
You might check your synthesis tool to see if they provide any way of specifying parameter overrides for the top-level synthesis instance. |
3859 | I hosted a `DotNetNuke` application on my production server, and locally it works perfectly. But when browsing, it redirects to the error page.
How do I set the `default.aspx` as my application default page? I am getting the error as below:
```
DotNetNuke Error
--------------------------------------------------------------------------------
Windows Vista
Return to Site
```
Can anyone help me, please? It will be appreciated.
**UPDATE:**
Hi, I changed the Path in the PortalAlias table to the server url and `default.aspx` as the starting page, but it displays the `error.aspx` as default page.
Please help me resolve this problem. | That's very easy - build a folder tree based on parts of the GUID values.
For example, make 256 folders each named after the first byte and only store there files that have a GUID starting with this byte. If that's still too many files in one folder - do the same in each folder for the second byte of the GUID. Add more levels if needed. Search for a file will be very fast.
By selecting the number of bytes you use for each level you can effectively choose the tree structure for your scenario. |
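A small Python sketch of that idea, assuming two levels of nesting and a made-up file extension and root path:

```python
import uuid
from pathlib import Path

def path_for_guid(root, guid, levels=2):
    # Use the first `levels` bytes of the GUID (two hex characters each) as nested folder names.
    hex_str = uuid.UUID(str(guid)).hex
    parts = [hex_str[i * 2:i * 2 + 2] for i in range(levels)]
    return Path(root).joinpath(*parts, f"{guid}.dat")

print(path_for_guid("/data/files", uuid.uuid4()))
# e.g. /data/files/3f/a1/3fa1....dat
```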
4641 | Lets say, I have Product and Score tables.
```
Product
-------
id
name
Score
-----
id
ProductId
ScoreValue
```
I want to get the top 10 Products with the highest AVERAGE scores. How do I get the average and select the top 10 products in one select statement?
Here is mine, which selects unexpected rows:
```
SELECT TOP 10 Product.ProductName Score.Score
FROM Product, Score
WHERE Product.ID IN (select top 100 productid
from score
group by productid
order by sum(score) desc)
order by Score.Score desc
``` | Give this a try,
```
WITH records
AS
(
SELECT a.ID, a.Name, AVG(b.ScoreValue) avg_score,
DENSE_RANK() OVER (ORDER BY AVG(b.ScoreValue) DESC) rn
FROM Product a
INNER JOIN Score b
ON a.ID = b.ProductID
GROUP BY a.ID, a.Name
)
SELECT ID, Name, Avg_Score
FROM records
WHERE rn <= 10
ORDER BY avg_score DESC
```
The reason why I am not using `TOP` is because it will not handle duplicate record having the highest average. But you can use `TOP WITH TIES` instead. |
5027 | I'm convinced someone else must have had this same issue before but I just can't find anything.
Given a table of data:
```
DECLARE @Table TABLE
(
[COL_NAME] nvarchar(30) NOT NULL,
[COL_AGE] int NOT NULL
);
INSERT INTO @Table
SELECT N'Column 1', 4 UNION ALL
SELECT N'Col2', 2 UNION ALL
SELECT N'Col 3', 56 UNION ALL
SELECT N'Column Four', 8 UNION ALL
SELECT N'Column Number 5', 12 UNION ALL
SELECT N'Column Number Six', 9;
```
If I use SSMS and set my output to text, running this script:
```
SELECT [COL_AGE], [COL_NAME] AS [MyCol] FROM @Table
```
Produces this:
```
COL_AGE MyCol
----------- -----------------
4 Column 1
2 Col2
56 Col 3
8 Column Four
12 Column Number 5
9 Column Number Six
```
Note that the data is neatly formatted and spaced.
I want to display the contents like SQL does when you post your results to text:
```
'Column 1 '
'Col2 '
'Col 3 '
'Column Four '
'Column Number 5 '
'Column Number Six'
```
The following is just to describe what I want, I understand it's obviously a horrible piece of script and should never make its way into production:
```
SELECT
N'''' + LEFT(
[COL_NAME] + SPACE( ( SELECT MAX(LEN([COL_NAME])) FROM @Table ) )
, ( SELECT MAX(LEN([COL_NAME])) FROM @Table )
) + N''''
FROM @Table
```
Originally, I tried this script, which is what I'm trying to get right:
```
SELECT
N'''' + LEFT(
[COL_NAME] + SPACE(MAX(LEN([COL_NAME])))
, MAX(LEN([COL_NAME]))
) + N''''
FROM @Table
```
But it returns the following error:
>
> Msg 8120, Level 16, State 1, Line 28 Column '@Table.COL\_NAME' is
> invalid in the select list because it is not contained in either an
> aggregate function or the GROUP BY clause.
>
>
>
The script is part of a much bigger script and it all has to happen within the SELECT statement, I can't use external variables to first look up the MAX(LEN()) because the bigger script iterates through other tables.
Any help would be appreciated. | I just used a quick CROSS APPLY to get the length of the buffer you want to use:
```
select
N'''' + LEFT(
[COL_NAME] + SPACE( t2.MLEN )
, t2.MLEN
) + N''''
from @Table
CROSS APPLY ( SELECT MAX(LEN([COL_NAME])) MLEN FROM @Table ) t2
``` |
5176 | I'm trying to deploy an app to Heroku, but when I push my config.ru file I get errors.
Here is Heroku's log:
```
2013-01-16T21:04:14+00:00 heroku[web.1]: Starting process with command `bundle exec rackup config.ru -p 29160`
2013-01-16T21:04:16+00:00 app[web.1]: from /app/vendor/bundle/ruby/1.9.1/gems/rack-1.4.4/lib/rack/builder.rb:51:in `initialize'
2013-01-16T21:04:16+00:00 app[web.1]: from /app/vendor/bundle/ruby/1.9.1/bin/rackup:19:in `<main>'
2013-01-16T21:04:16+00:00 app[web.1]: from /app/vendor/bundle/ruby/1.9.1/gems/rack-1.4.4/lib/rack/builder.rb:51:in `instance_eval'
2013-01-16T21:04:16+00:00 app[web.1]: from /app/vendor/bundle/ruby/1.9.1/gems/rack-1.4.4/lib/rack/server.rb:137:in `start'
2013-01-16T21:04:16+00:00 app[web.1]: from /app/config.ru:in `new'
2013-01-16T21:04:16+00:00 app[web.1]: from /app/vendor/bundle/ruby/1.9.1/gems/rack-1.4.4/lib/rack/server.rb:304:in `wrapped_app'
2013-01-16T21:04:16+00:00 app[web.1]: from /app/vendor/bundle/ruby/1.9.1/gems/rack-1.4.4/bin/rackup:4:in `<top (required)>'
2013-01-16T21:04:16+00:00 app[web.1]: from /app/vendor/bundle/ruby/1.9.1/gems/rack-1.4.4/lib/rack/builder.rb:40:in `parse_file'
2013-01-16T21:04:16+00:00 app[web.1]: from /app/vendor/bundle/ruby/1.9.1/gems/rack-1.4.4/lib/rack/server.rb:200:in `app'
2013-01-16T21:04:16+00:00 app[web.1]: from /app/config.ru:in `<main>'
2013-01-16T21:04:16+00:00 app[web.1]: from /app/vendor/bundle/ruby/1.9.1/gems/rack-1.4.4/lib/rack/builder.rb:40:in `eval'
2013-01-16T21:04:16+00:00 app[web.1]: /app/config.ru:1:in `block in <main>': undefined method `require' for #<Rack::Builder:0x0000000281d6a0 @run=nil, @map=nil, @use=[]> (NoMethodError)
2013-01-16T21:04:16+00:00 app[web.1]: from /app/vendor/bundle/ruby/1.9.1/gems/rack-1.4.4/lib/rack/server.rb:254:in `start'
2013-01-16T21:04:16+00:00 app[web.1]: from /app/vendor/bundle/ruby/1.9.1/bin/rackup:19:in `load'
2013-01-16T21:04:17+00:00 heroku[web.1]: State changed from starting to crashed
2013-01-16T21:04:17+00:00 heroku[web.1]: Process exited with status 1
2013-01-16T21:04:18+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path=/ host=mazzocato.herokuapp.com fwd=201.95.41.116 dyno= queue= wait= connect= service= status=503 bytes=
2013-01-16T21:04:19+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path=/favicon.ico host=mazzocato.herokuapp.com fwd=201.95.41.116 dyno= queue= wait= connect= service= status=503 bytes=
2013-01-16T21:04:20+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path=/favicon.ico host=mazzocato.herokuapp.com fwd=201.95.41.116 dyno= queue= wait= connect= service= status=503 bytes=
2013-01-16T21:04:37+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path=/ host=mazzocato.herokuapp.com fwd=201.95.41.116 dyno= queue= wait= connect= service= status=503 bytes=
2013-01-16T21:04:06+00:00 heroku[web.1]: Unidling
```
Here is my config.ru file:
```
require './app'
run Sinatra::Application
```
my main file is `app.rb`
Any help? | Try this:
```
<A.*HREF\s*=\s*(?:"|')([^"']*)(?:"|').*>(.*)<\/A>
```
Group1 and Group2 will give you the desired result. |
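A quick Python sketch showing how the two capture groups of that regex would be read (the sample HTML string is made up; the escaped `\/` from the answer is written as plain `/`, which is equivalent here):

```python
import re

# The regex from the answer, case-insensitive.
pattern = re.compile(r"""<A.*HREF\s*=\s*(?:"|')([^"']*)(?:"|').*>(.*)</A>""", re.IGNORECASE)

sample = '<a href="https://example.com/page">Example link</a>'   # made-up input
match = pattern.search(sample)
if match:
    print(match.group(1))  # the URL:         https://example.com/page
    print(match.group(2))  # the anchor text: Example link
```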
5396 | Why does the block with text shift to the bottom? I know how to fix this issue (add "overflow: hidden" to the box), but I don't understand why it shifts down: the text inside the box is short, and the margins in the browser inspector are the same as the margins of the example without text.
[Example of the problem](https://codepen.io/AzatKaumov/pen/ZKQpOR/)
HTML:
```
<div class="with-text">
<div class="box1">
SIMPLE TEXT
</div>
<div class="box2">
</div>
</div>
<div class="without-text">
<div class="box1">
</div>
<div class="box2">
</div>
</div>
```
CSS:
```
html, body {
font-size: 10px;
margin: 0;
height: 100%;
}
.box1 {
display: inline-block;
margin: 5px;
width: 50px;
height: 50px;
background: blue;
/* Fix the problem */
/* overflow: hidden; */
color: white;
}
.box2 {
display: inline-block;
margin: 5px;
width: 50px;
height: 50px;
background: red;
}
.with-text:before {
display: block;
content: "with-text";
text-transform: uppercase;
margin: 1rem;
}
.with-text {
box-sizing: border-box;
height: 50%;
border: 1px solid;
}
.without-text:before {
display: block;
content: "without text";
text-transform: uppercase;
margin: 1rem;
}
.without-text {
box-sizing: border-box;
height: 50%;
border: 2px solid black;
}
``` | The problem is that by default vertical alignment of **inline elements** – **baseline**,
The text inside the element affects it and pushes the div down.
Use `vertical-align: top` to solve the issue. |
5614 | I'm having some trouble with a problem in linear algebra:
Let $A$ be a matrix with dimensions $m \times n$ and $B$ also a matrix but with dimensions $n \times m$ which is **not** a null matrix. (That's all that's written - I assume A may or may not be a null matrix).
Given that $AB=0$:
1. Prove there is a non-trivial solution to the system of equations $Ax=0$
2. Assume $A\neq0$ . Does the system $Bx=0$ also have a non-trivial solution? If so, prove the argument. If not, provide a contradictory example.
There's a third part to the question but I managed to solve it and its content isn't really relevant here because it provided us a defined $A$ of real numbers, but I'm pretty lost with the first two arguments - I'm having trouble putting what I think into words. Can anyone help with this? Thanks!
EDIT:
Okay so I think I'm supposed to deal with the different cases of $m$ and $n$:
If $n > m$ obviously the system $Ax=0$ has infinite solutions because we'll have more variables than equations.
What I haven't quite figured out is how to prove that:
If $AB=0$ and $m=n$ or $m > n$, then it immediately follows that $Rank(A) < n$ .
Any help with this would be greatly appreciated. | Solved it.
1. If $AB=0$ that means that matrix $A$ multiplied by any column vector $b$ in $B$ will be equal to the zero vector. Since we know that $B\neq 0$, there must be at least one column vector $b$ in $B$ that **isn't** the zero vector. So to summarize, since $A$ multiplied by any column vector in $B$ returns 0 and we know there is a non-zero column vector in $B$, the system of equations $Ax=0$ has at least one non-trivial solution: take $x$ to be any non-zero column vector of $B$.
2. I have found a contradictory example. Basically to disprove the argument I need to find matrices $A,B$ that meet the following criteria:
* $A\_{m\times n}, B\_{n\times m}$
* $A,B\neq0$
* $Bx=0$ **only** has a trivial solution
Here they are:
$$
A=\begin{bmatrix}
0&0&0&1\\
0&0&0&0\\
0&0&0&0
\end{bmatrix}\_{3\times 4}\ ,\
B=
\begin{bmatrix}
1&0&0\\
0&1&0\\
0&0&1\\
0&0&0
\end{bmatrix}
\\
$$
$$
A\_{m\times n}, B\_{n\times m}\ \ \checkmark
\\
A,B\neq 0\ \ \checkmark
\\
AB=0\ \ \checkmark
\\
Bx=0\rightarrow\ one\ solution\ \ \checkmark
$$
And there's a perfect contradictory example to the argument. |
5615 | I have a GridView. Its data source is a DataTable that is loaded from the database. In this GridView, I have a template column.
```
<asp:TemplateField HeaderText="Product Type" SortExpression="ProductID">
<ItemStyle CssClass="MP-table-tb-display-item" />
<ItemTemplate>
<div class="MP-table-tb-display-main">
<asp:LinkButton ID="lnkview" CommandArgument='<%# Eval("ProductID") %>' CommandName="Viewproduct"
runat="server" CausesValidation="False" OnClick="lnkview_Click"><h4>
<%# Eval("Name") %>
</h4>
</asp:LinkButton>
</div>
<br />
<div class="MP-table-tb-display">
<p>
<span>KEY</span><%# Eval("[product_type_key]") %></p>
<br />
<a target="_blank" href='<%# Eval("SourceURL") %>'>Source</a>
</div>
</ItemTemplate>
</asp:TemplateField>
```
In this, I want the Source hyperlink to show only when data is available in `<%# Eval("SourceURL") %>`. I am not able to get the SourceURL value in the `RowDataBound` event. Please guide me.
I tried the following too, but it is not working properly:
```
<a target="_blank" href=' <%= Eval("SourceURL")!=null ? Eval("SourceURL") : "style='display: none'" %> />'> Source</a>
``` | use this instead
```
<asp:hyperlink Target="_blank" NavigateUrl='<%# Eval("SourceURL") %>' Visible = '<%# Eval("SourceURL") == null ? false : true %>' >
```
Similarly you could use the `<a>` tag to control its visibility. The if condition would go in the Style attribute and not in the href attribute. Something like this:
```
Style=display:Eval('some_val') == null ? none : block
``` |
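If you'd rather handle it in code-behind instead, a minimal sketch of a `RowDataBound` handler could look like the following (assumptions: the grid is wired to `OnRowDataBound`, the template uses an `asp:HyperLink` with `ID="lnkSource"`, and the data item exposes a `SourceURL` field; adjust names to your markup):

```
protected void gridProducts_RowDataBound(object sender, GridViewRowEventArgs e)
{
    if (e.Row.RowType != DataControlRowType.DataRow)
        return;

    // Pull the bound value for this row and locate the link control in the ItemTemplate
    string sourceUrl = DataBinder.Eval(e.Row.DataItem, "SourceURL") as string;
    HyperLink lnkSource = (HyperLink)e.Row.FindControl("lnkSource");

    if (lnkSource != null)
    {
        // Hide the link entirely when there is no URL to point at
        lnkSource.Visible = !string.IsNullOrEmpty(sourceUrl);
        lnkSource.NavigateUrl = sourceUrl;
    }
}
```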
5730 | my code broke somewhere along the way, and crashes when using the navigation bar buttons.
Error message:
`*** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[UIView newMemoViewController:didAddMemo:]: unrecognized selector sent to instance 0x5b55a60'`
When debugging, the program does run the `cancel` method, and throws an exception at the `@synthesize` line. However, I cannot see anything wrong with it.
The symptoms are identical, so I am including the relevant code only for the `Cancel` button:
**NewMemoViewController.h**
```
#import <UIKit/UIKit.h>
@protocol NewMemoDelegate;
@class AKVoiceMemo;
@interface NewMemoViewController : UIViewController {
@private
AKVoiceMemo *voiceMemo;
id <NewMemoDelegate> delegate;
}
@property (nonatomic, retain) AKVoiceMemo *voiceMemo;
@property (nonatomic, assign) id <NewMemoDelegate> delegate;
@end
@protocol NewMemoDelegate <NSObject>
- (void)newMemoViewController:(NewMemoViewController *)newMemoViewController didAddMemo:(AKVoiceMemo *)voiceMemo;
@end
```
**NewMemoViewController.m**
```
#import "NewMemoViewController.h"
@synthesize delegate;
- (void)viewDidLoad {
UIBarButtonItem *cancelButtonItem = [[UIBarButtonItem alloc] initWithTitle:@"Cancel" style:UIBarButtonItemStyleBordered target:self action:@selector(cancel)];
self.navigationItem.leftBarButtonItem = cancelButtonItem;
[cancelButtonItem release];
}
- (void)cancel {
[self.delegate newMemoViewController:self didAddMemo:nil];
}
```
Your help would be appreciated.
**Edit**: the delegate is the `RootViewController`:
```
- (void)newMemoViewController:(NewMemoViewController *)newMemoViewController didAddMemo:(AKVoiceMemo *)voiceMemo {
if (voiceMemo){
// Show the note in a new view controller
// TODO: Implement this
}
[self dismissModalViewControllerAnimated:YES];
}
``` | You're probably setting the delegate of `NewMemoViewController` to a `UIView` object instead of an object that implements the `NewMemoDelegate` protocol.
The error message is telling you that a `newMemoViewController:didAddMemo:` message was sent to a `UIView` object and the `UIView` object didn't know what to do with it. Since your `cancel` method calls `newMemoViewController:didAddMemo:` on the delegate, it is the *delegate* which is the `UIView` object that doesn't recognize the `newMemoViewController:didAddMemo:` message. In other words, your delegate is a `UIView` and it doesn't implement the `NewMemoDelegate` protocol.
If you are correctly setting the delegate, then @jtbandes makes a great point: The delegate is probably being released and a `UIView` object is taking over the same memory location, thus "becoming" the delegate by accident. You're doing the right thing by using the `assign` attribute for your delegate; that's fairly standard Cocoa practice. However, you do need to make sure that the delegate is retained by another object, and *that* object needs to make sure that the delegate sticks around as long as `NewMemoViewController` needs it to. |
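For completeness, a minimal sketch of how the delegate would typically be wired up in `RootViewController` before presenting the controller (assuming it is presented modally inside a navigation controller; adjust to your actual presentation code):

```
NewMemoViewController *newMemoVC = [[NewMemoViewController alloc] init];
newMemoVC.delegate = self;   // self implements NewMemoDelegate, not a UIView

UINavigationController *nav = [[UINavigationController alloc]
                                  initWithRootViewController:newMemoVC];
[self presentModalViewController:nav animated:YES];

[nav release];
[newMemoVC release];
```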
5828 | To explain my problem more easily I will create the following fictional example, illustrating a very basic many-to-many relationship. A **Car** can have many **Parts**, and a **Part** can belong to many **Cars**.
**DB SCHEMA:**
```
CAR_TABLE
---------
CarId
ModelName
CAR_PARTS_TABLE
---------------
CarId
PartId
PARTS_TABLE
-----------
PartId
PartName
```
**CLASSES:**
```
public class Car
{
public int CarId {get;set;}
public string Name {get;set;}
public IEnumerable<Part> Parts {get;set;}
}
public class Part
{
public int PartId {get;set;}
public string Name {get;set}
}
```
Using this very simple model I would like to get any cars that have all the parts assigned to them from a list of parts I am searching on.
So say I have an array of PartIds:
```
var partIds = new [] { 1, 3, 10};
```
I want to mimic the following c# code in terms of a database call:
```
var allCars = /* code to retrieve all cars */
var results = new List<Car>();
foreach (var car in allCars)
{
var containsAllParts = true;
foreach (var carPart in car.Parts)
{
if (false == partIds.Contains(carPart.PartId))
{
containsAllParts = false;
break;
}
}
if (containsAllParts)
{
results.Add(car);
}
}
return results;
```
To be clear: I want to get the Cars that have ALL of the Parts specified from the partIds array.
I have the following query, which is REALLY inefficient as it creates a subquery for each id within the partIds array and then does an IsIn query on each of their results. I am desperate to find a much more efficient manner to execute this query.
```
Car carAlias = null;
Part partAlias = null;
var searchCriteria = session.QueryOver<Car>(() => carAlias);
foreach (var partId in partIds)
{
var carsWithPartCriteria = QueryOver.Of<Car>(() => carAlias)
.JoinAlias(() => carAlias.Parts, () => partAlias)
.Where(() => partAlias.PartId == partId)
.Select(Projections.Distinct(Projections.Id()));
searchCriteria = searchCriteria
.And(Subqueries.WhereProperty(() => carAlias.Id).In(carsWithPartCriteria));
}
var results = searchCriteria.List<Car>();
```
Is there a decent way to execute this sort of query using NHibernate? | ```
Part partAlias=null;
Session.QueryOver<Car>().JoinQueryOver(x=>x.Parts,()=>partAlias)
.WhereRestrictionOn(()=>partAlias.Id).IsIn(partIds) //partIds should be implement an ICollection
.List<Car>();
```
Hope that helps. |
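For reference, the SQL shape that expresses "has **all** of the listed parts" (relational division) is usually a grouped count over the join table. A sketch using the schema from the question, where the matched-part count must equal the number of ids you are searching for:

```
SELECT cp.CarId
FROM CAR_PARTS_TABLE cp
WHERE cp.PartId IN (1, 3, 10)
GROUP BY cp.CarId
HAVING COUNT(DISTINCT cp.PartId) = 3  -- number of ids in the search list
```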
7116 | ```
public Void traverseQuickestRoute(){ // Void return-type from interface
findShortCutThroughWoods()
.map(WoodsShortCut::getTerrainDifficulty)
.ifPresent(this::walkThroughForestPath) // return in this case
if(isBikePresent()){
return cycleQuickestRoute()
}
....
}
```
Is there a way to exit the method at the `ifPresent`?
In case it is not possible, for other people with similar use-cases: I see two alternatives
```
Optional<MappedRoute> woodsShortCut = findShortCutThroughWoods();
if(woodsShortCut.isPresent()){
TerrainDifficulty terrainDifficulty = woodsShortCut.get().getTerrainDifficulty();
return walkThroughForrestPath(terrainDifficulty);
}
```
This feels more ugly than it needs to be and combines if/else with functional programming.
A chain of `orElseGet(...)` throughout the method does not look as nice, but is also a possibility. | `return` is a control statement. Neither lambdas (arrow notation), nor method refs (`WoodsShortcut::getTerrainDifficulty`) support the idea of control statements that move control to outside of themselves.
Thus, the answer is rather trivial: Nope.
You have to think of the stream 'pipeline' as the thing you're working on. So, the question could be said differently: Can I instead change this code so that I can modify how this one pipeline operation works (everything starting at `findShortCut()` to the semicolon at the end of all the method invokes you do on the stream/optional), and then make this one pipeline operation the whole method.
Thus, the answer is: **`orElseGet` is probably it.**
Disappointing, perhaps. 'functional' does not strike me as the right answer here. The problem is, there are things for/if/while loops can do that 'functional' cannot do. So, if you are faced with a problem that is simpler to tackle using 'a thing that for/if/while is good at but functional is bad at', then it is probably a better plan to just use for/if/while then.
One of the core things lambdas can't do is about the transparencies. Lambdas are non-transparent in regard to these 3:
* Checked exception throwing. `try { list.forEach(x -> { throw new IOException(); }); } catch (IOException e) {}` isn't legal even though your human brain can trivially tell it should be fine.
* (Mutable) local variables. `int x = 5; list.forEach(y -> x += y);` does not work. Often there are ways around this (`list.stream().mapToInt(Integer::intValue).sum()` in this example), but not always.
* Control flow. `list.forEach(y -> {if (y < 0) return y;});` does not work.
So, keep in mind, you really have only 2 options:
* Continually retrain yourself to not think in terms of such control flow. You find `orElseGet` 'not as nice'. I concur, but if you really want to blanket apply functional to as many places as you can possibly apply it, the whole notion of control flow out of a lambda needs not be your go-to plan, and you definitely can't keep thinking 'this code is not particularly nice because it would be simpler if I could control flow out', you're going to be depressed all day programming in this style. The day you never even think about it anymore is the day you have succeeded in retraining yourself to 'think more functional', so to speak.
* Stop thinking that 'functional is always better'. Given that there are so many situations where their downsides are so significant, perhaps it is not a good idea to pre-suppose that the lambda/methodref based solution must somehow be superior. Apply what seems correct. That should often be "Actually just a plain old for loop is fine. Better than fine; it's the right, most elegant1 answer here".
[1] "This code is elegant" is, of course, a non-falsifiable statement. It's like saying "The Mona Lisa is a pretty painting". You can't make a logical argument to prove this and it is insanity to try. "This code is elegant" boils down to saying "*I* think it is prettier", it cannot boil down to an objective fact. That also means in team situations there's no point in debating such things. Either everybody gets to decide what 'elegant' is (hold a poll, maybe?), or you install a dictator that decrees what elegance is. If you want to fix that and have meaningful debate, the term 'elegant' needs to be defined in terms of objective, falsifiable statements. I would posit that things like:
* in face of expectable future change requests, this style is easier to modify
* A casual glance at code leaves a first impression. Whichever style has the property that this first impression is accurate - is better (in other words, code that confuses or misleads the casual glancer is bad). Said even more differently: Code that really needs comments to avoid confusion is worse than code that is self-evident.
* this code looks familiar to a wide array of java programmers
* this code consists of fewer AST nodes (the more accurate form of 'fewer lines = better')
* this code has simpler semantic hierarchy (i.e. fewer indents)
Those are the kinds of things that should *define* 'elegance'. Under almost all of those definitions, 'an `if` statement' is as good or better in this specific case! |
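To make the `orElseGet` route from the question concrete, here is one sketch of the whole method as a single expression. It assumes (as the question's second snippet implies) that `walkThroughForestPath` returns the interface's `Void`, and it wraps the elided rest of the method in a hypothetical `takeRemainingRoute()`. The indirection through `Supplier` is deliberate: a `Void` value is always `null`, and `Optional.map` treats a `null` mapping result as empty, so mapping to a deferred supplier keeps the optional non-empty until the very end:

```
// requires: import java.util.function.Supplier;
public Void traverseQuickestRoute() {
    return findShortCutThroughWoods()
            .map(WoodsShortCut::getTerrainDifficulty)
            .<Supplier<Void>>map(difficulty -> () -> walkThroughForestPath(difficulty))
            .orElseGet(() -> this::takeRemainingRoute)   // hypothetical: the bike check + "...."
            .get();                                      // run whichever branch was chosen
}
```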
7738 | So I have `.cont`, which is centered using absolute positioning and has a height of 80% of the body.
Inside it there are two divs. One has a fixed height, and the other one needs to expand to fill the rest of the parent, `.cont`.
So how do I make it expand to the parent?
One other requirement is that the content in both of these needs to be vertically and horizontally centered.
```
body
.cont
.top
.fillRest
```
Here is jsfiddle: <http://jsfiddle.net/24jocwu5/>
1. make .fillRest Expand to rest of .cont.
2. vertically and Horizontally center h1 headings in both divs.
Don't Use calc()
can use display table, flow, position, and other tricks.
![enter image description here](https://i.stack.imgur.com/k68V6.png) | Here you go. Absolutely position the white container with a top-padding that equals the height of your fixed-height top div. Then give the top div a z-index so it goes over your white box:
Fiddle - <http://jsfiddle.net/24jocwu5/2/>
```
* {margin: 0; padding: 0;}
html, body {
height: 100%;
background-color: #3dd;
color: #aaa;
font-family: helvetica;
}
.cont {
position: absolute;
top: 0; bottom: 0;
right: 0; left: 0;
background-color: #1af;
width: 400px;
margin: auto;
height: 80%;
}
.top {
height: 100px;
background-color: pink;
position: relative;
z-index: 1;
}
.fillRest {
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
padding-top: 100px;
height: 100%;
background-color: #fff;
-webkit-box-sizing: border-box;
-moz-box-sizing: border-box;
box-sizing: border-box;
}
h1 {
text-align: center;
width: 200px;
margin: auto;
background-color: #eee;
}
``` |
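For what it's worth, the same layout can also be expressed with flexbox (a sketch, not the approach above; worth checking browser support for your targets): let `.cont` stack its children, give `.top` a fixed height, and let `.fillRest` take the remaining space, with each child centering its own heading.

```
.cont {
  display: flex;
  flex-direction: column;
}
.top {
  flex: 0 0 100px;          /* fixed-height part */
  display: flex;            /* center the h1 */
  align-items: center;
  justify-content: center;
}
.fillRest {
  flex: 1 1 auto;           /* fill whatever height is left */
  display: flex;
  align-items: center;
  justify-content: center;
}
```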
8181 | I'm trying to translate a simple button hover example to ES6 (I'm using Babel) and keep failing. I guess that my bind is wrong somehow, but I'm new to JavaScript and don't completely understand the:
```
constructor(props) {
    super(props);
```
I mean I get that it's like super in python, but why the weird syntax with props passed as parameter?
```
/* non emca6 */
import React from 'react'
var HoverButton = React.createClass({
getInitialState: function () {
return {hover: false};
},
mouseOver: function () {
this.setState({hover: true});
},
mouseOut: function () {
this.setState({hover: false});
},
render: function() {
var label = "foo";
if (this.state.hover) {
label = "bar";
}
return React.createElement(
"button",
{onMouseOver: this.mouseOver, onMouseOut: this.mouseOut},
label
);
}
});
React.render(React.createElement(HoverButton, null), document.body);
export default HoverButton;
/* emca6 */
import React from 'react';
class HoverButton extends React.Component {
constructor(props) {
super(props);
this.state = ({hover: false});
this.mouseOver = this.mouseOver.bind(this);
this.mouseOut = this.mouseOut.bind(this);
}
mouseOver(){
this.setState = ({hover: true});
}
mouseOut(){
this.setState = ({hover: false});
}
render() {
var label = "idle";
if (this.state.hover) {
label = "active";
}
return React.createElement(
"button",
{onMouseEnter: this.mouseOver, onMouseOut: this.mouseOut},
label,
);
}
}
export default HoverButton;
``` | Josh,
Sorry your experience had been frustrating. I'm not at a computer at the moment but wanted to see if I could provide some help and will try to repro your scenario as soon as I can.
* All types that compose your function (either in a single or multiple .csx files) are compiled into a single assembly. So the error coming from EF is indeed puzzling.
* App.config is not supported in functions.
One approach you can try in the meantime (until we can find what is causing the issue you're running into), is to deploy your EF types and POCO as a separate assembly that you can then reference from your function. To do so, just copy that assembly into a bin folder under your function folder and add `#r "YourAssemblyName.dll"` to the top of your function file. Hopefully this will unblock you.
I'll post an update when I'm able to repro your scenario and have more information. |
8369 | I have a twist on a common question I've seen in here, and I'm puzzled.
What I need is simply a dialog box for each sub item of a list item. I have seen a dialog for a list item, but I need it down to the list item's item. Currently I've tried doing that within the adapter when inside the getView() method.
For example:
```
@Override
public View getView(int position, View convertView, ViewGroup parent)
{
View v = convertView;
if (v == null)
{
LayoutInflater li = (LayoutInflater) _context.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
v = li.inflate(_resourceId, null);
}
        String descriptionText = "howdy Test";
        TextView description = (TextView) v.findViewById(R.id.description);
        description.setText(descriptionText);
description.setOnClickListener(new View.OnClickListener()
{
public void onClick(View view)
{
AlertDialog.Builder dia = new AlertDialog.Builder(view.getContext());
dia.setTitle(view.getContext().getResources().getString(R.string.DESCRIPTION_TITLE));
dia.create();
}
});
}
```
With that example above, it does go into the onClick() method, but nothing happens with the AlertDialog. Has anyone else tried this? is there a better way? Even better what am I doing wrong?
Thanks,
Kelly | You have to call the `show()` method on your **dia** object.[Link here to the android docs!](http://developer.android.com/reference/android/app/AlertDialog.Builder.html#show%28%29) |
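In other words, the end of the click listener would look something like this (the builder's `show()` both creates and displays the dialog, so the separate `create()` call isn't needed):

```
public void onClick(View view)
{
    AlertDialog.Builder dia = new AlertDialog.Builder(view.getContext());
    dia.setTitle(view.getContext().getResources().getString(R.string.DESCRIPTION_TITLE));
    dia.show();   // creates AND shows the dialog
}
```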
8880 | I am relatively new to batch scripting particularly in a windows environment. I would like to be able to gather the HDD information about a specific machine through the following command:
```
wmic idecontroller
```
However when I run that command, the output that I recieve looks like this:
```
Availability Caption ConfigManagerErrorCode ConfigManagerUserConfig CreationClassName Description DeviceID ErrorCleared ErrorDescription InstallDate LastErrorCode Manufacturer MaxNumberControlled Name PNPDeviceID PowerManagementCapabilities PowerManagementSupported ProtocolSupported Status StatusInfo SystemCreationClassName SystemName TimeOfLastReset
ATA Channel 0 0 FALSE Win32_IDEController IDE Channel PCIIDE\IDECHANNEL\4&160FD31B&0&0 (Standard IDE ATA/ATAPI controllers) ATA Channel 0 PCIIDE\IDECHANNEL\4&160FD31B&0&0 37 OK Win32_ComputerSystem TEST
ATA Channel 3 0 FALSE Win32_IDEController IDE Channel PCIIDE\IDECHANNEL\4&160FD31B&0&3 (Standard IDE ATA/ATAPI controllers) ATA Channel 3 PCIIDE\IDECHANNEL\4&160FD31B&0&3 37 OK Win32_ComputerSystem TEST
ATA Channel 4 0 FALSE Win32_IDEController IDE Channel PCIIDE\IDECHANNEL\4&160FD31B&0&4 (Standard IDE ATA/ATAPI controllers) ATA Channel 4 PCIIDE\IDECHANNEL\4&160FD31B&0&4 37 OK Win32_ComputerSystem TEST
ATA Channel 5 0 FALSE Win32_IDEController IDE Channel PCIIDE\IDECHANNEL\4&160FD31B&0&5 (Standard IDE ATA/ATAPI controllers) ATA Channel 5 PCIIDE\IDECHANNEL\4&160FD31B&0&5 37 OK Win32_ComputerSystem TEST
Intel(R) 6 Series/C200 Series Chipset Family 6 Port SATA AHCI Controller - 1C03 0 FALSE Win32_IDEController Intel(R) 6 Series/C200 Series Chipset Family 6 Port SATA AHCI Controller - 1C03 PCI\VEN_8086&DEV_1C03&SUBSYS_04A31028&REV_04\3&11583659&0&FA Intel Intel(R) 6 Series/C200 Series Chipset Family 6 Port SATA AHCI Controller - 1C03 PCI\VEN_8086&DEV_1C03&SUBSYS_04A31028&REV_04\3&11583659&0&FA 37 OK Win32_ComputerSystem TEST
```
If I wanted to only gather information from a specific column, and store each of those strings into a variable, what would be the best method? For example, if I wanted to store all of the fields under "Description" to an array of strings! | Here you go. Batch doesn't have arrays per se, but you can duplicate an array like this:
```
@echo off
setlocal enabledelayedexpansion
set cnt=0
for /f "tokens=2 delims==" %%a in ('wmic idecontroller get description /value^| Find "="') do (
set /a cnt+=1
set Ide[!cnt!]=%%a
)
for /L %%a in (1,1,%cnt%) do echo !Ide[%%a]!
``` |
9658 | I want to add an **OR** condition in the JSON query of Cube.js. But once I add one more condition to the filter, it always adds an **AND** condition in the SQL query.
Below is the JSON query that I'm trying.
```
{
"dimensions": [
"Employee.name",
"Employee.company"
],
"timeDimensions": [],
"measures": [],
"filters": [
{
"dimension": "Employee.company",
"operator": "contains",
"values": [
"soft"
]
},
{
"dimension": "Employee.name",
"operator": "contains",
"values": [
"soft"
]
}
]
}
```
It generates below SQL query.
```sql
SELECT
`employee`.name `employee__name`,
`employee`.company `employee__company`
FROM
DEMO.Employee AS `employee`
WHERE
`employee`.company LIKE CONCAT('%', 'soft', '%')
AND
`employee`.name LIKE CONCAT('%', 'soft', '%')
GROUP BY
1,
2;
```
What is the JSON query for Cube.js if I want to generate below SQL
```sql
SELECT
`employee`.name `employee__name`,
`employee`.company `employee__company`
FROM
DEMO.Employee AS `employee`
WHERE
`employee`.company LIKE CONCAT('%', 'soft', '%')
OR
`employee`.name LIKE CONCAT('%', 'soft', '%')
GROUP BY
1,
2;
``` | API support for logical operators isn't shipped yet. Meanwhile there're several workarounds:
1. Define dimension that mimics **OR** behavior. In your case it's
```js
cube(`Employee`, {
// ...
dimensions: {
companyAndName: {
sql: `CONCAT(${company}, ' ', ${name})`,
type: `string`
}
}
});
```
2. Define segments. Those can be also generated: <https://cube.dev/docs/schema-generation>
```js
cube(`Employee`, {
// ...
segments: {
soft: {
sql: `${company} LIKE CONCAT('%', 'soft', '%') OR ${name} LIKE CONCAT('%', 'soft', '%')`
}
}
});
``` |
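With the first workaround in place, the original JSON query collapses to a single filter on the combined dimension, for example:

```js
{
  "dimensions": ["Employee.name", "Employee.company"],
  "filters": [
    {
      "dimension": "Employee.companyAndName",
      "operator": "contains",
      "values": ["soft"]
    }
  ]
}
```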
9810 | Anyone point out the issue?
Keep getting "The specified type member 'Date' is not supported in LINQ to Entities. Only initializers, entity members, and entity navigation properties are supported."
```
public IEnumerable<Appointment> FindAllAppointmentsWithReminders()
{
DateTime reminderDate = DateTime.Today.Date;
IEnumerable<Appointment> apps = RepositorySet
.OfType<Appointment>()
.Include("Client")
.Where(c => EntityFunctions.TruncateTime(c.Client.Reminder.Date) == reminderDate.Date
&& reminderDate.Date > EntityFunctions.TruncateTime(c.StartTime.Date));
return apps;
}
``` | Remove all the `.Date` from your method but this:
```
DateTime reminderDate = DateTime.Today.Date;
```
Entity Framework doesn't support the `.Date` property of `DateTime`. For this reason there is the pseudo-function `EntityFunctions.TruncateTime`, and for `reminderDate` you already removed the time with `DateTime reminderDate = DateTime.Today.Date`.
```
public IEnumerable<Appointment> FindAllAppointmentsWithReminders()
{
DateTime reminderDate = DateTime.Today.Date;
IEnumerable<Appointment> apps = RepositorySet
.OfType<Appointment>()
.Include("Client")
.Where(c => EntityFunctions.TruncateTime(c.Client.Reminder) == reminderDate
&& reminderDate > EntityFunctions.TruncateTime(c.StartTime));
return apps;
}
``` |
10924 | I would like to get the value of a hidden field if a checkbox is checked in my GridView.
my gridview:
```
<asp:GridView ID="gv_enfant" runat="server" AutoGenerateColumns="False" BackColor="White"
BorderColor="#CCCCCC" BorderStyle="None" BorderWidth="1px" CellPadding="3" DataSourceID="SqlDataSource1"
Width="533px">
<Columns>
<asp:TemplateField HeaderText="Select">
<ItemTemplate>
<asp:CheckBox ID="CheckBoxenfant" runat="server" />
<asp:HiddenField ID="codeenfant" runat="server" Value='<%# Eval("codeEnfants") %>' />
</ItemTemplate>
..............
</asp:GridView>
```
And here is how I loop through the rows and check:
```
string myid = string.Empty;
for (int i = 0; i < gv_enfant.Rows.Count; i++)
{
CheckBox chbox = (CheckBox)gv_enfant.Rows[i].Cells[0].FindControl("CheckBoxenfant");
if (chbox.Checked)
{
myid = ((HiddenField)gv_enfant.Rows[i].Cells[0].FindControl("codeenfant")).Value;
}
}
```
I put a breakpoint on the condition; the debugger never hits that line. | Late answer, but if it still helps (or for anyone else) you should be able to do the following with SQL Server 2008.
```
DECLARE @point GEOMETRY = GEOMETRY::STPointFromText('POINT(0 0)', 0);
DECLARE @line GEOMETRY = GEOMETRY::STLineFromText('LINESTRING(10 10, 20 20)', 0);
SELECT @line.STIntersection(@point.STBuffer(@point.STDistance(@line)));
```
Essentially, you calculate the distance between the two geometries,use that as a buffer on the point which should result in the geometries touching, and take the intersection (point) of that. |
11052 | I am currently using apTreeshape to simulate phylogenetic trees using the "Yule-Hardy" Method. What I want to do is randomly generate between 20 and 25 different numbers for three different groupings (small, medium and large trees) and then generate about 40 trees for every random number chosen from within the grouping.
I know how I would do this in Python or Matlab, but in R things seem to behave a bit differently.
My thought was that if I were to generate a vector full of random numbers (one for each size grouping) and then use that to generate a vector which would basically contain all of the repeated values of each random number.
Here is what I have:
```
sm_leaves<-c(sample(3:50,25,replace=F));
s_leafy<-numeric();
for (i in 1:length(sm_leaves)) {
for (j in 1:10) {
s_leafy[j+i-1]=sm_leaves[i];
}
}
```
This is giving me output like:
```
> s_leafy
[1] 5 38 6 22 29 20 19 46 9 18 39 50 34 11 43 7 8 32 10 42 14 37
[23] 23 13 28 28 28 28 28 28 28 28 28 28
```
But What I want is something more like:
```
> s_leafy
[1] 5 5 5 5 5 5 5 5 5 5 38 38 38 38 38 38 38 38 38 ... 28 28 28 28 28 28 28 28 28 28
```
My reason for doing this is merely so that I can append this vector to a data frame along with all of the randomly generated trees - I need 2000 of them, so doing this by hand ain't quite practical.
All I have really been able to deduce from my previous attempts to solve this problem is that generally speaking while loops should be used instead of for loops, and many people have talked about using expand.grid, but I don't think that the latter is particularly useful in this case.
Thanks for reading, I hope my problem isn't too trivial (although I wouldn't be surprised if it were). | Apologies if I don't quite understand the question, but what about:
```
sm_leaves <- sample(3:50, 25, replace=FALSE)
s_leafy <- rep(sm_leaves, each=10)
``` |
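The same pattern extends to the other two size groupings and to 40 trees per chosen size. The medium/large ranges below are only placeholders for whatever "medium" and "large" mean in your study:

```
sm_leaves <- sample(3:50, 25, replace=FALSE)
md_leaves <- sample(51:200, 25, replace=FALSE)    # placeholder range
lg_leaves <- sample(201:500, 25, replace=FALSE)   # placeholder range

s_leafy <- rep(sm_leaves, each=40)   # 40 trees for every chosen size
m_leafy <- rep(md_leaves, each=40)
l_leafy <- rep(lg_leaves, each=40)
```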
11230 | The following query returns
```
select to_char( trunc(sysdate) - numtoyminterval(level - 1, 'month'), 'mon-yy') as month from dual connect by level <= 12
```
last 12 months according to today's date(i.e. 2-Jan-18).
[![enter image description here](https://i.stack.imgur.com/Sd9ZQ.png)](https://i.stack.imgur.com/Sd9ZQ.png)
Say if today's date is 29-DEC-17 it gives oracle sql error:
**ORA-01839: date not valid for month specified**
(since on subtracting there would be a date in the result as **'29-FEB-17'** which is not possible). So on specific dates this error would pop-up. How do you suggest to overcome this? | `char` is `signed` on your platform.
If you use `unsigned char` for your types for `c2` and `c1` then the implicit promotion to `int` for each term in your expression will have the effect you are after. |
11310 | Background:
* I have a short list of strings.
* The number of strings is not always the same, but are nearly always of the order of a “handful”
* In our database will store these strings in a 2nd normalised table
* These strings are **never** changed once they are written to the database.
We wish to be able to match on these strings quickly in a query without the performance hit of doing lots of joins.
So I am thinking of storing a hash code of all these strings in the main table and including it in our index, so the joins are only processed by the database when the hash code matches.
So how do I get a good hashcode? I could:
* Xor the hash codes of all the string together
* Xor with multiply the result after each string (say by 31)
* Cat all the string together then get the hashcode
* Some other way
So what do people think?
---
In the end I just concatenate the strings and compute the hashcode for the concatenation, as it is simple and worked well enough.
*(If you care we are using .NET and SqlServer)*
---
**Bug!, Bug!**
[Quoting from Guidelines and rules for GetHashCode](http://ericlippert.com/2011/02/28/guidelines-and-rules-for-gethashcode/) by Eric Lippert
>
> The documentation for
> System.String.GetHashCode notes
> specifically that two identical
> strings can have different hash codes
> in different versions of the CLR, and
> in fact they do. Don't store string
> hashes in databases and expect them to
> be the same forever, because they
> won't be.
>
>
>
So String.GetHashcode() should not be used for this. | A SQL-based solution could be based on the checksum and checksum\_agg functions. If I'm following it right, you have something like:
```
MyTable
MyTableId
HashCode
MyChildTable
MyTableId (foreign key into MyTable)
String
```
with the various strings for a given item (MyTableId) stored in MyChildTable. To calculate and store a checksum reflecting these (never-to-be-changed) strings, something like this should work:
```
UPDATE MyTable
set HashCode = checksum_agg(checksum(string))
from MyTable mt
inner join MyChildTable ct
on ct.MyTableId = mt.MyTableId
where mt.MyTableId = @OnlyForThisOne
```
I believe this is order-independent, so strings "The quick brown" would produce the same checksum as "brown The quick". |
11363 | So I found this effect and I'm trying to modify it to be loaded inside a DIV `myeffect`, for example:
```
<html>
<head></head>
<body>
<div class="myeffect"></div>
</body>
</html>
```
I tried changing some variables but I'm not a JavaScript expert and I can't get it to work inside a DIV. The effect covers the whole screen from top to bottom.
The code is on Codepen and can be found here: <https://codepen.io/emilykarp/pen/bVqxRm>
Help is welcome. | Hope this helps
```js
var speeds = [];
var count = 1;
var colors = ['#bf1e2e', '#ee4037', '#dc5323', '#e1861b', '#e1921e', '#f7ac40', '#f7e930', '#d1da22', '#8bc43f', '#38b349', '#008d42', '#006738', '#29b473', '#00a69c', '#26a9e1', '#1a75bb', '#2a388f', '#262161', '#652d90', '#8e2792', '#9e1f64', '#d91c5c', '#ed297b', '#d91c5c', '#db1e5e', '#bf1e2e', '#f6931e', '#f05a28', '#f6931e', '#fbaf41']
var width = parseInt($('html').css('width'), 10);
var random = function(mult, add) {
return Math.floor((Math.random()*mult) + add);
};
var drop = function(n, height, color) {
$('.myeffect').append('<div class="drop" style="left:'+
n*15+'px;height:'+
height+'vh;background-color:'+
color+';"></div>');
};
var createDrops = function(space) {
for (var i=speeds.length; i < space/10; i++) {
speeds.push(random(3000, 2000));
drop(i, random(70, 30), colors[count]);
if (count < colors.length-1) { count++; }
else { count = 0; }
}
};
var animateDrops = function(startingN) {
for (var i=startingN; i<speeds.length; i++) {
$('.drop:nth-child('+i+')').slideDown(speeds[i]);
}
};
createDrops(width);
animateDrops(0);
```
```css
.drop {
width: 16px;
height: 200px;
display: none;
border-bottom-left-radius: 10px;
border-bottom-right-radius: 10px;
position: absolute;
top: 0;
-webkit-box-shadow: inset -4px -8px 16px -6px rgba(0,0,0,0.47);
-moz-box-shadow: inset -4px -8px 16px -6px rgba(0,0,0,0.47);
box-shadow: inset -4px -8px 16px -6px rgba(0,0,0,0.47);
}
```
```html
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div class="myeffect" ></div>
``` |
11423 | I created the AKS cluster with an Azure service principal ID and granted it the Contributor role on the subscription and resource group.
Every time I execute the pipeline it asks me to sign in, and only after I authenticate does it get the data.
Also the "kubectl get" task is taking more than 30 min and is getting "Kubectl Server Version: Could not find kubectl server version"
*To sign in, use a web browser to open the page <https://microsoft.com/devicelogin> and enter the code CRA2XssWEXUUA to authenticate*
Thanks in advance
[![enter image description here](https://i.stack.imgur.com/wxXk9.png)](https://i.stack.imgur.com/wxXk9.png) | The easiest solution may be to flatten the different split characters to a single one:
```py
with open("example.dat", "r") as fh:
lines = []
for line in fh:
lines.append( line.strip().replace("[", ",").replace("]", ",").split(",") )
``` |
12222 | I am developing a taskmanager on Android 2.1.
I want to reset date and time on clicking the reset button to current date and time.
Help me with the code..
```
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.main);
final EditText next = (EditText) findViewById(R.id.editText1);
final Button sub = (Button) findViewById(R.id.button1);
final Button res = (Button) findViewById(R.id.button2);
final DatePicker dp= (DatePicker) findViewById(R.id.datePicker1);
final TimePicker tp = (TimePicker) findViewById(R.id.timePicker1);
res.setOnClickListener(new View.OnClickListener() {
public void onClick(final View view) {
next.setText("");
dp.refreshDrawableState();
}
});
}}
``` | ```
//get current time
Time now = new Time();
now.setToNow();
//update the TimePicker
tp.setHour(now.hour);
tp.setMinute(now.minute);
//update the DatePicker
dp.updateDate(now.year, now.month, now.monthDay);
``` |
12227 | Is it possible to validate a field only when it exists?
I'm going to create an application. A field (email) may or may not be displayed on step 2, depending on step 1's selected option. The problem is that I can't simply put `email: Yup.string().required()`, because that makes the validation run all the time, even when the field is not displayed. | I saw another related answer here, maybe this is a valid solution too?
<https://github.com/jquense/yup/issues/1114>
Copying code from there:
```
const schema = yup.object({
phone: yup.string().when('$exist', {
is: exist => exist,
then: yup.string().required(),
otherwise: yup.string()
})
})
``` |
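Note that `$exist` in that snippet is read from the validation *context*, so the flag has to be passed in when validating. A sketch, where `showEmail` is whatever flag your step-1 selection produces:

```
await schema.validate(values, { context: { exist: showEmail } });
```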
12340 | I have a mysql query.
The idea is to select the records between a date range. The dates are stored as unix timestamps. With the query below, I end up with far more records than I should (out of the date range).
I have racked my brain and I can't see where the query is going wrong. The other fields look correct; it's just that I am out of the desired date range.
```
SELECT
mdl_user_info_data.data,
mdl_user.firstname,
mdl_user.lastname,
mdl_grade_grades.itemid,
mdl_grade_items.itemname,
mdl_quiz.fcpd,
mdl_user_info_data.id,
mdl_grade_grades.timecreated AS DATE
FROM mdl_grade_grades
INNER JOIN mdl_user ON mdl_grade_grades.userid = mdl_user.id
INNER JOIN mdl_grade_items ON mdl_grade_grades.itemid = mdl_grade_items.id
INNER JOIN mdl_quiz ON mdl_grade_items.courseid = mdl_quiz.course
INNER JOIN mdl_user_info_data ON mdl_user.id = mdl_user_info_data.userid
INNER JOIN mdl_course ON mdl_grade_items.courseid = mdl_course.id
WHERE mdl_grade_grades.timecreated BETWEEN (FROM_UNIXTIME(1371704400) AND FROM_UNIXTIME(1371790800))
AND mdl_user_info_data.fieldid = 1
AND mdl_grade_items.itemname IS NOT NULL
AND mdl_course.category = 27
OR mdl_grade_items.itemname LIKE '%asa%'
GROUP BY mdl_user.firstname, mdl_user.lastname, mdl_grade_grades.timecreated
``` | AFAIK, the svc is useless if you are not working with IIS. Outside of IIS, the so called self hosted approach, needs you to write a something like this, where HelloWorldWcfServiceMessage is your type implementing the service contract. Additionaly, don't forget to configure an endpoint for the server and to make sure you are allowed to open a service on the configured port. The following code you can use in windows service or in a console program (better for testing and debugging). Hope that helps and I got your question right.
```
...
this.serviceHost = new ServiceHost(typeof(HelloWorldWcfServiceMessage));
this.serviceHost.Open();
...
public class HelloWorldWcfServiceMessage : IHelloWorldWcfServiceMessage
{
}
[ServiceContract(Namespace = "http://HelloWorldServiceNamespace", Name = "PublicHelloWorldWCFService")]
public interface IHelloWorldWcfServiceMessage
{
[OperationContract]
string HelloWorldMessage(string name);
}
``` |
12377 | Every time I try to run the create.sql file I have, it says there is a syntax error near "buy\_price". The rest of the tables work fine, though.
```
create table Item (itemID string PRIMARY KEY,
name string,
currently string,
buy_price string,
first_bid string,
started string,
ends string,
userID string references User,
description string,
constraint ch_buy_price check buy_price >= first_bid,
constraint );
```
Any help would be appreciated. | The check constraint needs to be enclosed in parentheses:
```
create table item
(
itemid string primary key,
name string,
currently_ string,
buy_price string,
first_bid string,
started string,
ends string,
userid string references user,
description string,
constraint chk_buy_price check (buy_price >= first_bid) --<< here
);
```
You also have an additional `constraint` at the end which needs to be removed. |
12446 | I'm studying for my AIRAT (Instructor Written) in Canada using Nizus, but I've always had trouble answering the questions about takeoff distances at 27 ºC when given [a chart](https://i.imgur.com/9HnYWR3.png) with 20 ºC and 30 ºC, even since the start of PPL. I want to understand it properly so I can effectively teach my students who have troubles with it.
The question is a two-parter, first calculating a takeoff distance with the following values:
| Parameter | Value |
| --- | --- |
| Airport Temp | 30 ºC |
| Airport Elevation | 3000' AMSL |
| Altimeter | 30.92 inHg |
| Wind | 10 kt tailwind |
| Flaps | 10º |
| Runway | Dry grass |
With a pressure altitude of 2089', the nearest values are the 2000' and 3000'. I used the 2000' line of the chart.
At 2000' at 20 ºC the ground roll is 1080' and the total to clear a 50' obstacle is 1895'.
At 30 ºC it's 1155' and 2030'.
The chart says it's configured as follows: 2300 lbs, flaps 10, full power prior to brake release on a paved level dry runway with no wind. The notes state:
*Headwind subtract 10% per 9kts*
*Tailwind add 10% every 2kts up to 10kts*
*Dry, Grass Runway or Gravel add 15% to ground roll.*
At 30 ºC I calculate my takeoff distance being **3305'**.
---
The second part asks by how much the takeoff distance will decrease if the temperature drops to 27 ºC.
I'm doing something wrong here as I calculate it to be 3250', with a distance change of 55', but that isn't an available answer.
How do I get the proper numbers to use at 27 ºC? | I am getting a takeoff distance of 3,502' for a takeoff over 50 FT on a **grass strip** with a **10-kt tailwind**.
>
> 2,030' \* 1.5 \* 1.15 = 3,502'
>
>
>
To interpolate you will need to figure out the percentage between the two temperatures.
>
> (27-20) / (30-20) = 70%
>
>
>
We take the difference between the chart values for 20 and 30 degrees
>
> 3,502' - 3,267' = 235'
>
>
>
Multiply this value by the percentage
>
> 235 \* .7 = 164.5'
>
>
>
Since we subtracted the 27 from the lower value we will add in this number ot the lower chart value
>
> 3,267 + 164.5 = 3,432
>
>
>
This is a difference of 70 feet. |
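Written as a general linear interpolation between the two charted temperatures, the same computation is:
$$
d(27) = d(20) + \frac{27-20}{30-20}\bigl(d(30)-d(20)\bigr) = 3267 + 0.7\,(3502-3267) \approx 3432\ \text{ft}
$$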
12768 | I am studying python, and although I think I get the whole concept and notion of Python, today I stumbled upon a piece of code that I did not fully understand:
Say I have a class that is supposed to define Circles but lacks a body:
```
class Circle():
pass
```
Since I have not defined any attributes, how can I do this:
```
my_circle = Circle()
my_circle.radius = 12
```
The weird part is that Python accepts the above statement. I don't understand why Python doesn't raise an `undefined name error`. I do understand that via *dynamic typing* I just bind variables to objects whenever I want, but shouldn't an attribute `radius` exist in the `Circle` class to allow me to do this?
**EDIT**: Lots of wonderful information in your answers! **Thank you everyone for all those fantastic answers!** It's a pity I only get to mark one as an answer. | It creates a `radius` data member of `my_circle`.
If you had asked it for `my_circle.radius` it would have thrown an exception:
```
>>> print my_circle.radius # AttributeError
```
Interestingly, this does not change the class; just that one instance. So:
```
>>> my_circle = Circle()
>>> my_circle.radius = 5
>>> my_other_circle = Circle()
>>> print my_other_circle.radius # AttributeError
``` |
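You can also see where the new attribute actually lives: it goes into the instance's `__dict__`, not into the class:

```
>>> my_circle = Circle()
>>> my_circle.radius = 5
>>> my_circle.__dict__
{'radius': 5}
>>> 'radius' in Circle.__dict__
False
```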
13435 | I am using Laravel 5.6
I have 2 tables. User Table and Role Table in migration folder. Also installed Laravel Passport
When I ran this command `php artisan migrate`, I saw the auth tables created first and then role table and user table.
Can I run role and user table first because I want to put reference constraints in auth table? | I don't know exactly what Laravel Passport does, but in general the migration classes are run in alphabetical order. Given that they are prefixed with the generation timestamp, it should be enough to rename the role migration so its timestamp sorts before the user migration.
When you do this, don't forget to regenerate the autoload files. |
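For example, with hypothetical file names, renaming the roles migration so its timestamp sorts before the users one (and then regenerating the autoloader) is all that's needed:

```
database/migrations/2018_05_01_000001_create_roles_table.php
database/migrations/2018_05_01_000002_create_users_table.php

composer dump-autoload
```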
13597 | In my database I have a column called `IsStaff` whose value is a bit. So a staff member in a company either had an illness (1) or had no illness (0). How would I write a SQL query that counts all the 1's and 0's between specific dates and represents the result in a jQuery table? This is what I have done:
```
public List<Staff> Method(string Date1, string Date2)
{
DateTime d = Convert.ToDateTime(Date1);
string date1 = d.ToLongDateString();
DateTime dd = Convert.ToDateTime(Date2);
string date2 = dd.ToLongDateString();
List<Staff> LBD = new List<Staff>();
SqlConnection conn = new SqlConnection etc...
SqlCommand command = new SqlCommand(@"SELECT * From TableName
WHERE Cast([Time] AS DATE) > @Time
AND CAST([Time] AS DATE) < @Time2
ORDER BY Time Desc", conn);
command.Parameters.AddWithValue("@Time", date1);
command.Parameters.AddWithValue("@Time2", date2);
conn.Open();
SqlDatadata data = command.Executedata();
while (data.Read())
{
Staff l = new Staff();
l.IsStaff = data["IsStaff"].ToString();
l.Name = data["Name"].ToString();
........
LBD.Add(l);
}
conn.Close();
return LBD;
}
```
>
> I can successfully get the data between two dates, but how do I get the total number of times a specific staff member has been ill?
>
>
>
```
function Table(data) {
var table = '<table><tr><th>Name</th><th>Sum of ill staff</th><th>sum of none ill staff</th>';
var rowID = 0;
for (var staff in data) {
var row = '<tr class=\'staff-row\'id=\'' + data[student].StaffID + '\'</tr>';
row += '<td>' + data[staff].Name+ '</td>';
row += '<td>' + data[staff].IsStaff + '</td>';
row += '<td>' + data[staff].IsStaff + '</td>'
rowID++;
table += row;
}
table += '</table>';
$('#displayTable').html(table);
}
```
This is my dynamically generated table: the first column is 'Name', which displays all the staff; the second column is 'Sum of ill staff', which should show how often a given staff member was ill in the specified date range; and the final column is 'Sum of not ill staff', which should show how often they were not ill.
**Q1** - what would be my sql query for counting a number of staff?
**Q2** - how do i add all 1's up and display it on my table? | Why don´t you compute the values in the SQL?
```
SqlCommand command = new SqlCommand(@"SELECT StaffID, Name, sum(IsStaff),
sum(case when IsStaff = 1 then 0 else 1 end)
From TableName
WHERE Cast([Time] AS DATE) > @Time
AND CAST([Time] AS DATE) < @Time2
GROUP BY StaffID, Name
ORDER BY Time Desc", conn);
```
or use Linq to get the values computed from the list of Staff. |
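As a rough end-to-end sketch of that idea (note that a `bit` column has to be cast before it can be summed in SQL Server; the column aliases and the int properties on `Staff` below are assumptions, not existing members):

```
SqlCommand command = new SqlCommand(@"SELECT Name,
        SUM(CAST(IsStaff AS INT)) AS IllCount,
        SUM(CASE WHEN IsStaff = 1 THEN 0 ELSE 1 END) AS NotIllCount
    FROM TableName
    WHERE CAST([Time] AS DATE) > @Time AND CAST([Time] AS DATE) < @Time2
    GROUP BY Name", conn);

// later, when reading:
while (data.Read())
{
    Staff l = new Staff();
    l.Name = data["Name"].ToString();
    l.IllCount = Convert.ToInt32(data["IllCount"]);        // assumed int property
    l.NotIllCount = Convert.ToInt32(data["NotIllCount"]);  // assumed int property
    LBD.Add(l);
}
```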
13730 | I created a public method in a controller which does some work.
```
def current_service
service_name = params[:controller].gsub('api/v1/', '').gsub(%r{/.+}, '')
end
```
I would like to test this method using RSpec, but I don't know how I can stub params. How should I do that? | If this is a controller spec, you should be able to do something like this:
```
allow(controller).to receive(:params).and_return({controller: 'a value'})
```
Alternatively, move the `params[:controller]` statement to a separate method and stub that in your spec. |
13872 | I am trying to connect to Salesforce using node js / jsforce library and use promises. Unfortunately one of the methods is executing prior to getting connection.
I have method A: makeConnection, which returns the connection
I have method B: which loads data from Salesforce based on the connection reference from method A
I have method C: which gets dependencies from Salesforce based on the connection from method A
I would like the following order to be executed A ==> B ==> C
Unfortunately C seems to run first followed by A and B so the connection is null and it fails
roughly this is the code
```
let jsforce = require("jsforce");
const sfdcSoup = require("sfdc-soup");
const fs = require("fs");
let _ = require("lodash");
let trgarr = [];
let clsarr = [];
let entityarr = [];
function makeConnection() {
return new Promise((resolve,reject) => {
const conn = new jsforce.Connection({
loginUrl: "https://test.salesforce.com",
instanceUrl: "salesforce.com",
serverUrl: "xxx",
version: "50.0"
});
conn.login(username, password, function (err, userInfo) {
if (err) {
return console.error(err);
}
// console.log(conn.accessToken);
//console.log(conn.instanceUrl);
//console.log("User ID: " + userInfo.id);
//console.log("Org ID: " + userInfo.organizationId);
console.log("logged in");
});
resolve(conn);
});
}
function loadClasses(conn) {
return new Promise((resolve,reject) => {
const querystr =
"select apiVersion,name,body from apexClass where NamespacePrefix = null";
let query = conn
.query(querystr)
.on("record", function (rec) {
clsarr.push(rec);
})
.on("end", function () {
console.log("number of class is " + clsarr.length);
console.log("loaded all classes");
});
resolve(conn,clsarr);
});
}
async function getDependencies(conn) {
return new Promise((resolve,reject) => {
let entryPoint = {
name: "xxx",
type: "CustomField",
id: yyy
};
let connection = {
token: conn.accessToken,
url: "abc.com",
apiVersion: "50.0"
};
let usageApi = sfdcSoup.usageApi(connection, entryPoint);
usageApi.getUsage().then((response) => {
console.log(response.stats);
console.log(response.csv);
});
});
}
async function run() {
makeConnection().then(conn => loadClasses(conn)).then(conn=>getDependencies(conn));
}
run();
```
I keep getting an error that says **UnhandledPromiseRejectionWarning: Error: Access token and URL are required on the connection object**
The reason is connection needs to be obtained from method A and sent to Method C , which is not happening. Can you please guide where i might be wrong?
Also why is method C getting executed before A and B. **why does my promise chaining not work as promised**?
I am running the code in Vscode and using Node 14 | Here is your fix :
```
<div>
<label class="radio" v-for="singleGender in genders">
<input type="radio" v-model="gender" v-bind:value="singleGender.code">
{{singleGender.description}}
</label>
</div>
<div>{{gender}}</div>
```
And here is your data :
```
data: {
gender: "M",
genders: [
{
code: "F",
description: "Female"
},
{
code: "M",
description: "Male"
}
]
}
```
There is no need to use click event for store it's value to Model.
Note : maybe in template selector, html can't render in DOM until you render it manually. |
13892 | I need to run tests in different environments : `DEV`, `STAGING`, `PRODUCTION`.
And needless to say, the environment variables/secrets for the above environments would obviously be different.
A quick solution would be to have an env file for each environment, like `dev.env`, `staging.env` & `prod.env`
But according to the docs of popular dotEnv npm package and 12 Factor app, it is not recommended to have multiple `.env` files in your repo.
Please give me a practical solution of managing env vars for multiple environments.
* <https://github.com/motdotla/dotenv#should-i-have-multiple-env-files>
* <https://12factor.net/config> | If I understand correctly what they're writing here:
**Should I have multiple .env files?**
>
> No. We strongly recommend against having a "main" .env file and an "environment" .env file like .env.test. Your config should vary between deploys, and you should not be sharing values between environments.
>
>
>
This doesn't mean that you shouldn't have multiple env files, but rather that you shouldn't have one `main.env` file with all the default configuration and additional env files (one per environment) that inherit from `main.env` and override certain values.
The reason why it's not recommended is that with such a configuration it's difficult to understand "where a specific value is coming from?" (from which one of the following: main-env-file, specific-env-file, env-variable, code-default and etc).
That said, if you create multiple env files without such a "main" this means that you'll need to duplicate many of the values all over the different env files, which is better because of explicitness, but has the downside of duplication/verbosity.
Configuration is not trivial IMO and while you have only a small project it doesn't matter much how you choose to implement, but if we're talking about something more critical like a company's product, then there are many solutions available out there, some are open-source and free, some cost money, but it's worth doing your research and figure out which one provides you the benefits that are more meaningful to your use-case.
Some of the more popular tools are: [Puppet](https://puppet.com/), [Ansible](https://www.ansible.com/), and [Chef](https://www.chef.io/products/chef-infra). |
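If you do end up keeping one file per environment despite that recommendation, a common minimal pattern is to pick the file from `NODE_ENV` at startup (file names are up to you):

```js
// loads .env.dev, .env.staging or .env.prod depending on NODE_ENV
require('dotenv').config({ path: `.env.${process.env.NODE_ENV || 'dev'}` });
```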
14150 | ```
// RecursiveBinarySearch.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#define N 9
int RecursiveBinarySearch(int A, int low, int high, int x);
int main()
{
int A[N];
int index = 0;
//Put
A[0] = 2;
A[1] = 6;
A[2] = 13;
A[3] = 21;
A[4] = 36;
A[5] = 47;
A[6] = 63;
A[7] = 81;
A[8] = 97;
printf("Elements in Array A\n");
while (index <= 8) {
printf("%d ", A[index]);
index++;
}
printf("\nLocation(index) of element 63\n");
printf("%d", RecursiveBinarySearch(A, 0, 8, 63));
return 0;
}
int RecursiveBinarySearch(int A, int low, int high, int x) {
//Base Condition
if (low > high)
return -1;
int mid = low + (high - low) / 2;
if (x == A[mid])
return mid;
else if (x < A[mid])
return RecursiveBinarySearch(A, low, mid - 1, x);
else
return RecursiveBinarySearch(A, mid + 1, high, x);
}
```
Here's first problem.
Visual studio says
int A[9]
argument of type "int\*" is incompatible with parameter of type "int"
Here's second problem.
int mid
expression must have pointer-to-object type
I don't know well about pointer so i want to know why this code can't be compiled and how to use pointer in this code. | Better remove all assignements `A[0] = ..., A[1] = ...` alltogether and write:
```
int A[] = {2,6,13,21,36,47,63,81,97}
```
And replace
```
while (index <= 8)
```
by:
```
while (index < sizeof(A)/sizeof(A[0]))
```
`sizeof(A) / sizeof(A[0])` is the number of elements if the array `A`. `sizeof(A)` is the size in bytes of the whole array, and `sizeof(A[0])` is the size of one elements of the array in bytes.
---
But the real problem is here:
Replace:
```
int RecursiveBinarySearch(int A, int low, int high, int x)
```
by
```
int RecursiveBinarySearch(int A[], int low, int high, int x)
```
There may be more errors though. |
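One of those remaining errors is the forward declaration near the top of the file: it has the same problem as the definition, so it needs the array parameter too:

```
int RecursiveBinarySearch(int A[], int low, int high, int x);
```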
14604 | I know this question has been asked before. I checked through multiple answers on this site,
for example:
[Wordpress loop with different bootstrap columns](https://stackoverflow.com/questions/54568904/wordpress-loop-with-different-bootstrap-columns)
<https://wordpress.stackexchange.com/questions/222278/how-to-separate-posts-loop-in-to-two-columns/222281>
... but I cannot work out how to integrate answers with my code (assuming that is possible).
I want to display a list of Categories and their related posts on a page.
The code I'm using works fine BUT displays the results in a single column down the page:
[![enter image description here](https://i.stack.imgur.com/ukuUZ.jpg)](https://i.stack.imgur.com/ukuUZ.jpg)
I want to split the display into 2 columns, like in the image below, if possible:
[![enter image description here](https://i.stack.imgur.com/dRfHK.jpg)](https://i.stack.imgur.com/dRfHK.jpg)
The code I'm using (currently placed in a new page template) is as follows:
```
<?php
/*
* Template Name: Alphabetical List
*/
get_header();
// Grab all the categories from the database that have posts.
$categories = get_terms( 'category', 'orderby=name&order=ASC');
// Loop through categories
foreach ( $categories as $category ) {
// Display category name
echo '<h2 class="post-title">' . $category->name . '</h2>';
echo '<div class="post-list">';
// WP_Query arguments
$args = array(
'cat' => $category->term_id,
'order' => 'ASC',
'orderby' => 'title',
);
// The Query
$query = new WP_Query( $args );
// The Loop
if ( $query->have_posts() ) {
while ( $query->have_posts() ) {
$query->the_post();
?>
<p><a href="<?php the_permalink();?>"><?php the_title(); ?></a></p>
<?php
} // End while
} // End if
echo '</div>';
// Restore original Post Data
wp_reset_postdata();
} // End foreach
get_footer();
?>
```
Wondering if anyone can help me to get this code to display loop results in 2 columns.
Many thanks.
**UPDATE TO QUESTION**
Karl, thanks for your answer. Your script works, but with a small problem:
The Categories/Related Posts display in 2 columns but a 'gap/space' appears in the middle of the display of data (see image below):
[![enter image description here](https://i.stack.imgur.com/BWjin.jpg)](https://i.stack.imgur.com/BWjin.jpg)
I added to your code slightly so I could display a custom field I inserted into each post. I'm not sure if this has caused the problem.
Altered code (changes are immediately after $query->the\_post();):
```
<?php
/*
* Template Name: Alphabetical List
*/
get_header();
?>
<div style="height:100px"></div>
<?php
// Grab all the categories from the database that have posts.
$categories = get_terms( 'category', 'orderby=name&order=ASC');
// Loop through categories
echo "<div class='new-column'>";
$counter = 0;
foreach ( $categories as $category ) {
if($counter % 4 == 0 && $counter !=0){
echo "<div class='new-column'>";
}
// Display category name
echo '<h2 class="post-title">' . $category->name . '</h2>';
echo '<div class="post-list">';
// WP_Query arguments
$args = array(
'cat' => $category->term_id,
'order' => 'ASC',
'orderby' => 'title',
);
// The Query
$query = new WP_Query( $args );
// The Loop
if ( $query->have_posts() ) {
while ( $query->have_posts() ) {
$query->the_post();
$customfieldvalue = get_post_meta($post->ID, "PDF", true);
?>
<p><a href="<?php echo $customfieldvalue; ?>" target="_blank"><?php
the_title(); ?></a></p>
<?php
} // End while
} // End if
echo '</div>';
// Restore original Post Data
wp_reset_postdata();
$counter++;
if($counter % 4 == 0){
echo "</div>";
}
} // End foreach
if($counter % 4 != 0){
echo "</div>";
}
get_footer();
?>
``` | I've used bootstrap classes (row, col-6). Checked the size of categories array and used 2 variables - one as a counter and the other one to check if the column is first or second.
```
<?php
/*
* Template Name: Alphabetical List
*/
get_header();
// Grab all the categories from the database that have posts.
$categories = get_terms( 'category', 'orderby=name&order=ASC');
//get size of category
$catSize = sizeof($categories);
$j = 1;
$n = 1;
// Loop through categories
foreach ( $categories as $category ) {
if($n == 1){
echo '<div class="row">';
}
echo'<div class="col-6">';
// Display category name
echo '<h2 class="post-title">' . $category->name . '</h2>';
echo '<div class="post-list">';
// WP_Query arguments
$args = array(
'cat' => $category->term_id,
'order' => 'ASC',
'orderby' => 'title',
);
// The Query
$query = new WP_Query( $args );
// The Loop
if ( $query->have_posts() ) {
while ( $query->have_posts() ) {
$query->the_post();
?>
<p><a href="<?php the_permalink();?>"><?php the_title(); ?></a></p>
<?php
} // End while
} // End if
echo '</div></div>';
if($n == 1){
if($j == $catSize){
echo '<div class="col-6"></div>
</div>';
}
else{
$n = 2;
}
}
else{
echo '</div>';
$n =1;
}
$j++;
}
// Restore original Post Data
wp_reset_postdata();
get_footer();
?>
``` |
15027 | At the moment, I am working on a project that requires me to add three videos to the homepage, but loading them all at once would hurt the load time considerably.
Also, I want to use the `<video/>` tag instead of an `<iframe/>` because I want the autoplay functionality.
What's the best way to do this in React? Using NextJS and Chakra UI. | You can use [`IntersectionObserver`](https://developer.mozilla.org/en-US/docs/Web/API/Intersection_Observer_API) and do it as below. For React all you have to do is to add the below code in an `useEffect` with empty dependency.
```js
const video = document.querySelector("video");
function handleIntersection(entries) {
entries.map(async (entry) => {
if (entry.isIntersecting) {
const res = await fetch("/video.mp4");
const data = await res.blob();
video.src = URL.createObjectURL(data);
}
});
}
const observer = new IntersectionObserver(handleIntersection);
observer.observe(video);
```
```html
<video autoplay muted loop playsinline></video>
```
Also I used a video with a relative path to avoid possible [CORS](https://developer.mozilla.org/en-US/docs/Glossary/CORS) issues. |
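Wrapped into a React component for Next.js, the same idea could look roughly like this (a sketch; it assumes the file lives at `/video.mp4` under `public/`):

```js
import { useEffect, useRef } from "react";

export default function LazyVideo() {
  const videoRef = useRef(null);

  useEffect(() => {
    const video = videoRef.current;
    const observer = new IntersectionObserver(async ([entry]) => {
      if (entry.isIntersecting && !video.src) {
        const res = await fetch("/video.mp4");
        const data = await res.blob();
        video.src = URL.createObjectURL(data); // only download once it scrolls into view
        observer.disconnect();
      }
    });
    observer.observe(video);
    return () => observer.disconnect();
  }, []);

  return <video ref={videoRef} autoPlay muted loop playsInline />;
}
```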
15133 | I have a service that fetches some client data from my server:
```
app.factory('clientDataService', function ($http) {
var clientDataObject = {};
var cdsService = {
fetch: function (cid) {
//$http returns a promise, which has a then function, which also returns a promise
var promise = $http.get('/clients/stats/' + cid + '/').then(function (response) {
// The then function here is an opportunity to modify the response
console.log(response);
// The return value gets picked up by the then in the controller.
clientDataObject = {'data': response.data, 'currentClientID': cid};
return clientDataObject;
});
// Return the promise to the controller
return promise;
}
};
return cdsService;
});
```
Then in one controller I do:
```
//get stats
clientDataService.fetch($scope.id).then(function (response) {
$scope.client_data = {
'statistics': response.data
}
});
```
Which all works very well. However, I'm trying to do a watch from another controller on that service to update its scope when the data changes, rather than having to re-kick off the HTTP request:
```
$scope.$watch('clientDataService.clientDataObject', function (cid) {
alert(cid);
});
```
I'm just alerting for now, but it never ever triggers. When the page initially loads, it alerts "undefined". I have no errors in the console and all the $injects are fine, but it never seems to recognize that data has changed in the service. Am I doing something wrong in the watch?
Many thanks
Ben | clientDataService.clientDataObject is not part of your controller's scope, so you can't watch for changes on that object.
You need to inject $rootScope into your service and then broadcast the changes to the controllers' scopes.
```
app.factory('clientDataService', function ($rootScope, $http) {
var clientDataObject = {};
var cdsService = {
fetch: function (cid) {
var promise = $http.get('/clients/stats/' + cid + '/').then(function (response) {
// The then function here is an opportunity to modify the response
console.log(response);
// The return value gets picked up by the then in the controller.
clientDataObject = {'data': response.data, 'currentClientID': cid};
$rootScope.$broadcast('UPDATE_CLIENT_DATA', clientDataObject);
return clientDataObject;
});
// Return the promise to the controller
return promise;
}
};
return cdsService;
});
```
Then in the controller you can listen for the change using:
```
$scope.$on('UPDATE_CLIENT_DATA', function ( event, clientDataObject ) { });
``` |
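For example (a sketch based on the first controller in the question), the listener could update that controller's scope from the broadcast payload:

```
$scope.$on('UPDATE_CLIENT_DATA', function (event, clientDataObject) {
    $scope.client_data = {
        'statistics': clientDataObject.data
    };
});
```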
15140 | Trying to create a socketio link between flask server and reactjs client. It shows this error
"Access to XMLHttpRequest at '<http://127.0.0.1:5000/socket.io/?EIO=3&transport=polling&t=MrcruFC>' from origin '<http://localhost:3000>' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource."
I have tried including CORS from the Flask-CORS documentation but it still does not work.
Server:
```
from flask import Flask, Response
from flask_cors import CORS
from flask_socketio import SocketIO
app = Flask(__name__)
cors = CORS(app)
socketio=SocketIO(app)
@socketio.on('connection')
def handle_my_custom_event():
    socketio.emit('outgoing data', {'num': '10'})
@app.route("/")
def hello():
return 'Hello'
if __name__ == '__main__':
socketio.run(app, host='0.0.0.0', port=5000)
``` | You can add an option for creating SocketIO.
```
socketio = SocketIO(app=app, cors_allowed_origins='*')
``` |
15348 | I am making a "Treasure hunt" trails app. A user can go to a location, and when they reach that location an action will happen. To monitor the user's location I am using Geolocation.watchPosition() from "react-native-geolocation-service".
The point of interest (trailLocation) is fetched from our API and I set the "trailLocation" state with it; it updates correctly and all looks good. `(initialTrailLocation passed from previous screen)`
```
const [trailLocation, setTrailLocation] = useState(initialTrailLocation);
/**
* Function to get the next trail location
*/
let getNextLocation = async () => {
setIsLoading(true);
const credentials = await Keychain.getGenericPassword();
await axios.get(BACKEND_URL + 'api/v1/trails/' + trail.id + '/user/' + credentials.username + '/next-location?include=trail_location_ar_object.ar_object.ar_object_resources', {
headers: {
Authorization: 'Bearer ' + credentials.password,
},
},
)
.then(response => {
setTrailLocation(response.data.data.trail_location);
setIsLoading(false);
setUserWithinRange(false);
})
.catch(error => {
console.log(error);
});
};
/**
* Function called when the position changes
*/
function newPositionHandler(data) {
console.log("new position handler");
let radius = 1000;
let ky = 40000 / 360;
console.log(trailLocation);
let kx = Math.cos((Math.PI * trailLocation.lat) / 180.0) * ky;
let dx = Math.abs(trailLocation.lon - data.coords.longitude) * kx;
let dy = Math.abs(trailLocation.lat - data.coords.latitude) * ky;
setDistance(Math.sqrt(dx * dx + dy * dy));
console.log(Math.sqrt(dx * dx + dy * dy));
console.log('-------------------');
if(Math.sqrt(dx * dx + dy * dy) <= radius / 1000) {
setUserWithinRange(true);
} else {
setUserWithinRange(false)
}
};
/** Function called to initialise the watch position functionality */
async function watchPosition() {
console.log("watch position");
Geolocation.watchPosition((data) => newPositionHandler(data), (error) => console.log(error),{
enableHighAccuracy: false,
timeout: 10000,
maximumAge: 1000,
distanceFilter: 5,
},
);
};
```
However, when the success function from watchposition is triggered, it uses the original value of the "trailLocation" state and hence calculates the wrong distance between the user location and new trailLocation point. I can't understand why this is as all other functions use the correct state value. I log the values out and I can clearly see it using the initial state, but all other actions use the current state and the new trailLocation parameters are displayed on the screen.
Any help would be greatly appreciated. I can provide more details, it's my first question so cut me some slack ;)
Thanks | The data in your function is outdated - what's often referred to as a "stale closure". When you write `Geolocation.watchPosition((data) => newPositionHandler(data), ...`, the function is created with the state that exists at the time. When the function runs, this data has become outdated.
You can read more about solutions to this problem in this related question: [How To Solve The React Hook Closure Issue?](https://stackoverflow.com/questions/62806541/how-to-solve-the-react-hook-closure-issue) |
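One common workaround (just a sketch, and only one of several options; it assumes `useRef` and `useEffect` are imported from React) is to mirror the state into a ref so the long-lived callback always reads the latest value:

```js
// Keep a ref in sync with the trailLocation state
const trailLocationRef = useRef(initialTrailLocation);

useEffect(() => {
  trailLocationRef.current = trailLocation;
}, [trailLocation]);

function newPositionHandler(data) {
  // Read the ref instead of the state variable captured when watchPosition was set up
  const current = trailLocationRef.current;
  const ky = 40000 / 360;
  const kx = Math.cos((Math.PI * current.lat) / 180.0) * ky;
  const dx = Math.abs(current.lon - data.coords.longitude) * kx;
  const dy = Math.abs(current.lat - data.coords.latitude) * ky;
  // ... rest of the distance logic as in the question
}
```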
15627 | I am an international student in an interdisciplinary PhD in the social sciences.
1. My supervisor has about 60 students and does not do any advising beyond a few personal favorites.
2. My second supervisor considers me a student of the first, resents the fact that he is the expert on the subject, and as a result won't read my thesis either. This has gone on for seven years and now I must submit. I speak four European languages and my thesis is in an area which is very relevant. Right now I can't see any future for myself as there are no jobs back home.
1. What do I do about my supervisors, as the PhD is graded here?
2. Can I apply for postdocs/tenure track (community college is more than fine) in North America with a PhD from Germany, or do they prefer North American PhDs? How do I go about it? I have publications.
I'd be grateful for any advice/career suggestions etc. I don't mind leaving academia but I cannot return home as there are no jobs there. | It is not uncommon for a German professor not to give advice to their PhD students. Neither is it uncommon that the 2nd referee does not want to read your thesis. In Germany, whether your dissertation is good enough or not is mainly decided by the first advisor. If they say it's OK, then it's OK. Your advisor themselves chairs the exam committee; the other members would not say "no" to them.
That is, there is no need to worry about your situation.
If I were you, I would just write up my dissertation, give it to my advisor, and ask them for advice. Then, from what I know, the advisor would find some time to read it and tell you something to improve (they have to pretend they understood your thesis). Then you could make the improvements and mail the thesis to the 2nd referee. Again, this referee would also tell you something to improve; you do it, and that's it.
15648 | I want to write unique initials before listing an array of names, so given
const names = ["Bill", "Jack", "john"], I would like to print something like:
```
<ul>
<li>B
<ul>
<li>Bill</li>
</ul>
</li>
<li>J
<ul>
<li>John</li>
<li>jack</li>
</ul>
</li>
</ul>
```
The way I found to do this is to push the JSX into an array before rendering it like:
```
const RenderNames = () => {
let initials = [];
let renderData = [];
names.forEach(name => {
let initial = name.charAt(0).toUpperCase();
if(initials.indexOf(initial) === -1){
initials.push(initial)
renderData.push(<li>{initial}</li>)
}
renderData.push(<li>{name}</li>)
});
return <ul>{renderData}</ul>;
}
```
But I feel the code is a bit clunky, and I can only push in tags that are immediately closing. Is this the best way to do things or could it be done better? | Here we go:
```
const names = ['Bill', 'Jack', 'john', 'Alex'];
const groupedNames = names.reduce((accumulator, name) => {
// first char of name, uppercased
const firstLetter = name[0].toUpperCase();
// check if data for key exist
const namesList = accumulator[firstLetter] || [];
// check if name in array exist to prevent duplicates
// keep in mind for example John and john are not the same
if (!namesList.includes(name)) {
namesList.push(name);
}
// collect data and return
return {...accumulator, [firstLetter]: namesList}
}, {});
```
and result is
```
{ B: [ 'Bill' ], J: [ 'Jack', 'john' ], A: [ 'Alex' ] }
```
Then you can sort keys and `map()` over it. |
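For example, a rough rendering sketch using the `groupedNames` object from above (the component name is my own):

```jsx
const RenderNames = () => (
  <ul>
    {Object.keys(groupedNames)
      .sort()
      .map((initial) => (
        <li key={initial}>
          {initial}
          <ul>
            {groupedNames[initial].map((name) => (
              <li key={name}>{name}</li>
            ))}
          </ul>
        </li>
      ))}
  </ul>
);
```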
15816 | I'm trying to display a static image located in the same folder as my HTML file, but it seems I can't get the right path for it to display correctly. The application I'm developing is an Atlassian plugin that also includes a Java backend to get data from the database, and I'm displaying it on the frontend using HTML and JavaScript; the whole application runs on a web server as a plugin. Both the image and the HTML file are located here: `D:\clone4\project\src\main\resources\templates\scheduleraction`
The URL path for the web application is:
`https://staging.com/jira/secure/SchedulerAction!default.jspa`
I tried many ways and this is the last one :
`<img src="/SchedulerAction!default.jspa/piechart.jpg" alt="pie-chart">`
I need to add the correct path in the "src" so the client can retrieve the image from my Files on the webserver. I would love any hint or help!
[![enter image description here](https://i.stack.imgur.com/kdOj9.png)](https://i.stack.imgur.com/kdOj9.png) | ```
s=df.mask((df-df.apply(lambda x: x.std() )).gt(5))#mask where condition applies
s=s.assign(A=s.A.fillna(s.A.max()),B=s.B.fillna(s.B.max())).sort_index(axis = 0)#fill with max per column and resort frame
A B
0 1.0 2.0
1 1.0 6.0
2 2.0 8.0
3 1.0 8.0
4 2.0 1.0
``` |
16024 | I'm running a pretty time-consuming method in a thread, and one of the things it does is set the background of a grid named Background to a solid color if no image is available. Here's how that snippet looks:
```
SolidColorBrush scb = new SolidColorBrush();
scb.Color = Color.FromRgb(21, 21, 21);
Dispatcher.BeginInvoke(new Action(() => Background.Background = scb));
```
But I always get errors at this place saying `"Cannot use a DependencyObject that belongs to a different thread than its parent Freezable"`
Does anyone know why this is happening? The Dispatcher should make this problem go away, right?
Here's how I am calling the method by the way (if needed)
```
Thread BGthread = new Thread(HandleBackgrounds);
BGthread.Start();
``` | `SolidColorBrush` is a dependency object - and you're creating it in the non-UI thread, then trying to use it in the UI thread. Try this instead:
```
Action action = () =>
{
SolidColorBrush scb = new SolidColorBrush(Color.FromRgb(21, 21, 21));
Background.Background = scb;
};
Dispatcher.BeginInvoke(action);
```
Or of course just in one statement:
```
Dispatcher.BeginInvoke((Action)(() =>
    Background.Background = new SolidColorBrush(Color.FromRgb(21, 21, 21))));
```
Either way, you're creating the `SolidColorBrush` in the action that you're passing to the dispatcher. |
16119 | I have a script where I keep time of when I start and finish. This code works on Linux, but not on my MacOS Sierra v 10.12.6
```
start=`date +"%a %b %d %Y %r"`
end=`date +"%a %b %d %Y %r"`
elapsed_time=`date -d @$(( $(date -d "$end" +%s) - $(date -d "$start" +%s) )) -u +'%H:%M:%S'`
```
The error I get is:
```
usage: date [-jnRu] [-d dst] [-r seconds] [-t west] [-v[+|-]val[ymwdHMS]] ...
[-f fmt date | [[[mm]dd]HH]MM[[cc]yy][.ss]] [+format]
usage: date [-jnRu] [-d dst] [-r seconds] [-t west] [-v[+|-]val[ymwdHMS]] ...
[-f fmt date | [[[mm]dd]HH]MM[[cc]yy][.ss]] [+format]
-bash: - : syntax error: operand expected (error token is " ")
```
Is there a way to change this so that it works on my Mac? | **Yes**, you definitely **can track moving surfaces and moving objects** in `ARCore`.
If you track a static surface using `ARCore`, the resulting features are mainly suitable for so-called `Camera Tracking`. If you track a moving object/surface, the resulting features are mostly suitable for `Object Tracking`.
You can also mask moving/non-moving parts of the image and, of course, invert the Six-Degrees-Of-Freedom (translate `xyz` and rotate `xyz`) camera transform.
>
> Watch [this video](https://www.reddit.com/r/oculus/comments/70tz8p/instead_of_tracking_the_static_environment_in/) to find out how they succeeded.
>
>
>
[![enter image description here](https://i.stack.imgur.com/agUy4.jpg)](https://i.stack.imgur.com/agUy4.jpg) |
16180 | I'm currently building a food application which has users select a range of options (perishable, non-perishable ) and (snack, produce or meal) via a set of radio buttons. I'm currently using node.js and sqlite 3 to query a database to determine which entries to return to the user after they search the database.
I want to write a query such that when the booleans from the client side are sent over to the server, the server will choose the entries accordingly: if perishable is set to true on the client, the query should return just the perishable items, and vice versa. I also want the same functionality for the snack/meal/produce options.
Example:
```
perishable = request.perishable
non-perishable = request.non-perishable
snack = request.snack
meal = request.meal
produce = request.produce
var q = 'SELECT * FROM posts WHERE available == true AND (if perishable is true all rows where the perishable column is set to true... etc );
``` | How about (insert as many `row.names` as you want rows in the output `data.frame`):
```
data = data.frame(row.names = '1')
data[paste0('a', 1:10)] = .614
data[paste0('c', 1:10)] = -6.198
data[paste0('d', 1:10)] = 35.952
```
Or (column names won't be exactly right; thanks @Frank for simplifying my approach here)
```
data.frame(a = .641, c = -6.198, d = 35.052)[ , rep(1:3, each = 10)]
``` |
16316 | I need to query Sql Server and get an ID that identifies the machine on which SQL Server is installed. An ID that works well also in some complex scenarios like failover clusters, or similar architectures.
I need this for a license check; I bind my licence key to an ID. Currently I am creating "my ID" using a combination of the database creation date and the server name, but this is not very good, especially because it is a database ID, not a server ID.
In the past I used to read the hard disk serial number with an extended stored procedure, but isn't there (at least in SQL Server 2008) a simpler way to get this ID?
I don't want to use a CLR stored procedure. | >
> get an ID that identifies the machine on which SQL Server is installed. An ID that works well
> also in some complex scenarios like failover clusters, or similar architectures.
>
>
>
So what do you want? Identify the MACHINE, or identify the CLUSTER and handle enterprise scenarios?
In general this is not possible without extended stored procedures or CLR. Period. You rule out all approaches, then there are none left. And the fact that you read the hard disk serial number already implies that you are NOT prepared to handle larger installs. What do you read if there is no hard disk (as a disk), but a LUN on a SAN? What do you do in case the hard disk sits behind a hardware RAID controller? The approach you took from the start was pretty much not working, but you failed to see it because you don't deal with enterprise customers ;) Which also means you don't handle complex scenarios (clusters etc.) ;)
As gbn says, don't bother enterprise customers with childish copy protection; put a license agreement in place. In general, enterprise software has few technical limitations, to allow enterprises to provide good service without jumping through hoops to get another licensing key just for a lab install.
16640 | I'm trying to build a RecyclerView filled with card view items. For each item I need 2 small images that I load from a URL. Everything works fine only if I load the sample image from the Picasso website ([http://i.imgur.com/DvpvklR.png](https://i.imgur.com/DvpvklR.png)). Every other picture I try to load doesn't show up.
Here's my code; inside my RecyclerViewAdapter, inside the onBindViewHolder() function I'm calling:
```
fun loadImageInBackground(item : Footballer, holder : ViewHolder){
doAsync {
var loadImage = Picasso.get().load(item.footballerImageUrl)
var loadFlagImage = Picasso.get().load(item.flagImageUrl)
uiThread {
loadImage.into(holder?.footballerImage)
loadFlagImage.into(holder?.flagImage)
}
}
}
```
I'm using Kotlin ANKO for "doAsync".
Any ideas? Thanks in advance! | For loops in R all run in the same scope, which means a variable defined in the loop will be shared by all iterations. This is an issue if you create a function in each loop iteration that accesses this variable, and assume that it'll be unique for each iteration.
Here's a simple demo:
```
counter <- 0; funcs <- list()
for (i in 1:3) {
counter <- counter + 1
funcs[[i]] <- function() print(counter)
}
for (i in 1:3) {
funcs[[i]]() # prints 3 3 3
}
```
In this Shiny app, the `observeEvent` handler accesses the local variable `add`, and doesn't get called until after the for loop is over, and `add` is at its final value.
There are a few ways to get around this and create a unique scope for each loop iteration. My favorite is to use an `apply` function to replace the for loop. Then each `apply` iteration runs in its own function, so local variables are unique for each item.
```
library(shiny)
# Define the UI
ui <- fluidPage(
#actionButton("adder", "Add"),
tags$div(id = 'placeholder')
)
# Define the server code
server <- function(input, output) {
rv <- reactiveValues(counter = 0)
lapply(1:3, function(i) {
isolate({
rv$counter <- rv$counter + 1
add <- sprintf("%03d",rv$counter)
#prefix <- generateRandomString(1,20)
filterId <- paste0('adder_', add)
divId <- paste0('adder_div_', add)
elementFilterId <- paste0('adder_object_', add)
removeFilterId <- paste0('remover_', add)
insertUI(
selector = '#placeholder',
ui = tags$div(
id = divId,
actionButton(removeFilterId, label = "Remove filter", style = "float: right;"),
textInput(elementFilterId, label = paste0("Introduce text #",rv$counter), value = "")
)
)
})
# Observer that removes a filter
observeEvent(input[[removeFilterId]],{
removeUI(selector = paste0("#", divId))
})
})
}
# Return a Shiny app object
shinyApp(ui = ui, server = server, options = list(launch.browser = T))
```
Note that I also removed the outer `observeEvent` since the server function runs on session initialization anyway. |
16768 | I have run into a couple of problems while trying to convert an existing JDBC application to use HSQLDB version 2.2.9 (Currently the codebase runs successfully on MySQL, ORACLE and SQLServer, but an embedded database seemed like a good option too).
I will ask the questions one at a time and separately (although they all relate to the JDBC `ResultSet.deleteRow()` method, supported since HSQLDB 2.0)
Why does `rs.next()` return false after calling `rs.deleteRow()`?
Here is a complete self contained code sample (including simple table creation, sample inserts, and deleting the table at the end):
```
int deletedRows=0;
try{
Connection c = DriverManager.getConnection("jdbc:hsqldb:mem:mytestdb",
"SA", "");
String createSQL =
"create table test (num INTEGER PRIMARY KEY, str VARCHAR(25))";
Statement createStmt = c.createStatement();
createStmt.execute(createSQL);
createStmt.close();
String ins = "insert into test (num,str) values (?,?)";
PreparedStatement pStmt = c.prepareStatement(ins);
for(int i=0; i<100; i++){
pStmt.setInt(1, i);
pStmt.setString(2, "String"+i);
pStmt.execute();
}
// there should now be 100 rows in the table
String select = "SELECT * FROM test";
PreparedStatement stmt = c.prepareStatement(select,
ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_UPDATABLE);
ResultSet rs = stmt.executeQuery();
rs.beforeFirst();
while(rs.next()){
int num = rs.getInt("num");
if((num%7)==0){
System.out.println("Deleting row:"+num);
rs.deleteRow();
deletedRows++;
}
}
Statement dropStmt = c.createStatement();
dropStmt.execute("drop table test;");
dropStmt.close();
} catch (SQLException sqle) {
System.out.println("Deleted "+deletedRows+
" rows before exception: "+sqle.getMessage());
sqle.printStackTrace();
}
```
When running the same code on MySQL database, the output shows that every 7th row is deleted:
>
> Deleting row:0
>
>
> Deleting row:7
>
>
> ...
>
>
> Deleting row: 98
>
>
>
On HSQLDB, the output is:
>
> Deleting row:0
>
>
>
The `rs.next()` returns false after the first call to `rs.deleteRow()`.
I cannot find any information in the HSQLDB javadoc. Does anyone have any ideas? | As I also commented before, this sounds like a bug in the HSQLDB JDBC implementation. The JDBC 4.1 spec (section 15.2.4.2) says:
>
> After the method `deleteRow` has been called, the cursor will be positioned before the next valid row. If the deleted row is the last row, the cursor will be positioned after the last row.
>
>
>
This implies that the call to `next()` should have returned `true` (if the `ResultSet` contained more rows). |
16797 | I have the following UDF, used to convert time stored as a string into a timestamp.
```
val hmsToTimeStampUdf = udf((dt: String) => {
if (dt == null) null else {
val formatter = DateTimeFormat.forPattern("HH:mm:ss")
try {
new Timestamp(formatter.parseDateTime(dt).getMillis)
} catch {
case t: Throwable => throw new RuntimeException("hmsToTimeStampUdf,dt="+dt, t)
}
}
})
```
This UDF is used to convert a `String` value into a `Timestamp`:
```
outputDf.withColumn(schemaColumn.name, ymdToTimeStampUdf(col(schemaColumn.name))
```
But some CSV files have invalid values for this column, causing a `RuntimeException`. I want to find which rows have these broken records. Is it possible to access row information inside the UDF? | Instead of throwing a `RuntimeException` that kills your CSV parsing, maybe a better approach would be to have the UDF return a (well-formed, corrupted) tuple. Then you can easily segregate good/bad rows by selecting the `is null`/`is not null` subsets.
```
def safeConvert(dt: String) : (Timestamp,String) = {
if (dt == null)
(null,null)
else {
val formatter = DateTimeFormat.forPattern("HH:mm:ss")
try {
(new Timestamp(formatter.parseDateTime(dt).getMillis),null)
} catch {
case e:Exception =>
(null,dt)
}
}
}
val safeConvertUDF = udf(safeConvert(_:String))
val df = Seq(("00:01:02"),("03:04:05"),("67:89:10")).toDF("dt")
df.withColumn("temp",safeConvertUDF($"dt"))
.withColumn("goodData",$"temp".getItem("_1"))
.withColumn("badData",$"temp".getItem("_2"))
.drop($"temp").show(false)
+--------+-------------------+--------+
|dt |goodData |badData |
+--------+-------------------+--------+
|00:01:02|1970-01-01 00:01:02|null |
|03:04:05|1970-01-01 03:04:05|null |
|67:89:10|null |67:89:10|
+--------+-------------------+--------+
``` |
17176 | Using mobile detection (JS or PHP), is it possible to display a top nav made for mobile only?
Also, I see a lot of mobile detection in PHP - where is it supposed to be placed?
```
{php}
function isMobileBrowser($user_agent = '') {
foreach (array('iPhone','Android','Windows CE', 'PPC', 'Smartphone', 'IEMobile', 'Opera Mini') as $mobile_browser_ua_snippet) {
if (stristr($user_agent, $mobile_browser_ua_snippet)) {
return true;
}
}
return false;
}
{/php}
{if isMobileBrowser($_SERVER['HTTP_USER_AGENT'])}
{include file="templates/layouts/zones/mobileMenu.html"}
{/if}
```
Now I get the error "syntax error: unbalanced parenthesis in if statement" - I am not sure about that. | Have you tried changing the query to:
```
mysql_query("UPDATE `upcoming` SET `title` = '$title', `date` = '$date', `repeat` = '$repeat', `location` = '$location', `location_link` = '$location_link', `group` = '$group', `group_link` = '$group_link', `notes` = '$notes', `enabled` = '$enabled' WHERE `id` = '$key' LIMIT 1") or die(mysql_error());
```
Edit: And as others have stated; you are using reserved words. I recommend always using the ` symbol. (This can be found at the top left for most keyboards: under the escape key, above the tab key, to the left of the number 1 key.) |
17224 | My Spring boot application packaged as ROOT.war on tomcat9 using Java 11 appears to load successfully but fails to map the controllers. I can view every page by going right to the .jsp but my controller's map the URL's without .jsp. If I go to the mapped URL I get the standard 404 page.
This app works locally but expresses this behavior on my remote VM.
I've not seen this before and can't find anything relevant in my logs. I have another application with the exact same setup that works fine. What am I missing and/or what should I try next?
Here is my context file:
```
<?xml version="1.0" encoding="UTF-8"?>
<Context antiResourceLocking="true" path=""/>
```
Using standard spring application setup:
```
@SpringBootApplication
public class Application extends SpringBootServletInitializer{
public static void main(String[] args) {
SpringApplication.run(Application.class, args);
}
```
And a basic request mapping and ModelAndView response:
```
@RestController
public class PageController extends BasePageController {
private static final Logger LOGGER = LoggerFactory.getLogger(PageController.class);
@Autowired private Configuration configuration;
@Autowired
private OrderManager orderManager;
@RequestMapping(value = "/", method=RequestMethod.GET)
public ModelAndView indexSlash(HttpServletRequest arg0, HttpServletResponse arg1) {
```
This works fine locally and for another app with the same setup, making me think its some minor version issue. I am using tomcat 9.0.34 and Java 11.0.7. There is nothing in my app log. Here is my catalina.out file after startup:
```
NOTE: Picked up JDK_JAVA_OPTIONS: --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.rmi/sun.rmi.transport=ALL-UNNAMED
31-May-2020 15:33:46.744 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version name: Apache Tomcat/9.0.34
31-May-2020 15:33:46.748 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server built: Apr 3 2020 12:02:52 UTC
31-May-2020 15:33:46.748 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version number: 9.0.34.0
31-May-2020 15:33:46.749 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Name: Linux
31-May-2020 15:33:46.749 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Version: 5.3.0-1022-azure
31-May-2020 15:33:46.749 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Architecture: amd64
31-May-2020 15:33:46.749 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Java Home: /usr/lib/jvm/java-11-openjdk-amd64
31-May-2020 15:33:46.749 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Version: 11.0.7+10-post-Ubuntu-2ubuntu218.04
31-May-2020 15:33:46.750 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Vendor: Ubuntu
31-May-2020 15:33:46.750 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_BASE: /usr/local/tomcat9
31-May-2020 15:33:46.751 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_HOME: /usr/local/tomcat9
31-May-2020 15:33:46.786 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: --add-opens=java.base/java.lang=ALL-UNNAMED
31-May-2020 15:33:46.787 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: --add-opens=java.base/java.io=ALL-UNNAMED
31-May-2020 15:33:46.787 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: --add-opens=java.rmi/sun.rmi.transport=ALL-UNNAMED
31-May-2020 15:33:46.787 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.util.logging.config.file=/usr/local/tomcat9/conf/logging.properties
31-May-2020 15:33:46.787 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
31-May-2020 15:33:46.788 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djdk.tls.ephemeralDHKeySize=2048
31-May-2020 15:33:46.788 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.protocol.handler.pkgs=org.apache.catalina.webresources
31-May-2020 15:33:46.789 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dorg.apache.catalina.security.SecurityListener.UMASK=0027
31-May-2020 15:33:46.789 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dignore.endorsed.dirs=
31-May-2020 15:33:46.789 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcatalina.base=/usr/local/tomcat9
31-May-2020 15:33:46.789 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcatalina.home=/usr/local/tomcat9
31-May-2020 15:33:46.792 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.io.tmpdir=/usr/local/tomcat9/temp
31-May-2020 15:33:46.792 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent The APR based Apache Tomcat Native library which allows optimal performance in production environments was not $
31-May-2020 15:33:47.542 INFO [main] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["http-nio-8080"]
31-May-2020 15:33:47.606 INFO [main] org.apache.catalina.startup.Catalina.load Server initialization in [1,342] milliseconds
31-May-2020 15:33:47.738 INFO [main] org.apache.catalina.core.StandardService.startInternal Starting service [Catalina]
31-May-2020 15:33:47.744 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet engine: [Apache Tomcat/9.0.34]
31-May-2020 15:33:47.800 INFO [main] org.apache.catalina.startup.HostConfig.deployWAR Deploying web application archive [/usr/local/tomcat9/webapps/ROOT.war]
31-May-2020 15:33:47.831 INFO [main] org.apache.catalina.startup.ExpandWar.expand An expanded directory [/usr/local/tomcat9/webapps/ROOT] was found with a last modified time that did not match the associated W$
31-May-2020 15:33:54.545 INFO [main] org.apache.jasper.servlet.TldScanner.scanJars At least one JAR was scanned for TLDs yet contained no TLDs. Enable debug logging for this logger for a complete list of JARs $
31-May-2020 15:33:55.095 INFO [main] org.apache.catalina.startup.HostConfig.deployWAR Deployment of web application archive [/usr/local/tomcat9/webapps/ROOT.war] has finished in [7,295] ms
31-May-2020 15:33:55.100 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/tomcat9/webapps/host-manager]
31-May-2020 15:33:55.166 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat9/webapps/host-manager] has finished in [66] ms
31-May-2020 15:33:55.167 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/tomcat9/webapps/docs]
31-May-2020 15:33:55.207 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat9/webapps/docs] has finished in [40] ms
31-May-2020 15:33:55.208 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/tomcat9/webapps/examples]
31-May-2020 15:33:55.581 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat9/webapps/examples] has finished in [373] ms
31-May-2020 15:33:55.581 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/tomcat9/webapps/manager]
31-May-2020 15:33:55.625 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat9/webapps/manager] has finished in [43] ms
31-May-2020 15:33:55.633 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
31-May-2020 15:33:55.667 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in [8,059] milliseconds
``` | Yes, you can do that. There's nothing inherently wrong with it. However, it can make it impossible to recover from a failed allocation. If you're planning to exit the program when the allocation fails, it (usually) does not matter.
So this is perfectly fine:
```
ptr = realloc(ptr, newsize);
if(!ptr) exit(EXIT_FAILURE);
```
And so is this:
```
tmpptr = realloc(ptr, newsize);
if(!tmpptr) puts("Allocation failed. Continuing anyway.");
else ptr = tmpptr;
```
Note: It can be argued if it's ok to exit with a memory leak. In most cases, that does not matter, since the operating system will usually clean everything up upon exit. So be aware that in the first example, you will exit with a memory leak.
So if a memory leak does matter to you, even if you're exiting on failure, you can do something like this:
```
tmpptr = realloc(ptr, newsize);
if(!tmpptr) {
free(ptr);
exit(EXIT_FAILURE);
} else {
ptr = tmpptr;
}
```
However, do notice that this would only solve the problem for the `ptr` pointer. If you have a program of any complexity above simple school assignments, chances are pretty high that you have a few other allocations already going on. Keeping track of all that is possible, but far from trivial. Consider this very minimalistic example:
```
void foo() {
void *ptr1 = malloc(SIZE);
if(!ptr1) exit(EXIT_FAILURE);
bar();
free(ptr1);
}
void bar() {
void *ptr2 = malloc(SIZE);
if(!ptr2) {
// Here we should call free on ptr1 to avoid memory leaks, but how?
exit(EXIT_FAILURE);
}
// Do work
free(ptr2);
}
```
Of course, you could easily rewrite it:
```
void foo() {
void *ptr1 = malloc(SIZE);
if(!ptr1) exit(EXIT_FAILURE);
if(!bar()) exit(EXIT_FAILURE);
free(ptr1);
}
int bar() {
void *ptr2 = malloc(SIZE);
if(!ptr2) return 0;
// Do work
free(ptr2);
return 1;
}
```
In this case, it was easy, but keep in mind that this example is VERY trivial.
I would not bother unless I have a very good reason to care. It makes the code messy and is completely unnecessary on modern operating systems. Unless I'm writing code for an environment where it does matter, I would only care if I'm writing a function where I want it to be up to the caller to exit or not.
The bottom line here is that making sure that all allocated resources are freed by the program on exit can be a complete nightmare. And unless you have the need for it and are willing to spend quite a lot of time getting it right, then don't bother.
In practice, dealing with this would require you to take this into account for almost all aspects of the design. And that's very rarely a good tradeoff.
Related question: [dangers of \_exit() - memory leak?](https://stackoverflow.com/q/3720872/6699433) |
17349 | Is the following always true?
$$
\arctan(f(x))\leq f(x),\ \ \text{where}\ \ |f(x)|<1\ \ \forall x\in\mathbb{R}
$$
For example, is it correct to say that
$$
\arctan\left(\frac{\sin x}{\sqrt{x+1}}\right)\le\frac{\sin x}{\sqrt{x+1}}\ \ \forall x\in\mathbb{R},
$$
since $\arctan t=t-\frac{t^3}{3}+\frac{t^5}{5}-\dots$, where each term is not greater than the one before?
I'm confused because in the example above $\arctan$'s argument can be negative (i.e., $\sin x$ can be negative). | If $0\leq f(x) <1$, yes, of course. Since $\frac{d}{dy}\arctan y=\frac{1}{1+y^2}\leq1$ (and $\arctan(0)=0$), you'll have
$$ \arctan y \leq y, \quad\text{for }\ y\geq0. $$
Clearly, the situation is reversed if $y<0$, because both functions are negative. You can say that $|\arctan f(x)|\leq|f(x)|$ with no restrictions on $f(x)$, though. |
18415 | This question has already been asked but was never answered properly. After clearance with @Seth I am now asking it again. This will allow me to respond and possibly modify the question a lot easier. The original question can be found here:
[Map Ctrl and Alt to mouse thumb buttons](https://askubuntu.com/questions/162141/map-ctrl-and-alt-to-mouse-thumb-buttons)
---
**Issue:**
Though it is very simple to map any keystrokes to a mouse button using `xbindkeys` in conjunction with `xdotool` or `xte` it seems a lot more problematic to map a modifier key (e.g. *ALT*, *CTRL*, *SHIFT* etc.) to it.
The final soloution should allow i.a. a *CTRL* + *click* (e.g. for selecting multiple entries of a list) with just the mouse.
A couple of possible approaches to solve this can be found here at Stack Exchange as well as at other Linux related forums. But none of those work as expected as they lead to other issues and side effects.
**Notes:**
Some of the examples below involve *Guile* with *Scheme* syntax and rely on `.xbindkeysrc.scm` file whereas others rely on the `.xbindkeysrc` file with its respective syntax. I am aware that they won't work together.
Furthermore the below snippets rely on `xdotool` only but I am open to approaches involving other applications like e.g. `xte` as well - though it seems both lead to the same results and therefore I am using just `xdotool` actions here.
**Approach A:**
Updating the `.xbindkeysrc` file with:
```
"xdotool keydown ctrl"
b:8
"xdotool keyup ctrl"
release + b:8
```
That's what I initially tried but it has the side-effect that the modifier is being held and can not be released.
**Approach B:**
Updating the `.xbindkeysrc.scm` file with:
```
(xbindkey '("b:8") "xdotool keydown ctrl")
(xbindkey '(release "b:8") "xdotool keyup ctrl")
(xbindkey '("m:0x14" "b:8") "xdotool keydown ctrl")
(xbindkey '(release "m:0x14" "b:8") "xdotool keyup ctrl")
```
Found at <http://www.linuxforums.org/forum/hardware-peripherals/169773-solved-map-mouse-button-modifier-key.html> and tries to address the issue where the modifier is being held (as described at approach a).
Though it fixes that, it only works partially, as it is not possible to perform other mouse clicks while the thumb button is pressed.
**Approach C:**
Updating the `.xbindkeysrc` file with:
```
"xdotool keydown ctrl"
b:8
"xdotool keyup ctrl"
release + control + b:8
```
Tried out by OP of the linked question here at askubuntu. A lot simpler and more solid as it does not involve modifier states. Nevertheless the issue remains, i.e. a *CTRL* + *click* is not possible.
It seems that `xbindkeys` itself is the problem here as it recognizes the click but won't execute it. This can be tested using `xev | grep button` and `xbindkeys -v`:
A normal mouse click as recorded by `xev` should look like:
```
state 0x10, button 1, same_screen YES
state 0x110, button 1, same_screen YES
```
As well as for the thumb button:
```
state 0x10, button 8, same_screen YES
state 0x10, button 8, same_screen YES
```
But with the above `xbindkeys` configuration enabled, it does not record anything. Though it makes sense for the thumb button (it is mapped to *CTRL* and therefore is not a mouse button anymore), it is strange that *button 1* is not recorded either. This is likely because `xbindkeys` recognizes it but does not execute it:
```
Button press !
e.xbutton.button=8
e.xbutton.state=16
"xdotool keydown ctrl"
m:0x0 + b:8 (mouse)
got screen 0 for window 16d
Start program with fork+exec call
Button press !
e.xbutton.button=1
e.xbutton.state=20
Button release !
e.xbutton.button=1
e.xbutton.state=276
Button release !
e.xbutton.button=8
e.xbutton.state=20
"xdotool keyup ctrl"
Release + m:0x4 + b:8 (mouse)
got screen 0 for window 16d
Start program with fork+exec call
```
**Approach D:**
Updating the `.xbindkeysrc` file with:
```
"xdotool keydown ctrl"
b:8
"xdotool keyup ctrl"
release + control + b:8
"xdotool click 1"
b:1
```
Just too simple ... but leads to an infinite loop of clicks.
---
**UPDATE:**
In the meantime I've bought a Logitech G502 and noticed that once configured via the driver on Windows not only the profile itself is stored on the device memory but the actual keypress is done by the mouse. That in fact solved my problem on Linux!
The only other mouse I remember that was able to do that was the Razer Copperhead back in the day. But I guess there are other mice available today that can do the same. | I spent a lot of time trying to make that binding work. I eventually found a solution, which is complicated but works well and doesn't involve third-party software.
I share it here hoping it will help people. Besides, I know this is not perfect in terms of security, so any constructive feedback is more than welcome.
There are solutions that are really nice, [like the one proposed here](http://www.ralf-oechsner.de/opensource/page/logitech_performance_mx), but they always suffer from the limitation that xbindkeys grabs the entire mouse, making modifier+mouse click mapping uncertain. Plus, the Guile-based solution from the above link uses ctrl+plus/ctrl+minus, which isn't recognized by Gimp, for example.
I figured out that what we want is a mouse button that acts as a keyboard key, so I used uinput, which can be accessed [via Python](https://pypi.python.org/pypi/python-uinput), and wrote a script that monitors /dev/my-mouse for the thumb button click and sends the Ctrl key to the virtual keyboard. Here are the detailed steps:
1. Make udev rules
------------------
We want the devices to be accessible (rights and location).
**For the mouse :**
```
/etc/udev/rules.d/93-mxmouse.conf.rules
------------------------------------------------------------
KERNEL=="event[0-9]*", SUBSYSTEM=="input", SUBSYSTEMS=="input",
ATTRS{name}=="Logitech Performance MX", SYMLINK+="my_mx_mouse",
GROUP="mxgrabber", MODE="640"
```
Udev will look for a device recognized by the kernel with names like event5, and I select my mouse by its name. The SYMLINK instruction ensures I will find my mouse at /dev/my_mx_mouse. The device will be readable by a member of the group "mxgrabber".
To find information about your hardware, you should run something like
```
udevadm info -a -n /dev/input/eventX
```
**For uinput :**
```
/etc/udev/rules.d/94-mxkey.rules
----------------------------------------------------
KERNEL=="uinput", GROUP="mxgrabber", MODE="660"
```
No need for a symlink; uinput will always be in `/dev/uinput` or `/dev/input/uinput` depending on the system you're on. Just give it the group and the rights to read AND write, of course.
You need to unplug and replug your mouse, and the new link should appear in /dev. You can force udev to trigger your rules with `$ udevadm trigger`.
2. Activate UINPUT Module
-------------------------
```
sudo modprobe uinput
```
And to make it persistent across boots:
```
/etc/modules-load.d/uinput.conf
-----------------------------------------------
uinput
```
3. Create new group
-------------------
```
sudo groupadd mxgrabber
```
Or whatever you have called your access group. Then you should add yourself to it :
```
sudo usermod -aG mxgrabber your_login
```
4. Python script
----------------
You need to install the **python-uinput library** (obviously) and the **python-evdev library**. Use pip or your distribution package.
The script is quite straightforward; you just have to identify the event.code of your button.
```
#!/usr/bin/python3.5
# -*- coding: utf-8 -*-
"""
Sort of mini driver.
Read a specific InputDevice (my_mx_mouse),
monitoring for special thumb button
Use uinput (virtual driver) to create a mini keyboard
Send ctrl keystroke on that keyboard
"""
from evdev import InputDevice, categorize, ecodes
import uinput
# Initialize keyboard, choosing used keys
ctrl_keyboard = uinput.Device([
uinput.KEY_KEYBOARD,
uinput.KEY_LEFTCTRL,
uinput.KEY_F4,
])
# Sort of initialization click (not sure if mandatory)
# ( "I'm-a-keyboard key" )
ctrl_keyboard.emit_click(uinput.KEY_KEYBOARD)
# Useful to list input devices
#for i in range(0,15):
# dev = InputDevice('/dev/input/event{}'.format(i))
# print(dev)
# Declare the device path.
# I made a udev rule to ensure it's always the same name
dev = InputDevice('/dev/my_mx_mouse')
#print(dev)
ctrlkey_on = False
# Infinite monitoring loop
for event in dev.read_loop():
# My thumb button code (use "print(event)" to find)
if event.code == 280 :
# Button status, 1 is down, 0 is up
if event.value == 1:
ctrl_keyboard.emit(uinput.KEY_LEFTCTRL, 1)
ctrlkey_on = True
elif event.value == 0:
ctrl_keyboard.emit(uinput.KEY_LEFTCTRL, 0)
ctrlkey_on = False
```
5. Enjoy !
----------
All you need now is to make your Python file executable and ask your desktop manager to load it at startup. Maybe also a glass of wine to celebrate the good work!
6. Extra for free
-----------------
I use xbindkeys for additional behavior. For instance, the following configuration may be nice if you have a mouse with wheel side clicks :
```
~/.xbindkeysrc
---------------------------------------------
# Navigate between tabs with side wheel buttons
"xdotool key ctrl+Tab"
b:7
"xdotool key ctrl+shift+Tab"
b:6
# Close tab with ctrl + right click
# --clearmodifiers ensure that ctrl state will be
# restored if button is still pressed
"xdotool key --clearmodifiers ctrl+F4"
control+b:3
```
For this last combination to work, **you must disable the button you configured for the python script**, otherwise it will still be grabbed by xbindkeys. Only the Ctrl key must remain:
```
~/.Xmodmap
-------------------------------------------
! Disable button 13
! Is mapped to ctrl with uinput and python script
pointer = 1 2 3 4 5 6 7 8 9 10 11 12 0 14 15
```
Reload with `$ xmodmap ~/.Xmodmap`
7. Conclusion
-------------
As I said in the beginning, I'm not perfectly happy with the fact that I have to give myself the rights to write to /dev/uinput, even if it's through the "mxgrabber" group. I'm sure there is a safer way of doing that, but I don't know how.
**On the bright side, it works really, really well.** Any combination of keyboard or mouse keys that works with the keyboard's Ctrl key now works with the mouse's one!!
18430 | I'm looking for an answer to the following question. (An answer to a slightly different question would be good as well, since it could be useful for the same purpose.)
>
> Given a set *C* consisting of *n*
> subsets of {1, 2, ..., *n*}, each of
> size *k*, does there exist some small A
> $\subset$ {1, 2, ..., *n*} such that
> *A* intersects all (or all except a small number) of the sets in *C*?
>
>
>
Preferably, "small" will be $\epsilon$*n* where $\epsilon$ can be made arbitrarily small, as long as *n* and *k* are sufficiently large.
I'm hoping the answer is yes. Here is why some such *A* might exist: on average, each element of {1, 2, ..., *n*} intersects *k* sets in *C*, so one might hope to make do with *A* of size on the order of *n*/*k*.
This smells a bit like some version of Ramsey's theorem to me, or like the Erdős–Ko–Rado theorem, but it doesn't (as far as I can tell) follow directly from either. | I believe, reading the abstract, that the paper "Transversal numbers of uniform hypergraphs", Graphs and Combinatorics 6, no. 1, 1990 by Noga Alon answers your question in the affirmative, for some definition of ``your question''. Namely, the worst case is that $A$ has to have size about $2\log k/k$ times $n$, and this multiplier tends to zero as $k$ tends to infinity.
Here's a free copy of the paper.
<http://www.cs.tau.ac.il/~nogaa/PDFS/Publications/Transversal%20numbers%20of%20uniform%20hypergraphs.pdf>
I'm certainly no expert on these matters and my advice would be to look at this and related literature on transversals of hypergraphs. Your collection $C$ of sets is the same thing as a $k$-uniform hypergraph, and the property that you want from $A$ is equivalent to it being a transversal.
Reading Alon's paper a little more I see that what you want is the easier direction of his argument (which gives a tight dependence on $k$). The basic idea is to choose your transversal randomly by picking elements of $\{1,\dots,n\}$ with an appropriate probability $p$. That way, with high probability, you'll hit most of the sets from your collection $C$, and then you just add in one extra element of $A$ for each un-hit set from $C$.
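To make that a bit more concrete, here is my own back-of-the-envelope version of that direction (not taken from the paper): picking each element independently with probability $p$ and then adding one element for every set that was missed gives an expected transversal size of at most
$$pn + \sum_{S\in C}(1-p)^{|S|} = pn + n(1-p)^{k} \le n\left(p + e^{-pk}\right),$$
and choosing $p=\tfrac{\ln k}{k}$ bounds this by $\tfrac{n(\ln k+1)}{k}$, which is $\epsilon n$ once $k$ is large, consistent with the $2\log k/k$ bound mentioned above.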
Reading a little further still, I see that the upper bound is probabilistic as well: that is, to make a collection $C$ which is ``bad'', the best plan is to choose sets in $C$ at random from amongst all $k$-element subsets of $\{1,\dots,n\}$.
There's probably literature on your ``almost transversal'' question, but I'll leave someone else to find it. My guess is that random does best in both directions there too.
18505 | (Similar question related to SQL Server : [**SO Link**](https://stackoverflow.com/questions/2624713/how-do-i-insert-multiple-rows-without-repeating-the-insert-into-dbo-blah-part))
I know in Sql Server 2008 or above, you can insert multiple rows doing the following:
```
INSERT INTO MyTable (Name, ID)
VALUES ('First',1), ('Second',2), ('Third',3)
```
However, it seems like this syntax DOES NOT work in Sybase Adaptive Server Enterprise, since this gives me an error..
Anyone know the syntax in Sybase that achieves the same thing?
Sybase ASE is based on Transact SQL..
Thanks | Sybase doesn't have the multi-row insert syntax that SQL Server has. What's wrong with the classic method shown below?
```
INSERT INTO MyTable (Name, ID) VALUES ('First',1)
INSERT INTO MyTable (Name, ID) VALUES ('Second',2)
INSERT INTO MyTable (Name, ID) VALUES ('Third',3)
go
``` |
19047 | I want to open an image/video in a modal popup, so I have a button that shows the image on click. But when I have both an image and a video, it's difficult to know how to check which tag it is in jQuery.
My HTML code for zooming the image in a popup:
```
<div>
<img src="image/zoom-icon.png" id="zoomImg" >
<div class="bigthumb">
<img src="image/main_product_img.png" id="myImg" width="350px" height="350px">
<video class="no-display" width="350px" height="350px" controls>
<source src="image/movie.mp4" type="video/mp4">
</video>
</div>
</div>
<div class="modal fade" id="imagemodal" tabindex="-1" role="dialog" aria-labelledby="myModalLabel" aria-hidden="true">
<div class="modal-dialog">
<div class="modal-content">
<div class="modal-body">
<button type="button" class="close" data-dismiss="modal"><span aria-hidden="true">×</span><span class="sr-only">Close</span></button>
<img src="" class="imagepreview" style="width: 100%;" >
<video class="videopreview no-display" class="imagepreview" style="width: 100%;" controls>
<source src="" type="video/mp4">
</video>
</div>
</div>
</div>
</div>
```
My jQuery file code is here:
```
<script type="text/javascript">
$('#zoomImg').on('click', function() {
console.log($(this));
if ( ---HERE-- ) {
$('.imagepreview').attr('src', $('.bigthumb').find('img').attr('src'));
$('#imagemodal').modal('show');
} else {
$('.videopreview source').attr('src', $('.bigthumb').find('video source').attr('src'));
$('.videopreview').show();
$('#imagemodal').modal('show');
}
});
</script>
```
In the above script I want to check, in the if condition, whether it is an img tag or a video tag. | You can use [`.is()`](http://api.jquery.com/is/)
>
> Check the current matched set of elements against a selector, element, or jQuery object and return true if at least one of these elements matches the given arguments
>
>
>
```
if($(this).is('img')){
//Do something
}
``` |
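Applied to the handler from the question, the check could look roughly like this (a sketch only; it assumes the first visible child of `.bigthumb` is the element to preview):

```
$('#zoomImg').on('click', function() {
    var $media = $('.bigthumb').children(':visible').first();
    if ($media.is('img')) {
        $('.imagepreview').attr('src', $media.attr('src'));
    } else {
        $('.videopreview source').attr('src', $media.find('source').attr('src'));
        $('.videopreview').show();
    }
    $('#imagemodal').modal('show');
});
```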
19316 | I want to remove an index in a table whose access in php never uses the indexed column. Index takes up extra space and I am trying to trim it. It's a table of phone numbers. A phone number is linked to a user profile's id. So it has 3 columns. `id` (index), `number` and `person`. I was wondering if removing the index will affect the queries that use `number` or `person` in the where clause. My gut feeling is that it shouldn't but I am afraid computer science doesn't work on gut feelings. The data is accessed via `join`s. For example...
```
SELECT *
FROM people ... LEFT JOIN
phoneNumbers
ON people.id = phoneNumbers.person
```
Edit: Apparently no one seems to be able to answer the question in the title. | In the case you show, only the `person` column would benefit from an index.
Indexes help in basically four cases:
* **Row restriction**, that is finding the rows by value instead of examining every row in the table.
+ **Joining** is a subset of row restriction, i.e. each distinct value in the first table looks up matching rows in the second table. Indexing a column that is referenced in the `ON` clause is done in the same way you would index a column referenced in the `WHERE` clause.
* **Sorting**, to retrieve rows in index order instead of having to sort the result set as an additional step.
* **Distinct and Group By**, to scan each distinct value in an index.
* **Covering index**, that is when the query needs only the columns that are found in the index.
In the case of InnoDB, every table is treated as an *index-organized table* based on its primary key, and we should take advantage of this because primary key lookups are very efficient. So if you can redefine a primary key on your `phoneNumbers.person` column (in part), that would be best. |
19969 | So yesterday I was developing some sort of offline functionality. Therefore, I added an ApiService that returns Observables.
Currently, I fetch my access\_token for JWT authentication and then use this token to generate headers for my API request. After a successful request, I save the result to my storage. This works fine. Now here is the problem: I want to check for an unsuccessful request (e.g. servers are down, app is offline) and then return my stored result from storage. But I can't get it to work.
Here is my code:
```
getJobs(): Observable<any> {
this.auth.checkToken()
return from(this.storage.get(ACCESS_TOKEN)).pipe(
switchMap(token => {
let options = this.auth.addToken(token)
return this.http.get(API_URL + "jobs", options)
}),
map(res => {
if (res) {
this.storage.set(JOBS, res)
return res
} else {
return from(this.storage.get(JOBS))
}
}),
catchError(() => {
return from(this.storage.get(JOBS))
})
)
}
```
Further investigation has shown that when the server or the app is offline, neither the map() nor the catchError() functions are executed.
**UPDATE:**
The solution provided by DJ House is correct. My code works perfectly in my browser, but if I build my app with `ionic cordova build android` it gets stuck after the `this.http.get(...)`. So it's clearly an issue with Cordova.
**SOLUTION:**
Wow! Something magical happened! I've found out that the catchError method gets called, BUT after almost 2 minutes, which is way too slow... So I will implement a timeout.
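A sketch of what that could look like with RxJS (assuming the `timeout` operator is imported from `rxjs/operators`; the 10-second value is arbitrary):

```
return from(this.storage.get(ACCESS_TOKEN)).pipe(
    switchMap(token => {
        let options = this.auth.addToken(token)
        return this.http.get(API_URL + "jobs", options)
    }),
    timeout(10000), // fail fast instead of hanging for minutes
    map(res => {
        this.storage.set(JOBS, res)
        return res
    }),
    catchError(() => from(this.storage.get(JOBS)))
)
```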
Thanks
flixoflax | This helped me:
```
yum update -y libstdc++.x86_64
yum install libstdc++.i686
``` |
20730 | For example, this is the code that I am using:
```
String commandString = "UPDATE Members SET UserName = @newName , AdminLevel = @userLevel WHERE UserID = @userid";
using (SqlConnection conn = new SqlConnection(ConfigurationManager.ConnectionStrings["sqlconnectionstring"].ConnectionString))
{
SqlCommand cmd = new SqlCommand(commandString, conn);
cmd.Parameters.Add("@newName", newName);
cmd.Parameters.Add("@userLevel", userLevel);
cmd.Parameters.Add("@userid", userid);
conn.Open();
cmd.ExecuteReader();
Reader.Close();
}
``` | That code looks fine. Parameterisation is the way to go, as opposed to concatenating user-supplied values in an adhoc SQL statement which can open you up to sql injection attacks. This can also help with execution plan reuse.
The only thing I'd add is that I prefer to explicitly define the data type and sizes of the parameters. For example, if you don't, all string values will get passed in to the database as NVARCHAR instead of VARCHAR. Hence I like to be explicit.
21229 | I am having issues trying to run VirtualBox; what are the "appropriate headers"?
```
WARNING: The character device /dev/vboxdrv does not exist.
Please install the virtualbox-dkms package and the appropriate
headers, most likely linux-headers-generic.
You will not be able to start VMs until this problem is fixed.
```
I have installed `virtualbox-dkms` package, but I have no idea about the headers.
Also when trying to run a VM
>
> Kernel driver not installed (rc=-1908)
>
>
> The VirtualBox Linux kernel driver (vboxdrv) is either not loaded or there is a permission problem with /dev/vboxdrv. Please install virtualbox-dkms package and load the kernel module by executing
>
>
>
> ```
> 'modprobe vboxdrv'
>
> ```
>
> as root. If it is available in your distribution, you should install the DKMS package first. This package keeps track of Linux kernel changes and recompiles the vboxdrv kernel module if necessary.
>
>
> where: `suplibOsInit` what: `3 VERR_VM_DRIVER_NOT_INSTALLED (-1908)` - The support driver is not installed. On Linux, open returned ENOENT.
>
>
>
I cannot run `modprobe vboxdrv` (yes, I run it with `sudo`)
```
modprobe: ERROR: could not insert 'vboxdrv': Operation not permitted
```
I think this is a common question but cannot find answers for Ubuntu 16.04, just outdated ones.
**UPDATE**
`linux-headers-generic` is installed but it still doesn't work. | Did you try:
```
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install --reinstall virtualbox-dkms
```
Hope this will help. |
21486 | I tried to start the server from the MySQL application (Startup/Shutdown),
but I am getting the error: MySQL cannot connect to localhost (10061), code 2003.
After that I tried to start it from the command line, but that is also not working:
```none
C:\> "C:\Program Files\MySQL\MySQL Server 8.0\bin\mysqld"
mysqld: Can't change dir to 'C:\Program Files\MySQL\MySQL Server 8.0\data\' (OS errno 2 - No such file or directory)
2018-09-18T07:03:05.833751Z 0 [System] [MY-010116] [Server] C:\Program Files\MySQL\MySQL Server 8.0\bin\mysqld (mysqld 8.0.12) starting as process 1472
2018-09-18T07:03:05.841119Z 0 [Warning] [MY-010091] [Server] Can't create test file C:\Program Files\MySQL\MySQL Server 8.0\data\LAPTOP-7VFDP9Q8.lower-test
2018-09-18T07:03:05.841308Z 0 [Warning] [MY-010091] [Server] Can't create test file C:\Program Files\MySQL\MySQL Server 8.0\data\LAPTOP-7VFDP9Q8.lower-test
2018-09-18T07:03:05.842165Z 0 [ERROR] [MY-010172] [Server] failed to set datadir to C:\Program Files\MySQL\MySQL Server 8.0\data\
2018-09-18T07:03:05.856689Z 0 [ERROR] [MY-010119] [Server] Aborting
2018-09-18T07:03:05.858721Z 0 [System] [MY-010910] [Server] C:\Program Files\MySQL\MySQL Server 8.0\bin\mysqld: Shutdown complete (mysqld 8.0.12) MySQL Community Server - GPL.
``` | No such file or directory
That's a good hint. Check this path: 'C:\Program Files\MySQL\MySQL Server 8.0\data\'.
Maybe it does not exist. If so, just create the folder.
If the folder already exists, check the datadir setting in your my.ini.
Maybe you need quotes ("")
For example - check my my.ini [![enter image description here](https://i.stack.imgur.com/20jOw.png)](https://i.stack.imgur.com/20jOw.png) |
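For reference, a minimal sketch of the relevant my.ini section (the path below is just the default install location; point it at wherever your data directory actually lives):

```
[mysqld]
# assumed default location - change this to your real data directory
datadir="C:/Program Files/MySQL/MySQL Server 8.0/Data"
```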
21745 | Using Boost.Python, how can I define a module inside another (or rather, as if it were located in a folder)? | Solution (without altering the indexes on my tables):
```
SELECT df.*,
(
SELECT dfa.file_archive_id
FROM dca_file_archive dfa
WHERE df.file_id = dfa.file_id
ORDER BY dfa.file_archive_version desc LIMIT 1
) as file_archive_id,
(
SELECT dfa.file_archive_version
FROM dca_file_archive dfa
WHERE df.file_id = dfa.file_id
ORDER BY dfa.file_archive_version desc LIMIT 1
) as file_archive_version
FROM dca_file df
```
Both tables having ~16k rows, this statement takes 0.9 seconds to perform, which is 120x faster than the first join solution.
I know this is not the finest you can do with SQL.
21803 | I have the following DataFrame:
```
Stint Year ID Data1 Data2 Team
1 2010 A 10 1 SFN
1 2011 A 10 1 SFN
1 2013 A 10 1 SFN
2 2013 A 10 1 ATL
1 1922 B 10 1 ARI
1 1923 B 10 1 ARI
1 1924 B 10 1 ARI
```
I'm trying to return a new DataFrame which sums up values in the Data1 and Data2 columns for identical years. I would like the DataFrame above to ultimately look like this:
```
Year ID Data1 Data2
2010 A 10 1
2011 A 10 1
2013 A 20 2
1922 B 10 1
1923 B 10 1
1924 B 10 1
```
I've messed around with some groupby functions, but I'm having trouble getting the proper DataFrame structure.
Thanks! | ### `groupby` with `as_index=False`
Will not include grouped columns in a new index
```
df.groupby(['Year', 'ID'], as_index=False)[['Data1', 'Data2']].sum()
Year ID Data1 Data2
0 1922 B 10 1
1 1923 B 10 1
2 1924 B 10 1
3 2010 A 10 1
4 2011 A 10 1
5 2013 A 20 2
``` |
21813 | I have this query:
```
SELECT queue.first_dealer, queue.second_dealer, queue.status
FROM queue
WHERE queue.status = 3
AND queue.first_dealer = $user_id
OR queue.second_dealer = $user_id
AND queue.status = 3
ORDER BY id DESC
LIMIT 5
```
But sometimes the first\_dealer or the second\_dealer can be the same; I want to skip to the next record if they are. | This is the line you have to comment out:
```
var JTooltips = new Tips($$('.hasTip'), { maxTitleChars: 50....
```
Joomla 2.5 /libraries/joomla/html/\*/behavior.php
Line xxx comment it out
Joomla 3.x /libraries/cms/html/behavior.php
Line 287 comment it out |
22457 | I've noticed a new DropArea component in Qt5. I'm trying to drag a file from Finder (Mac), but only the onEntered method is called.
```
import QtQuick 2.0
Rectangle {
id: background;
color: "white";
width: 300;
height: 300;
DropArea {
id: dropArea;
anchors.fill: parent;
onEntered: {
background.color = "gray";
drag.accept (Qt.CopyAction);
console.log("onEntered");
}
onDropped: {
console.log ("onDropped");
}
onExited: {
            background.color = "white";
console.log ("onExited");
}
}
}
```
and here is window creation code:
```
int main(int argc, char *argv[])
{
QApplication a(argc, argv);
QQuickView qmlView;
qmlView.setGeometry(0, 200, 600, 400);
qmlView.setResizeMode (QQuickView::SizeRootObjectToView);
qmlView.setSource(QUrl::fromLocalFile("/Users/ivann/Projects/QtGuiTestApp/testView.qml"));
qmlView.show();
return a.exec();
}
```
Am I missing something? | Seems to be a Mac-specific issue (it works as expected on Linux at least). Filed a bug report with Nokia: <https://bugreports.qt.io/browse/QTBUG-27125>
22639 | The schema:
(psuedocode)
I have a bean, called BaseEntity...
```
@Entity
class BaseEntity {
@OneToMany @CascadeType.ALL
List [Property] properties;
    // angle brackets omitted so the Stack Overflow editor shows this properly
}
```
Property is another bean...
```
@Entity
class Property {
@ManyToOne
Category category;
@OneToOne
Value value;
}
```
Value is actually an abstract class with Inheritence.SingleTable and subclasses like NumericalValue and DateValue, etc, as well as (in the abstract Value class) a @ManyToOne BaseType type.
### The goal:
I am trying to write a query that selects BaseEntity objects that have a Property that has a Category of a certain name, and select several of them, getting objects that have any of the given properties and getting null in fields that don't exist.
### The attempt:
```
select entity.id as id, foo as foo, bar as bar
from BaseEntity entity, Value foo, Value bar
where foo in (select p.value from Property p where p in elements(entity.properties) and p.category.name = 'FOO')
or bar in (select p.value from Property p where p in elements(entity.properties) and p.category.name = 'BAR')
```
This query DOES run. Currently there is one BaseEntity in the database that matches, and I get it many times over with the correct result for foo, which it does contain, but that same entity appears over and over again with many different values in the bar field.
Also, it takes like FIVE MINUTES to run and holds up everyone else using the database.
Ideas:
Of course I have considered just using some kind of distinct, but that doesn't address the extreme time it takes to run, and I just don't quite understand what's going on.
I was hoping you, my peers, could suggest a better query approach. Thank you so much!
---
I would have commented, but this code is too long for the comment box... I ran a query, this one is not so long that it hangs, but any more joins and it does hang instead of completing... this query takes MINUTES to execute.
I ran the query from my code and turned on show\_sql in the hibernate properties.
select baseentity0\_.entityId as col\_0\_0\_, property2\_.value\_valueId as col\_1\_0\_, property4\_.value\_valueId as col\_2\_0\_, property6\_.value\_valueId as col\_3\_0\_, property8\_.value\_valueId as col\_4\_0\_, property10\_.value\_valueId as col\_5\_0\_, value11\_.valueId as valueId9\_0\_, value12\_.valueId as valueId9\_1\_, value13\_.valueId as valueId9\_2\_, value14\_.valueId as valueId9\_3\_, value15\_.valueId as valueId9\_4\_, value11\_.type\_typeId as type6\_9\_0\_, value11\_.numericalValue as numerica3\_9\_0\_, value11\_.textValue as textValue9\_0\_, value11\_.dateValue as dateValue9\_0\_, value11\_.value\_entityId as value7\_9\_0\_, value11\_.DTYPE as DTYPE9\_0\_, value12\_.type\_typeId as type6\_9\_1\_, value12\_.numericalValue as numerica3\_9\_1\_, value12\_.textValue as textValue9\_1\_, value12\_.dateValue as dateValue9\_1\_, value12\_.value\_entityId as value7\_9\_1\_, value12\_.DTYPE as DTYPE9\_1\_, value13\_.type\_typeId as type6\_9\_2\_, value13\_.numericalValue as numerica3\_9\_2\_, value13\_.textValue as textValue9\_2\_, value13\_.dateValue as dateValue9\_2\_, value13\_.value\_entityId as value7\_9\_2\_, value13\_.DTYPE as DTYPE9\_2\_, value14\_.type\_typeId as type6\_9\_3\_, value14\_.numericalValue as numerica3\_9\_3\_, value14\_.textValue as textValue9\_3\_, value14\_.dateValue as dateValue9\_3\_, value14\_.value\_entityId as value7\_9\_3\_, value14\_.DTYPE as DTYPE9\_3\_, value15\_.type\_typeId as type6\_9\_4\_, value15\_.numericalValue as numerica3\_9\_4\_, value15\_.textValue as textValue9\_4\_, value15\_.dateValue as dateValue9\_4\_, value15\_.value\_entityId as value7\_9\_4\_, value15\_.DTYPE as DTYPE9\_4\_ from BaseEntity baseentity0\_ inner join BaseEntity\_Property properties1\_ on baseentity0\_.entityId=properties1\_.BaseEntity\_entityId inner join Property property2\_ on properties1\_.properties\_propertyId=property2\_.propertyId inner join Value value11\_ on property2\_.value\_valueId=value11\_.valueId inner join BaseEntity\_Property properties3\_ on baseentity0\_.entityId=properties3\_.BaseEntity\_entityId inner join Property property4\_ on properties3\_.properties\_propertyId=property4\_.propertyId inner join Value value12\_ on property4\_.value\_valueId=value12\_.valueId inner join BaseEntity\_Property properties5\_ on baseentity0\_.entityId=properties5\_.BaseEntity\_entityId inner join Property property6\_ on properties5\_.properties\_propertyId=property6\_.propertyId inner join Value value13\_ on property6\_.value\_valueId=value13\_.valueId inner join BaseEntity\_Property properties7\_ on baseentity0\_.entityId=properties7\_.BaseEntity\_entityId inner join Property property8\_ on properties7\_.properties\_propertyId=property8\_.propertyId inner join Value value14\_ on property8\_.value\_valueId=value14\_.valueId inner join BaseEntity\_Property properties9\_ on baseentity0\_.entityId=properties9\_.BaseEntity\_entityId inner join Property property10\_ on properties9\_.properties\_propertyId=property10\_.propertyId inner join Value value15\_ on property10\_.value\_valueId=value15\_.valueId, Category category16\_, Category category17\_, Category category18\_, Category category19\_, Category category20\_ where property2\_.category\_categoryId=category16\_.categoryId and property4\_.category\_categoryId=category17\_.categoryId and property6\_.category\_categoryId=category18\_.categoryId and property8\_.category\_categoryId=category19\_.categoryId and property10\_.category\_categoryId=category20\_.categoryId and category16\_.name='Sample Name / Strain' 
and category17\_.name='Item #' and category18\_.name='THC\_Result' and category19\_.name='CBD\_Result' and category20\_.name='CBN\_Result'
---
Well, I wanted to follow up and say that what worked wonderfully with two join statements is absurdly sluggish with five.
```
select
    entity.id as entityId,
    strain.value as name,
    item.value as itemNum,
    thc.value as THC,
    cbd.value as CBD,
    cbn.value as CBN
from BaseEntity as entity
join entity.properties as strain
join entity.properties as item
join entity.properties as thc
join entity.properties as cbd
join entity.properties as cbn
where strain.category.name = 'Sample Name / Strain'
and item.category.name = 'Item #'
and thc.category.name = 'THC_Result'
and cbd.category.name = 'CBD_Result'
and cbn.category.name = 'CBN_Result'
```
Any suggestions on a better way to do it that will be faster with my stupid schema? | The performance problem looks like it is in your sub selects.
What about breaking it up a bit
```
select entity from BaseEntity as entity join entity.properties as property where
property.category.name in ( 'FOO','BAR' )
```
This would get you the list of BaseEntities with a property of FOO or BAR. Then, if you want to limit the collections of baseEntity.properties, use a filter or re-query to get the properties.
```
from Property property where property.category.name in ( 'FOO', 'BAR' )
and property.baseEntity.priKey = :priKey
``` |
22798 | I want to log out from Angular using the POST method; here is my code:
```
logout() {
const url = 'http://localhost:8181/user/logout';
const xToken = localStorage.getItem('xAuthToken');
const basicHeader = 'Basic ' + localStorage.getItem('credentials');
const headers = new Headers({
'x-auth-token': xToken,
'Authorization': basicHeader
});
// return this.http.get(url, { headers: headers }); // This will work
return this.http.post(url, { headers: headers }); // This will generate error
}
```
And this my backend:
```
@RequestMapping("/user/logout")
public ResponseEntity<String> logout(){
SecurityContextHolder.clearContext();
return new ResponseEntity<String>("Logout Successfully!", HttpStatus.OK);
}
```
The weird thing is that the code above works with `this.http.get` but generates the error below with `this.http.post`. Here is the error with `this.http.post`:
```
POST http://localhost:8181/user/logout 401
```
If I modify code using HttpClient, like this:
```
import { Injectable } from '@angular/core';
import { Http, Headers } from '@angular/http';
import { HttpClient } from '@angular/common/http';
@Injectable({
providedIn: 'root'
})
export class LoginService {
constructor(private http: HttpClient) {
}
sendCredential(username: string, password: string) {
let url = "http://localhost:8181/token";
let encodedCredentials = btoa(username + ":" + password);// encode in base64 to send a token
let basicHeader = "Basic " + encodedCredentials;
let headers = new Headers({
'Content-Type': 'application/x-www-form-urlencoded',
'Authorization': basicHeader
})
// send credential method when login component
return this.http.get(url, { headers: headers }); // Error at this line
}
checkSession() {
const url = 'http://localhost:8181/checkSession';
const xToken = localStorage.getItem('xAuthToken');
const basicHeader = 'Basic ' + localStorage.getItem('credentials');
const headers = new Headers({
'x-auth-token': xToken,
'Authorization': basicHeader
});
return this.http.get(url, { headers: headers }); // Error at this line
}
logout() {
const url = 'http://localhost:8181/user/logout';
const xToken = localStorage.getItem('xAuthToken');
const basicHeader = 'Basic ' + localStorage.getItem('credentials');
const headers = new Headers({
'x-auth-token': xToken,
'Authorization': basicHeader
});
return this.http.get(url, { headers: headers }); // Error at this line
}
}
```
Then I get error message:
```
(property) headers?: HttpHeaders | {
[header: string]: string | string[];
}
Type 'Headers' is not assignable to type 'HttpHeaders | { [header: string]: string | string[]; }'.
Type 'Headers' is not assignable to type '{ [header: string]: string | string[]; }'.
Index signature is missing in type 'Headers'.ts(2322)
http.d.ts(1086, 9): The expected type comes from property 'headers' which is declared here on type '{ headers?: HttpHeaders | { [header: string]: string | string[]; }; observe?: "body"; params?: HttpParams | { [param: string]: string | string[]; }; reportProgress?: boolean; responseType?: "json"; withCredentials?: boolean; }'
```
at line `return this.http.get(url, { headers: headers });`
Does anyone know how to fix it? | To read the data you should do something like this:
```
from os import listdir
from os.path import isfile, join
import re
import matplotlib.pyplot as plt
mypath = '.' # edit with the path to your data
files = [f for f in listdir(mypath) if isfile(join(mypath, f))]
x = []
y = []
for file in files:
label = file.split('_')[0] # assuming your img is named like this "eight_1.png" you want to get the label "eight"
y.append(label)
    img = plt.imread(join(mypath, file))  # build the full path; listdir returns bare file names
x.append(img)
```
Then you will need to manipulate x and y a little before giving them to scikit-learn, but you should be fine.
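For instance, a rough sketch of that manipulation (assuming every image has the same dimensions):

```
import numpy as np
# flatten each image into a 1-D feature vector so scikit-learn can consume it
x = np.array(x).reshape(len(x), -1)
y = np.array(y)
```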
23295 | What is typically regarded as more Pythonic/better/faster to use, the reverse method or the reversed built-in function?
Both in action:
```
_list = list(xrange(4))
print _list
rlist = list(reversed(_list))
print rlist
_list.reverse()
print _list
Without knowing real stats about performance, `_list.reverse()` modifies the list itself, whereas `reversed(_list)` returns an iterator ready to traverse the list in reversed order. That's a big difference in itself.
If that's not a problem, `object.reverse()` seems more readable to me, but maybe you have specific speed requirements. And unless `reverse()` sits in the part of the software that actually consumes the resources, I wouldn't bother (as a general rule of thumb).
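A small illustration of that difference:

```
_list = [0, 1, 2, 3]

it = reversed(_list)   # returns an iterator; _list is untouched
print(list(it))        # [3, 2, 1, 0]
print(_list)           # [0, 1, 2, 3]

_list.reverse()        # reverses in place and returns None
print(_list)           # [3, 2, 1, 0]
```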
23325 | I've been using Mint for 2 years but am still new to Linux. I had the default QGIS version (2.0 I think) installed and it worked great, but I tried to upgrade and lost even that. After experimenting I finally got rid of these 2 dependency problems
cpp-4.8 : Depends: gcc-4.8-base (= 4.8.4-2ubuntu1~14.04.3) but 4.8.4-2ubuntu1~14.04.4 is to be installed
gcc-4.8 : Depends: gcc-4.8-base (= 4.8.4-2ubuntu1~14.04.3) but 4.8.4-2ubuntu1~14.04.4 is to be installed
but when I use sudo apt-get install qgis
I get the error:
```
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
qgis : Depends: qgis-providers (= 2.14.22+dfsg-1~trusty1) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
```
I don't know how to install 2.14.22+dfsg-1~trusty1, if that's what I'm supposed to do. Since I got rid of the cpp and gcc packages, I don't see anything else still broken in the Synaptic package manager.
I'm lost. My system is: ThinkPad T410, i5 2.4 GHz, 4 GB
Dual boot Mint Cinnamon 17.3 64-bit, Windows 7
kernel 3.13.0-37-generic x86\_64 (64 bit)
I'm really not sure where/how to add additional information that Andre requested so I'll put it here. After adding the additional source, key, and trying to install qgis I got: sudo apt-get install qgis
[sudo] password for wayne:
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
qgis : Depends: libqgis-analysis2.18.17 but it is not going to be installed
Depends: libqgis-app2.18.17 but it is not going to be installed
Depends: libqgis-core2.18.17 but it is not going to be installed
Depends: libqgis-gui2.18.17 but it is not going to be installed
Depends: libqgis-networkanalysis2.18.17 but it is not going to be installed
Depends: python-qgis (= 1:2.18.17+20trusty-ubuntugis) but it is not going to be installed
Depends: qgis-providers (= 1:2.18.17+20trusty-ubuntugis) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
**When I ran** `sudo apt-get install qgis-providers` I got:
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
qgis-providers : Depends: libqgis-core2.18.17 but it is not going to be installed
Depends: libqgis-gui2.18.17 but it is not going to be installed
Depends: libspatialite5 (>= 4.0.0) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
**Then** just now I tried `sudo aptitude install qgis`
[sudo] password for wayne:
The following NEW packages will be installed:
libfcgi0ldbl{a} libgeos-3.4.2{a} libgeos-c1{a} libgsl0ldbl{a}
libjs-jquery{a} libjs-leaflet{a} libqgis-analysis2.18.17{a}
libqgis-app2.18.17{a} libqgis-core2.18.17{a} libqgis-customwidgets{a}
libqgis-gui2.18.17{a} libqgis-networkanalysis2.18.17{a}
libqgis-server2.18.17{a} libqgispython2.18.17{a} libqscintilla2-11{a}
libqscintilla2-l10n{a} libqwt5-qt4{a} libspatialindex3{a}
libspatialite5{a} python-dateutil{a} python-jinja2{a}
python-matplotlib{a} python-matplotlib-data{a} python-psycopg2{a}
python-pygments{a} python-pyparsing{a} python-pyproj{a}
python-pyspatialite{a} python-qgis{a} python-qgis-common{a}
python-qscintilla2{a} python-qt4-sql{a} python-shapely{a} python-tz{a}
qgis qgis-common{a} qgis-providers{a} qgis-providers-common{a}
qt4-designer{a}
The following packages are RECOMMENDED but will NOT be installed:
javascript-common liblwgeom-dev libqt4-dev python-egenix-mxdatetime
python-tk qgis-plugin-grass qgis-provider-grass
0 packages upgraded, 39 newly installed, 0 to remove and 22 not upgraded.
Need to get 64.2 MB of archives. After unpacking 245 MB will be used.
The following packages have unmet dependencies:
libgeos-c1v5 : Breaks: libgeos-c1 (< 3.4.2-8~) but 3.4.2-4ubuntu1 is to be installed.
The following actions will resolve these dependencies:
```
Keep the following packages at their current version:
```
1) libgeos-c1 [Not Installed]
2) libqgis-analysis2.18.17 [Not Installed]
3) libqgis-app2.18.17 [Not Installed]
4) libqgis-core2.18.17 [Not Installed]
5) libqgis-customwidgets [Not Installed]
6) libqgis-gui2.18.17 [Not Installed]
7) libqgis-networkanalysis2.18.17 [Not Installed]
8) libqgis-server2.18.17 [Not Installed]
9) libqgispython2.18.17 [Not Installed]
10) libspatialite5 [Not Installed]
11) python-pyspatialite [Not Installed]
12) python-qgis [Not Installed]
13) python-qgis-common [Not Installed]
14) python-shapely [Not Installed]
15) qgis [Not Installed]
16) qgis-providers [Not Installed]
Accept this solution? [Y/n/q/?]
Sounds tempting to accept, but I will wait for advice
OK, so I said yes and got this:
>
> No packages will be installed, upgraded, or removed. 0 packages
> upgraded, 0 newly installed, 0 to remove and 22 not upgraded. Need to
> get 0 B of archives. After unpacking 0 B will be used.
>
>
>
Then I reinstalled the unstable ppa, updated the cache and got:
>
> W: Failed to fetch <http://packages.domain.com/dists/trusty/InRelease>
>
>
> W: Failed to fetch <http://packages.domain.com/dists/trusty/Release.gpg>
> Unable to connect to packages.domain.com:http:
>
>
> W: Some index files failed to download. They have been ignored, or old
> ones used instead.
>
>
>
I have no idea where
>
> <http://packages.domain.com/dists/trusty/InRelease>
>
>
>
came from. But I got rid of it, and hallelujah! I am back where I started before I tried to upgrade. I have 2.18 working again. I think I'll just be happy with 2.18. I don't want to go through this again, and there's nothing I need in later versions. | Strange. I just updated from 2.14.11 to 2.18.17 on Mint 17.3 using `https://qgis.org/ubuntugis-ltr trusty main` as an additional repo, without errors.
Disabling the qgis.org repo, and using only the ubuntugis unstable ppa throws the same error as you got. Seems to be a packaging error.
The dependent package is available for xenial and newer from the main ubuntu repo, not available on my Mint 17.3 (trusty), and not needed.
---
**UPDATE**
The devs have fixed your problem with version `2.14.22+dfsg-1~trusty3`.
23882 | I want to calculate $\mathbb{E} \left[\left(\int\_0^tB\_s\text{d}B\_s\right)^3\right]$ where $B\_t$ is a standard Brownian motion. Using Ito's formula for $f:\mathbb{R}\rightarrow\mathbb{R}$ with $f(x)=x^2$ we can find that $\int\_0^tB\_s\text{d}B\_s=\dfrac{B^2\_t}{2}-\dfrac{t}{2}$.
Then:
$\mathbb{E} \left[\left(\int\_0^tB\_s\text{d}B\_s\right)^3\right]=\frac{1}{8}\mathbb{E}\left[\left(B^2\_t-t\right)^3\right]$ and $\mathbb{E}[(B^2\_t-t)^3]=\mathbb{E}[B^6\_t-3tB^4\_t+3t^2B^2\_t-t^3]=15t^3-9t^3+3t^3-t^3=8t^3$
However, I have seen in another question that $\mathbb{E} \left[\left(\int\_0^tB\_s\text{d}B\_s\right)^3\right]=0$. Am I doing something wrong? | No, I think you are right. The formula
$$\int\_0^t B\_s \, dB\_s = \frac{1}{2} (B\_t^2-t)$$
can be found in almost all books about SDE's. And you could also calculate the given moments using that
$$B\_t^2 -t = t \cdot \bigg( X\_t^2-1 \bigg)$$
where $X\_t := \frac{B\_t}{\sqrt{t}}$. $X\_t^2$ is $\chi\_1^2$-distributed and the third centered moment of a $\chi\_1^2$-random variable is equal to 8 (see [here](http://mathworld.wolfram.com/Chi-SquaredDistribution.html).) Hence
$$\mathbb{E}((B\_t^2-t)^3)=t^3 \cdot \mathbb{E}((X\_t^2-1)^3)=8t^3$$
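For completeness, the value 8 can be checked directly from the raw moments $\mathbb{E}(Y^n)=(2n-1)!!$ of the $\chi\_1^2$-distributed variable $Y:=X\_t^2$:
$$\mathbb{E}((X\_t^2-1)^3)=\mathbb{E}(Y^3)-3\,\mathbb{E}(Y^2)+3\,\mathbb{E}(Y)-1=15-9+3-1=8$$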
---
Remark: If you consider a process $X\_t := \int\_0^t \sigma(s) \, dB\_s$ where $\sigma$ is a deterministic(!) coefficient, then $X\_t$ is a centered normal random variable and in this case $\mathbb{E}(X\_t^3)=0$ holds. |
24132 | I need to implement regex validation for a value that will be used on my server side to get data where a certain timestamp is older (smaller) than `now() - interval 'myValue'`.
The PostgreSQL interval syntax is explained [here](https://www.postgresql.org/docs/9.1/functions-datetime.html); in short it can have values like `2 days`, `3 years`, `12 hours`, but you can also combine several values, like `2 days 6 hours 30 minutes`, etc.
I currently have a regex `/^\d+\s(seconds?|minutes?|hours?|days?|weeks?|months?|years?)$/i` that accepts only one value (e.g. `2 days`), but can't figure out how to allow multiple values, and set a rule that a certain string from this group can only be repeated once or not at all.
This regex `/^\d+\s(seconds?|minutes?|hours?|days?|weeks?|months?|years?)(\s\d+\s(seconds?|minutes?|hours?|days?|weeks?|months?|years?))*$/i` allows nesting but also allows repetition of values e.g. `2 days 12 hours 6 hours 2 minutes` which will result in a fatal error in pSQL query.
I tried restricting repetition of values in this group with `\1` and `{0,1}` combination of regex operators but I just can't nail it precisely enough.
NOTE: A regex is unfortunately the only way I can validate this value, since I don't have access to the server-side controller which receives this value, nor to the client-side frontend of this form. I can't just throw exceptions or skip the query, because it is part of an important cron job and must be stable at all times.
(All I have access to is json schema of this value, and therefore can only define regex pattern for it)
Any help is appreciated, thanks. | You can use
```sql
^(?!.*(second|minute|hour|day|week|month|year).*\1)\d+\s+(?:second|minute|hour|day|week|month|year)s?(?:\s+\d+\s+(?:second|minute|hour|day|week|month|year)s?)*$
```
See the [regex demo](https://regex101.com/r/htldlR/2)
*Details*
* `^` - start of string
* `(?!.*(second|minute|hour|day|week|month|year).*\1)` - no `second`, `minute`, `hour`, `day`, `week`, `month` or `year` string repetition is allowed in the whole string
* `\d+\s+(?:second|minute|hour|day|week|month|year)s?` - 1 or more digits, one or more whitespaces, then either `second`, `minute`, `hour`, `day`, `week`, `month` or `year`, and then an optional `s` letter
* `(?:\s+\d+\s+(?:second|minute|hour|day|week|month|year)s?)*` - zero or more repetition of one or more whitespaces followed with the pattern described above
* `$` - end of string. |
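If it helps, here is a quick sanity check of the pattern (a hypothetical standalone snippet, shown in JavaScript because its regex flavor supports the backreference inside the lookahead; it is not tied to your JSON schema tooling):

```
// Hypothetical test harness - just to show which strings the pattern accepts.
const re = /^(?!.*(second|minute|hour|day|week|month|year).*\1)\d+\s+(?:second|minute|hour|day|week|month|year)s?(?:\s+\d+\s+(?:second|minute|hour|day|week|month|year)s?)*$/i;

console.log(re.test("2 days 6 hours 30 minutes")); // true
console.log(re.test("2 days 12 hours 6 hours"));   // false - "hour" repeated
console.log(re.test("3 years"));                   // true
```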
24284 | Is there a more efficient way to convert a byte array to an Int16 array? Or is there a way to use Buffer.BlockCopy to copy every two bytes into an Int16 array?
```
public static int[] BYTarrToINT16arr(string fileName)
{
try
{
int bYte = 2;
byte[] buf = File.ReadAllBytes(fileName);
int bufPos = 0;
int[] data = new int[buf.Length/2];
byte[] bt = new byte[bYte];
for (int i = 0; i < buf.Length/2; i++)
{
Array.Copy(buf, bufPos, bt, 0, bYte);
bufPos += bYte;
Array.Reverse(bt);
data[i] = BitConverter.ToInt16(bt, 0);
}
return data;
}
catch
{
return null;
}
}
``` | Use a `FileStream` and a `BinaryReader`. Something like this:
```
var int16List = new List<Int16>();
using (var stream = new FileStream(filename, FileMode.Open))
using (var reader = new BinaryReader(stream))
{
try
{
while (true)
int16List.Add(reader.ReadInt16());
}
catch (EndOfStreamException ex)
{
// We've read the whole file
}
}
return int16List.ToArray();
```
You can also read the whole file into a `byte[]`, and then use a `MemoryStream` instead of the `FileStream` if you want.
If you do this, then you'll also be able to size the `List` appropriately up front and make it a bit more efficient; see the sketch below.
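A rough sketch of that variant (using the same `fileName` input as the question; like the loop above, it reads little-endian Int16 values):

```
byte[] bytes = File.ReadAllBytes(fileName);
var int16List = new List<Int16>(bytes.Length / 2);   // sized up front
using (var reader = new BinaryReader(new MemoryStream(bytes)))
{
    for (int i = 0; i < bytes.Length / 2; i++)
        int16List.Add(reader.ReadInt16());
}
return int16List.ToArray();
```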
24383 | I'm new here and I saw one can buy them e.g. in Germany at bitcoin.de via an online transaction, by paying Euros. Is the same possible in Romania (do you need a €-bank account for that?) and if so, is there some (online) exchange service best suited for that? | There are no exchanges yet that accept RON (Romanian Leu).
There are individuals in Romania who trade bitcoins for face-to-face cash trade:
* <https://localbitcoins.com/country/RO> <-- But be careful of the listings who are not local, wanting to trade online instead.
Also there is the #Bitcoin-otc marketplace and the #bitcoin-otc-eu IRC channel in which you might find someone willing to trade.
If you have access to purchase UKash vouchers with cash, there are several services such as Mercabit.eu, VirWoX, and Bitcoin Nordic who will accept that voucher for the purchase of Bitcoins.
Otherwise, if you have the ability to send a SEPA transfer, there are many exchanges you can use, include BITSTAMP:
* <http://en.bitcoin.it/wiki/Buying_bitcoins> |
24549 | I'm trying to compile a C code that contains the following lines:
```
//...
#ifdef CPU_GPREG_NAMES
static const char *reg_names[] = CPU_GPREG_NAMES;
#endif
//...
```
where the macro `CPU_GPREG_NAMES` is defined as:
```
#define CPU_GPREG_NAMES \
"zero", "at", "v0", "v1", "a0", "a1", "a2", "a3", \
"t0", "t1", "t2", "t3", "t4", "t5", "t6", "t7", \
"s0", "s1", "s2", "s3", "s4", "s5", "s6", "s7", \
"t8", "t9", "k0", "k1", "gp", "sp", "fp", "ra" \
```
However there seems to be a problem with the assignment in the first code snippet above, since I'm getting the following error on that specific line:
```
error: invalid initializer
error: expected identifier or '(' before string constant
```
**What is the cause of this error?** | You forgot to put the curly braces:
```
#define CPU_GPREG_NAMES { \
"zero", "at", "v0", "v1", "a0", "a1", "a2", "a3", \
"t0", "t1", "t2", "t3", "t4", "t5", "t6", "t7", \
"s0", "s1", "s2", "s3", "s4", "s5", "s6", "s7", \
"t8", "t9", "k0", "k1", "gp", "sp", "fp", "ra" \
}
```
or
```
static const char *reg_names[] = {CPU_GPREG_NAMES};
``` |
25284 | ```
$ perl --version
This is perl, v5.10.1 (*) built for x86_64-linux-gnu-thread-multi
$ echo -e "foo\nbar" > baz.txt
$ perl -p -e 's/foo\nbar/FOO\nBAR/m' baz.txt
foo
bar
```
How can I get this replacement to work? | You can use the `-0` switch to change the input separator:
```
perl -0777pe 's/foo\nbar/FOO\nBAR/' baz.txt
```
`-0777` sets the separator to `undef`, `-0` alone sets it to `\0` which might work for text files not containing the null byte.
Note that `/m` is needless as the regex does not contain `^` nor `$`. |
25405 | I started gathering some information about Iceland reading Travel.SE (what else?) and I was quite surprised I didn't find a single question about moving around Iceland using public transport; a car seems mandatory.
How widespread is the public transportation network in July? Would it be viable to rely upon it to move around the Ring Road and the Golden Circle? Is there a bus (or a train?) which goes into the central area of Iceland? | It definitely [looks like](http://www.thrillophilia.com/blog/camps-of-india-50-best-one/) almost anyone can set up camp almost anywhere in India.
If I were a non-local camping in India, I would definitely swing by the local police station and inquire if it was safe to stay put where I wanted to camp. This way, I'd know a)if the area is safe or not b) at least some authority is informed of my whereabouts. If there are women in the group, additional police patrolling can also be requested.
From what I know, one is fairly free to roam in India.
25687 | I have a button that is hidden from the user, but I want it to be clicked by default. With a checkbox, if you want it to be checked by default, you add the checked attribute. Is there any way to do the same thing with a button? Here is my code:
```
<input id="submit" type="hidden" value="Reverse Geocode" autofocus>
``` | You can do it as follows:
```
<script type="text/javascript">
document.getElementById("submit").click();
</script>
``` |
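If the script runs before that element exists in the DOM, you may need to wait for the page to load first, for example:

```
<script type="text/javascript">
window.addEventListener("load", function () {
    document.getElementById("submit").click();
});
</script>
```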
25696 | I have an android application A and android application B.
I want to call application B from application A and get some data from application B and return back to same application A which was opened previously.
Application A
```
Intent app = new Intent(Intent.CATEGORY_LAUNCHER);
app.setComponent(new ComponentName("com.abc.applicationA","com.abc.applicationA.MainActivity"));
startActivity(app);
```
Application B
```
Intent j = new Intent(Intent.CATEGORY_LAUNCHER);
j.setComponent(new ComponentName("com.abc.applicationB","com.abc.applicationB.MainActivity"));
j.putStringArrayListExtra("Data", data); //data is array list
startActivity(j);
```
But with the above implementation, application A launches application B, and then when application B launches application A, a second instance of application A is launched. So when I go back I get application A (called the second time), then application B, and then application A (called the first time).
My intention is that application A is launched, then goes to application B, gets some data, and comes back to the same application A that was launched before. | When you set your blog page via Settings -> Reading, WordPress skips the template you chose for that page and will load index.php if there isn't a specific category template (see more about the template hierarchy: <https://developer.wordpress.org/themes/basics/template-hierarchy/>).
So if you want your index-blog.php to be the template for your blog, you have to leave the 'Choose Posts Page' blank and only set your Blog page (Pages -> Blog) to use the blog template. |
25923 | I am trying to write a C++ program that takes the following inputs from the user to construct rectangles (between 2 and 5): height, width, x-pos, y-pos. All of these rectangles will exist parallel to the x and the y axis, that is all of their edges will have slopes of 0 or infinity.
I've tried to implement what is mentioned in [this](https://stackoverflow.com/questions/115426) question but I am not having very much luck.
My current implementation does the following:
```
// Gets all the vertices for Rectangle 1 and stores them in an array -> arrRect1
// point 1 x: arrRect1[0], point 1 y: arrRect1[1] and so on...
// Gets all the vertices for Rectangle 2 and stores them in an array -> arrRect2
// rotated edge of point a, rect 1
int rot_x, rot_y;
rot_x = -arrRect1[3];
rot_y = arrRect1[2];
// point on rotated edge
int pnt_x, pnt_y;
pnt_x = arrRect1[2];
pnt_y = arrRect1[3];
// test point, a from rect 2
int tst_x, tst_y;
tst_x = arrRect2[0];
tst_y = arrRect2[1];
int value;
value = (rot_x * (tst_x - pnt_x)) + (rot_y * (tst_y - pnt_y));
cout << "Value: " << value;
```
However, I'm not quite sure if (a) I've implemented the algorithm I linked to correctly, or (b) whether I'm interpreting it the right way at all.
Any suggestions? | It is easier to check whether one rectangle is completely outside the other, i.e. if it is either
on the left...
```
(r1.x + r1.width < r2.x)
```
or on the right...
```
(r1.x > r2.x + r2.width)
```
or on top...
```
(r1.y + r1.height < r2.y)
```
or on the bottom...
```
(r1.y > r2.y + r2.height)
```
of the second rectangle, it cannot possibly collide with it. So to have a function that returns a Boolean saying whether the rectangles collide, we simply combine the conditions with logical ORs and negate the result:
```
function checkOverlap(r1, r2) : Boolean
{
return !(r1.x + r1.width < r2.x || r1.y + r1.height < r2.y || r1.x > r2.x + r2.width || r1.y > r2.y + r2.height);
}
```
To get a positive result when the rectangles merely touch, we can change the "<" and ">" to "<=" and ">=".
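Since the question is in C++, here is roughly the same check as a small self-contained function (the `Rect` struct is just an assumed stand-in for the vertex arrays in the question):

```
struct Rect {
    int x, y, width, height;   // axis-aligned; (x, y) is one corner
};

// Returns true if r1 and r2 overlap (touching edges count as an overlap here).
bool checkOverlap(const Rect& r1, const Rect& r2) {
    return !(r1.x + r1.width  < r2.x || r2.x + r2.width  < r1.x ||
             r1.y + r1.height < r2.y || r2.y + r2.height < r1.y);
}
```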
26140 | How can I set the `innerHTML`, or the whole content of an HTML document using javascript?
For example my document would look like this:
```
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<meta http-equiv="Content-language" content="en"/>
<title>Webpage Generator</title>
<script type="text/javascript">
var newDocument = "<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" \n\t"http://www.w3.org/TR/html4/loose.dtd">\n<html>\n<head>\n\t<title>Greetings!</title>\n</head>\n<body>\n\t<p>Howdy!</p>\n</body>\n</html>";
document.innerHTML = newDocument;
</script>
</head>
<body>
</body>
</html>
```
But the browser would load the following HTML:
```
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<title>Greetings!</title>
</head>
<body>
<p>Howdy!</p>
</body>
</html>
``` | If you don't want to use innerHTML you could use `document.write(newDocument);`.
If the document hasn't completely loaded, you'll need to put `document.open()` as well (thanks bažmegakapa). |
27293 | When I try to add the image at the URL to a PDF file, the image comes out completely black.
But when I click the download PDF button again, the image gets added to the PDF correctly. Only
the first time I do it, the image comes out black.
```
function getBase64Image(url) {
alert(url);
var canvas = document.getElementById('myCanvas');
var context = canvas.getContext('2d');
var img = new Image();
img.src = url;
img.style.height ="181px";
img.style.width ="183px";
//img.crossOrigin ="Anonymous";
context.drawImage(img,0,0);
var dataURL = canvas.toDataURL("image/jpeg");
alert(dataURL);
document.body.appendChild(img);
var doc = new jsPDF('landscape');
doc.addImage(img,'JPEG',0,0,50,50);
doc.save('Saved.pdf');
}
getBase64Image("http://localhost:64931/jspdf/download.png");
``` | What happens when you change your code like this:
I changed JPEG to PNG; that worked for me.
```
function getBase64Image(url) {
alert(url);
var canvas = document.getElementById('myCanvas');
var context = canvas.getContext('2d');
var img = new Image();
img.src = url;
img.style.height ="181px";
img.style.width ="183px";
//img.crossOrigin ="Anonymous";
context.drawImage(img,0,0);
var dataURL = canvas.toDataURL("image/png");
alert(dataURL);
document.body.appendChild(img);
var doc = new jsPDF('landscape');
doc.addImage(img,'PNG',0,0,50,50);
doc.save('Saved.pdf');
}
getBase64Image("http://localhost:64931/jspdf/download.png");
``` |
27725 | I have a RecyclerView and I need some `if statement` in my Adapter to make the RecyclerView skip an item and not create it if one field in my ViewModel is empty. For example, I have a ViewModel with a title and a picture in it, so if the title is empty - do not create an item for it. So:
```
if (TextUtils.isEmpty(viewModel.getMessages().getTitle())) {
//do something?
}
```
This should be easy enough, but I just started practicing with RecyclerView :) | If you need to remove an item, use
```
list.remove(position)
adapter.notifyItemRemoved(position)
``` |
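Combining that with your empty-title check, a rough sketch (the `items` list is a hypothetical field holding the data behind your adapter):

```
// Inside your adapter: drop every item whose title is empty and notify the RecyclerView.
// Iterating backwards avoids index shifting after each removal.
for (int i = items.size() - 1; i >= 0; i--) {
    if (TextUtils.isEmpty(items.get(i).getMessages().getTitle())) {
        items.remove(i);
        notifyItemRemoved(i);
    }
}
```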