Q:
How to automate an installation of Phusion Passenger and Nginx?
When the command:
./passenger-install-nginx-module
is run, it asks a bunch of questions when logged in to the server.
The aim is to automate this process, how can this be done if it requires specific answers during the installation?
A:
Depending on the version of Phusion Passenger, it should either be possible to run
yes | passenger-install-nginx-module
(for version 2.0.x) or
passenger-install-nginx-module --auto
for versions greater than 2.1.
| {
"pile_set_name": "StackExchange"
} |
Q:
VS2010 repeatedly showing animated icon
Visual Studio 2010 is repeatedly showing this animated icon in the bottom right next to the "Ln" every 2 seconds. And even if VS is minimized, it makes my cursor briefly show an hourglass every time. Can someone tell me what this icon stands for?
Edit:
Showing the icon during a build also. Build icon is on left, mystery icon in middle.
A:
The icon seems to be related to Intellisense. I noticed that vcpkgsrv.exe was starting and exiting quickly over and over. Although I had a large project.sdf file, Intellisense was not working in my project.
I found this KB from Microsoft related to VS2010 Intellisense issues on XP (I'm running XP). When I installed the hotfix, the problem went away.
http://support.microsoft.com/kb/2526044
| {
"pile_set_name": "StackExchange"
} |
Q:
Treat nested std::arrays as a single flat array with chained .data()
Let's say I have this little fixed-dimension matrix class:
template<size_t M, size_t N>
struct MatMN {
    std::array<std::array<double, N>, M> rows;
    double* begin() { return rows.data()->data(); } // The scary part
    double* end() { return begin() + M*N; }
    // const iterators, etc.
};
and instead of using nested loops, I implement scalar multiplication (also equality testing, binary de/serialization, etc.) like so:
template<size_t M, size_t N>
MatMN<M, N> operator*(double scalar, MatMN<M, N> mat) {
    for (double& x_ : mat) { x_ *= scalar; }
    return mat;
}
Is it actually okay to treat nested std::arrays as a single flat C-style array by using .data()->data()?
Am I at risk of some strict-aliasing issue? Or maybe unexpected struct padding at the end of individual std::arrays (i.e. between matrix rows)? So far it's worked fine for me (with GCC), but I know that doesn't mean much in C++.
A:
Is it actually okay to treat nested std::arrays as a single flat C-style array by using .data()->data()?
No. std::array is allowed to have padding at the end. That means there could be a gap between where one inner array ends and the next begins in the nested structure. Getting a pointer the way you do would (if that padding is present) cause you to access the padding, giving you undefined results.
Instead of storing the matrix in a 2d std::array you should just use a 1d std::array. That way you can guarantee the elements are all next to each other in memory.
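A minimal sketch of that flat-storage approach (the accessor name at is illustrative, not from the original post):

#include <array>
#include <cstddef>

template<std::size_t M, std::size_t N>
struct MatMN {
    std::array<double, M * N> elems;  // one contiguous block, so no inner-array padding to worry about
    double& at(std::size_t i, std::size_t j) { return elems[i * N + j]; }  // row-major indexing
    double* begin() { return elems.data(); }
    double* end()   { return elems.data() + M * N; }
};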
| {
"pile_set_name": "StackExchange"
} |
Q:
Firefox cli save page
Is it possible to save a page with the Firefox CLI?
Something like:
firefox -new-tab http://google.com -save-page /path/
A:
I'm not aware of any really simple way to do this. You might consider looking into a browser automation tool like Selenium. Alternatively, a more general automation tool like Sikuli might be workable as well (this is actually likely to be easier than using Selenium, depending on exactly what you want to do).
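As a rough sketch of the Selenium route (Python bindings, with geckodriver assumed to be installed; the output path is illustrative):

from selenium import webdriver

driver = webdriver.Firefox()                 # launches a real Firefox instance
driver.get("http://google.com")              # load the page
with open("/path/page.html", "w", encoding="utf-8") as f:
    f.write(driver.page_source)              # write the rendered HTML to disk
driver.quit()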
| {
"pile_set_name": "StackExchange"
} |
Q:
C# Large JSON to string causes out of memory exception
I'm trying to download very large JSON file. However, I keep getting an error message:
"An unhandled exception of type 'System.OutOfMemoryException' occurred in mscorlib.dll"
{The function evaluation was disabled because of an out of memory exception.}
Any tips on how I can download this large JSON file? I have tried to use string and StringBuilder but no luck.
Here is my code:
public static string DownloadJSON(string url)
{
    try
    {
        String json = new WebClient().DownloadString(url); // This part fails!
        return json;
    }
    catch (Exception)
    {
        throw;
    }
}
I have created console application. I have tried this code with smaller JSON file and it worked.
My plan is to later split this large JSON file and put it into a database. However, I need to encode it before I can store it. I have not yet written the database part or anything else, because downloading this big JSON causes problems. I don't need it as a stream, but that was the example approach I used for encoding. I need to encode it because the data has special characters like å.
I also tried this, but I get the same problem:
var http = (HttpWebRequest)WebRequest.Create(url);
var response = http.GetResponse();
var stream = response.GetResponseStream();
var sr = new StreamReader(stream);
var content = sr.ReadToEnd();
A:
I assume you are dealing with a very large response. It is better to process it as a stream. Now to the point that causes the OutOfMemoryException:
In .NET the maximum size of any single object is 2 GB, even on a 64-bit machine. If your process is 32-bit, the practical limit is much lower.
In your case that rule is broken, so buffering the whole response into one string will not work. If the file is smaller than that limit, try building your code for 64 bit and it should give you the result.
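A minimal sketch of the streaming approach (using System.Net and System.IO; the target path is illustrative), which copies the response to disk in chunks instead of materializing one huge string:

public static void DownloadJsonToFile(string url, string targetPath)
{
    var request = (HttpWebRequest)WebRequest.Create(url);
    using (var response = request.GetResponse())
    using (var responseStream = response.GetResponseStream())
    using (var fileStream = File.Create(targetPath))
    {
        // Stream the body straight to a file; memory use stays small and constant.
        responseStream.CopyTo(fileStream);
    }
}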
| {
"pile_set_name": "StackExchange"
} |
Q:
air trapped in sink drain?
A modest sized object had fallen into my bathroom sink causing (I assume) the sink to drain slowly. That is, at first (after dropping the item) the sink seemed to drain fine but eventually it slowed to a bare minimum.
The sink is a pedestal sink and the drain pipes are PVC with threaded and compression joints, so I took out all the pipes from basin to floor, cleaned everything (obstruction and all) and put it back together. Everything was hand-tightened only: the only tool used was a toothbrush to scrub the parts clean.
Now the water drains better but only slightly. If there is still an obstruction, it would have to be in the floor, but we had no apparent problem before the one item I was able to extract. On the other hand, now, as the water slowly drains, I can hear a trickle in the drain pipes. As far as I can recall, this was not the case before I took it all apart. So I'm guessing that there is air bubble somewhere. Does that sound reasonable? And if so, what can I do to deal with it.
Note that this is not a new works project; I simply disassembled, cleaned and reassembled everything. I didn't add parts or have any parts left over. At worst, I might have changed the order of parts or changed the spans between adjustable, compression joints. But all the fittings and seals were put in place.
A:
There is probably soap and hair plugging the drain lower down. A small hand auger might be the best bet to clean it out. Chemical drain cleaners partially open the drain, then it plugs up again much more quickly, and you end up spending more on chemicals than the cost of the auger.
| {
"pile_set_name": "StackExchange"
} |
Q:
Searched / Tried a lot: Unable to add popup display delay
[Note: I am not a programming expert.]
I tried a lot and searched this platform but did not find any solution.
I need to add a 30-second delay before displaying a popup box. I took the script from here. The script has a fade-out time and it is working fine.
Check the script below... I replaced 'close()' with 'delay()', but it is not working.
<script type="text/javascript">
$(document).ready(function() {
    if($.cookie('the_cookie') != 1) { // If the_cookie is not set to 1, initialize it and trigger the popup
        $.cookie('the_cookie', '1', { expires: 1 }); // value in day(s) before the cookie expires
        $.fancybox(
            $("#popup").html(),
            {
                type : 'iframe',
                href : '/contact.php', // URL of the HTML page that will be loaded into the popup as an iframe
                maxWidth : 415,
                maxHeight : 475,
                fitToView : false,
                width : '90%',
                height : '95%',
                autoSize : false
            }
        );setTimeout(function(){ $.fancybox.delay(30000) },10000);
    }
});
A:
Try the code below:
$(document).ready(function() {
    if($.cookie('the_cookie') != 1) { // If the_cookie is not set to 1, initialize it and trigger the popup
        $.cookie('the_cookie', '1', { expires: 1 }); // value in days before the cookie expires
        // below, 3000 is a 3 sec delay before the popup appears.
        setTimeout(fire, 3000);
        // below, 5000 is a 5 sec delay after which the popup closes.
        setTimeout("parent.$.fancybox.close()", 5000);
    }
});
function fire() {
    $.fancybox(
        $("#popup").html(),
        {
            type : 'iframe',
            href : 'http://www.site-web-creation.net/source/pub.html', // URL of the HTML page that will be loaded into the popup as an iframe
            maxWidth : 800,
            maxHeight : 300,
            fitToView : false,
            width : '70%',
            height : '70%',
            autoSize : false
        }
    );
}
I hope this helps clear up your problem. Change the delays according to your needs; I just set a timer to fire an event, in this case the popup box, using the function fire.
| {
"pile_set_name": "StackExchange"
} |
Q:
MySQL wrong results with GROUP BY and ORDER BY
I have a table user_comission_configuration_history and I need to select the last Comissions configuration from a user_id.
Tuples:
I have tried many queries, but the results are wrong. My last SQL:
SELECT *
FROM(
SELECT * FROM user_comission_configuration_history
ORDER BY on_date DESC
) AS ordered_history
WHERE user_id = 408002
GROUP BY comission_id
The result of above query is:
But, the correct result is:
id user_id comission_id value type on_date
24 408002 12 0,01 PERCENTUAL 2014-07-23 10:45:42
23 408002 4 0,03 CURRENCY 2014-07-23 10:45:41
21 408002 6 0,015 PERCENTUAL 2014-07-23 10:45:18
What is wrong in my SQL?
A:
This is your query:
SELECT *
FROM (SELECT *
FROM user_comission_configuration_history
ORDER BY on_date DESC
) AS ordered_history
WHERE user_id = 408002
GROUP BY comission_id;
One major problem with your query is that it uses a MySQL extension to GROUP BY that MySQL explicitly warns against. The extension is the use of columns in the select that are not in the group by or in aggregation functions. The warning (here) is:
MySQL extends the use of GROUP BY so that the select list can refer to
nonaggregated columns not named in the GROUP BY clause. This means
that the preceding query is legal in MySQL. You can use this feature
to get better performance by avoiding unnecessary column sorting and
grouping. However, this is useful primarily when all values in each
nonaggregated column not named in the GROUP BY are the same for each
group. The server is free to choose any value from each group, so
unless they are the same, the values chosen are indeterminate.
So, the values returned in the columns are indeterminate.
Here is a pretty efficient way to get what you want (with "comission" spelled correctly in English):
SELECT *
FROM user_commission_configuration_history cch
WHERE NOT EXISTS (select 1
from user_commission_configuration_history cch2
where cch2.user_id = cch.user_id and
cch2.commission_id = cch.commission_id and
cch2.on_date > cch.on_date
) AND
cch.user_id = 408002;
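Another common pattern for the same result, sketched here with the question's original column names, is to join against the latest on_date per comission_id:

SELECT h.*
FROM user_comission_configuration_history h
JOIN (SELECT comission_id, MAX(on_date) AS max_date
      FROM user_comission_configuration_history
      WHERE user_id = 408002
      GROUP BY comission_id
     ) latest
  ON latest.comission_id = h.comission_id
 AND latest.max_date = h.on_date
WHERE h.user_id = 408002;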
| {
"pile_set_name": "StackExchange"
} |
Q:
How to get File Navigator View in MacOS App
I have seen in many apps a simple tree navigator view, like this in Xcode:
I am creating my own IDE and would like to know if there is a view for this.
A:
As @TheNextman said, I need NSOutlineView, which was perfect. I followed this tutorial:
https://www.raywenderlich.com/1201-nsoutlineview-on-macos-tutorial
| {
"pile_set_name": "StackExchange"
} |
Q:
When did we learn that stars die?
As we all know, the stars we see in the night sky might already be dead. I was wondering though, when was this fact or conclusion commonly established? Today, most people (let's assume with an above average education) would probably be aware of this fact.
When is the earliest time when the same could be said? I am particularly interested if the same could be said for the time period revolving around the period 1850 - 1900.
I know that the speed of light was approximated fairly accurately in the 17th century. Knowing this (finite) speed, it's not hard for me to draw the conclusion that the source of the light I see may not be there anymore. Would this be an easy conclusion to draw a hundred years ago however? Maybe they thought stars don't die?
A:
Supernovae were known a long time ago, but they were not understood as the death throes of a star.
In spite of the apparent immutability of the heavens, Chinese
astronomers were aware that new stars could appear. In 185 AD,
they were the first to observe and write about a supernova, now known
as the SN 185. The brightest stellar event in recorded history was
the SN 1006 supernova, which was observed in 1006 and written about by
the Egyptian astronomer Ali ibn Ridwan and several Chinese
astronomers. The SN 1054 supernova, which gave birth to the Crab
Nebula, was also observed by Chinese and Islamic astronomers.
But it wasn't until later we understood they had a life-cycle. The Greek philosopher Aristotle even proposed that the stars were made of a special element, not found on Earth, that never changes.
The Chinese might have been the first to suggest the idea as they took careful note of "guest stars" which suddenly appeared among the fixed stars.
It would seem that another person who suggested the idea was probably Tycho Brahe (14 December 1546 – 24 October 1601), as he coined the term nova, meaning new star. And likely with this new mindset, births bring deaths. He is also famous for realizing stars are very far away (due to parallax). In 1572 he witnessed a supernova, and in 1573 he published a small book, "De nova stella" (The New Star), based on the supernova he saw. (Most supernovae were assumed to be new stars, not dying stars.)
The event of understanding stars die probably just fell out of understanding what stars are. I'm not sure you can point to one event or person in history that could prove to know stars die prior to understanding stars themselves.
A:
Conservation of energy dates back to ca. 1840. Once that was established, it was natural to suspect that a star had a finite lifetime, which could be calculated if its energy source was understood. The most popular theory in the 19th century was that stars converted gravitational energy into heat. Lord Kelvin also hypothesized that meteors crashed into the sun and resupplied it with energy. All of these mechanisms gave lifetimes for stars that were relatively short, and, e.g., too short to be consistent with the kinds of terrestrial timescales being proposed by Darwin and the geological gradualists. More info here: http://www.nobelprize.org/nobel_prizes/themes/physics/fusion/sun_1.html
| {
"pile_set_name": "StackExchange"
} |
Q:
Condition expression - how to get REPORT TYPE in condition Expression
I would like to write expression like this:
{REPORT_TYPE} == "csv" ? "'" + $F{NUMBER_VALUE} : $F{NUMBER_VALUE}
where {REPORT_TYPE} should be xls, csv etc.
Have you any idea how to get report type?
A:
You need to send a parameter from your server that carries the format type, like csv or xls. If this parameter has a value then you can use such an expression. E.g. if you have a parameter named reportType then you can write an expression like this:
$P{reportType} ? "'" + $F{NUMBER_VALUE} : $F{NUMBER_VALUE}
If you want the report type to be csv then you need to give a value to this parameter; otherwise send it as a blank string.
If you still have a problem, let me know.
| {
"pile_set_name": "StackExchange"
} |
Q:
What are bounded-treewidth circuits good for?
One can talk of the treewidth of a Boolean circuit, defining it as the treewidth of the "moralized" graph on wires (vertices) obtained as follows: connect wires $a$ and $b$ whenever $b$ is the output of a gate having $a$ as input (or vice-versa); connect wires $a$ and $b$ whenever they are used as inputs to the same gate. Edit: one can equivalently define the treewidth of the circuit as that of the graph representing it; if we use associativity to rewrite all AND and OR gates to have fan-in at most two, the treewidth according to either definition is the same up to a factor $3$.
There is at least one problem that is known to be intractable in general but tractable on Boolean circuits of bounded treewidth: given a probability for each of the input wires to be set to 0 or 1 (independently from the others), compute the probability that a certain output gate is 0 or 1. This is generally #P-hard by a reduction from e.g. #2SAT, but it can be solved in PTIME on circuits whose treewidth is assumed to be less than a constant, using the junction tree algorithm.
My question is to know whether there are other problems, beyond probabilistic computation, that are known to be intractable in general but tractable for bounded-treewidth circuits, or whose complexity can be described as a function of the circuit size and also of its treewidth. My question is not specific to the Boolean case; I am also interested in arithmetic circuits over other semirings. Do you see any such problems?
A:
We now understand that for any fixed bound $k \in \mathbb{N}$ on the treewidth, we can convert any Boolean circuit of treewidth less than $k$ to a so-called d-SDNNF circuit, in linear time and with the dependency on $k$ being singly exponential.
The so-called d-SDNNFs are circuits satisfying conditions on the use of negation (only at the leaves), determinism (the inputs to OR-gates are mutually exclusive), decomposability (the inputs to AND-gates depend on disjoint sets of variables), and structuredness (the AND-gates split the variables in some fixed way throughout the circuit, as described by a v-tree). This class has been studied in knowledge compilation and is known to enjoy tractable SAT and tractable model counting (recapturing probabilistic evaluation and counting), but other problems have been studied for this class such as enumeration, quantification, etc.
So one way to use bounds on the treewidth of a circuit is to convert it to this d-SDNNF class which has more explicit properties in terms of the circuit semantics, and for which there are several known results on the tractability of various tasks.
| {
"pile_set_name": "StackExchange"
} |
Q:
Django Rest Framework - define API from an existing view
I'm writing my Django app, and I have a lot of views that already return a JsonResponse object, for example:
def power_on_relay(request):
    '''View that Power on one of the available relays'''
    try:
        relay_id = get_or_raise(request, 'relay_id')
        GPIO.setmode(GPIO.BOARD)
        GPIO.setup(relay_id, GPIO.OUT)
        GPIO.output(relay_id, True)
        pin_status = GPIO.input(relay_id)
        return JsonResponse({'success': True, 'message': 'Relay {0} was powered on'.format(relay_id), 'data': None})
    except Exception as ex:
        return JsonResponse({'success': False, 'message': str(ex), 'data': ''})
Now, I need to expose some of these views as an "API" and I need to manage the authentication, throttling, etc...
So, I was wondering if it's possible using DRF and without writing tons of redundant code.
I mean, is there a short way to do that? Something like a decorator that doesn't change my web application's behavior?
Any suggestions?
A:
You will need to use api_view decorator
from rest_framework.decorators import api_view
from rest_framework.response import Response
@api_view(['GET'])
def power_on_relay(request):
    '''View that Power on one of the available relays'''
    try:
        relay_id = get_or_raise(request, 'relay_id')
        GPIO.setmode(GPIO.BOARD)
        GPIO.setup(relay_id, GPIO.OUT)
        GPIO.output(relay_id, True)
        pin_status = GPIO.input(relay_id)
        return Response({'success': True, 'message': 'Relay {0} was powered on'.format(relay_id), 'data': None})
    except Exception as ex:
        return Response({'success': False, 'message': str(ex), 'data': ''})
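Since the question also asks about authentication and throttling, here is a minimal sketch of the same view with DRF's standard permission and throttle decorators (this part goes beyond the snippet above):

from rest_framework.decorators import api_view, permission_classes, throttle_classes
from rest_framework.permissions import IsAuthenticated
from rest_framework.throttling import UserRateThrottle

@api_view(['GET'])
@permission_classes([IsAuthenticated])   # require an authenticated caller
@throttle_classes([UserRateThrottle])    # apply per-user rate limiting
def power_on_relay(request):
    ...  # same body as above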
| {
"pile_set_name": "StackExchange"
} |
Q:
How to randomly select items in a set
For my homework assignment I am supposed to randomly select items in a list. so far I have this code,
import random
room = range(0, 365)
r = random.choice(room)
mySet = set(r)
However, when I attempt to run the program, it says that "'int' object is not iterable".
I was wondering how I can fix this problem?
A:
set() requires an iterable (a list or a tuple, for example) as its argument, whereas you've supplied an integer.
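A minimal fix of the original snippet, assuming the goal was a set containing the single chosen value:

import random

room = range(0, 365)
r = random.choice(room)  # r is a single int
mySet = {r}              # wrap it in a set literal so set() never sees a bare int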
A:
If you want to choose random items (with fixed size) from a set:
items = random.sample(your_set, size)
or if you want to choose random items with random size:
size = random.randint(0, your_set_size)
items = random.sample(your_set, size)
| {
"pile_set_name": "StackExchange"
} |
Q:
NativeScript - Convert HTML string to HTML
I use NativeScript to develop a mobile application. I have a ListView with a Label, and I want to render text that contains HTML markup as actual HTML.
I tried to use [innerHTML] but it doesn't work.
A:
You can try the HtmlView control. Check the details in the documentation below.
https://docs.nativescript.org/ui/ns-ui-widgets/html-view
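For instance, a minimal sketch of its usage in an XML layout (the bound property name htmlString is illustrative):

<HtmlView html="{{ htmlString }}" />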
| {
"pile_set_name": "StackExchange"
} |
Q:
How can I disable a foreign key on a table in order to do a truncate in SQL Server?
I want to know if there is a way to disable a foreign key so that I can delete the records of a table, but when I use this script: alter table dbo.AMBITO NOCHECK constraint ALL --Disable, I still cannot delete the records of the table. I need to do it without a stored procedure.
A:
You can try removing the data from the table that the foreign key references, and then run the truncate on the table you need.
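As a sketch (the table, column and constraint names below are illustrative): TRUNCATE TABLE fails while any foreign key references the table, even a disabled one, so either delete instead, or drop the referencing constraint, truncate, and recreate it:

-- Option 1: delete instead of truncate
DELETE FROM dbo.AMBITO;

-- Option 2: drop the referencing constraint, truncate, then recreate the constraint
ALTER TABLE dbo.OTRA_TABLA DROP CONSTRAINT FK_OTRA_TABLA_AMBITO;
TRUNCATE TABLE dbo.AMBITO;
ALTER TABLE dbo.OTRA_TABLA ADD CONSTRAINT FK_OTRA_TABLA_AMBITO
    FOREIGN KEY (AMBITO_ID) REFERENCES dbo.AMBITO (ID);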
| {
"pile_set_name": "StackExchange"
} |
Q:
RandomForest IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
I am working with sklearn on RandomForestClassifier:
class RandomForest(RandomForestClassifier):
    def fit(self, x, y):
        self.unique_train_y, y_classes = transform_y_vectors_in_classes(y)
        return RandomForestClassifier.fit(self, x, y_classes)

    def predict(self, x):
        y_classes = RandomForestClassifier.predict(self, x)
        predictions = transform_classes_in_y_vectors(y_classes, self.unique_train_y)
        return predictions

def transform_classes_in_y_vectors(y_classes, unique_train_y):
    cyr = [unique_train_y[predicted_index] for predicted_index in y_classes]
    predictions = np.array(float(cyr))
    return predictions
I got this Error message:
IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
A:
It seems that y_classes holds values that are not valid indices.
When you try to index into unique_train_y with predicted_index, you get the exception because predicted_index is not what you think it is.
Try to execute the following code:
cyr = [unique_train_y[predicted_index] for predicted_index in range(len(y_classes))]
# assuming unique_train_y is a list and predicted_index should be integer.
| {
"pile_set_name": "StackExchange"
} |
Q:
Allocating generic class in C++
I'm trying to write a Testbench that can test different implementations of an Interface. Coming from Java I would love to just specify an Interface, create some Classes that implement it and write a class Testbench<T extends MyInterface>
Transforming this approach to C++ gives something like:
// The Interface
class Animal {
public:
    Animal(int age) {};
    virtual ~Animal() {};
    virtual void Say();
};

// An Implementor
class Cow : Animal {
private:
    int age;
public:
    Cow(int age) : Animal(age) {
        this.age = age;
    };
    void Say() {
        std::cout << "I'm an " << age << " year old Cow" << std::endl;
    }
}
Next I define the templated class that can test different Animals:
template<> void AnimalTestbench<class T> {
static void TestSay();
}
But when I try to implement the TestSay method, it gives me "Allocation of incomplete type T"
template<> void AnimalTestbench<class T>::TestSay() {
T *animal = new T(5);
animal->Say();
}
Of course I didn't specify that T should be an Animal; that is my first question. The second one is: why does this code fail?
I've heard that templates are a fancy kind of macro that knows about types, but if it's a macro in a way, then the compiler should first replace T with my (complete) type, which it should be able to allocate and instantiate.
A:
There are a number of issues with your code:
The Animal class should declare Say as pure virtual
The Animal class uses this. instead of this->
The Cow class does not derive publicly from Animal
The AnimalTestbench class does not use templates correctly, template<> defines a specialization, which is not what you want
T *animal = new T(5); is a memory leak, because a delete doesn't follow.
we don't need to allocate at all, actually
Fixed Animal class:
class Animal {
public:
    Animal(int) {};
    virtual ~Animal() {};
    virtual void Say() = 0;
};
Fixed Cow class:
class Cow : public Animal {
private:
    int age;
public:
    Cow(int age) : Animal(age) {
        this->age = age;
    };
    void Say() override {
        std::cout << "I'm an " << age << " year old Cow" << std::endl;
    }
};
Fixed AnimalTestbench (we don't need to separate the implementation of TestSay from the declaration, but I'm following your approach):
template<class T>
struct AnimalTestbench
{
    static void TestSay();
};

template<class T>
void AnimalTestbench<T>::TestSay() {
    T animal(5);
    Animal *animal_base = &animal;
    animal_base->Say();
}
Usage:
int main()
{
    AnimalTestbench<Cow>::TestSay();
}
TestSay could be a standalone templated function, but I presume there are other virtual functions you wish to test and it's convenient to put them all in a single test class.
Demo
| {
"pile_set_name": "StackExchange"
} |
Q:
A sample apache-tika program built with Maven fails with an error
I am a complete beginner with Java and Maven.
I tried to run a simple apache-tika sample program using Maven, but
it resulted in the following error:
> java -jar target\tika-app-1.0-SNAPSHOT.jar
Error: Could not initialize main class ne.katch.App
Caused by: java.lang.NoClassDefFoundError: org/apache/tika/exception/TikaException
I think I am making a basic mistake somewhere. I would appreciate it if you could point it out.
Environment
C:\>Users\yasu_>mvn -v
Apache Maven 3.6.3 (cecedd...883f)
Maven home: C:\maven\bin\..
Java version: 13.0.1, vendor: Oracle Corporation, runtime: C:\jdk-13.0.1
Default locale: ja_JP, platform encoding: MS932
OS name: "windows 10", version: "10.0", arch: "amd64", family: "windows"
Generating the Maven project
C:\>Users\yasu_>mvn archetype:generate
(rest omitted)
C:\>Users\yasu_>mvn validate
[INFO] Scanning for projects...
[INFO]
[INFO] -------------------------< ne.katch:tika-app >--------------------------
[INFO] Building tika-app 1.0-SNAPSHOT
[INFO] --------------------------------[ jar ]---------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 0.156 s
[INFO] Finished at: 2020-05-09T18:10:01+09:00
[INFO] ------------------------------------------------------------------------
pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>ne.katch</groupId>
<artifactId>tika-app</artifactId>
<version>1.0-SNAPSHOT</version>
<name>tika-app</name>
<!-- FIXME change it to the project's website -->
<url>http://www.example.com</url>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<maven.compiler.source>1.7</maven.compiler.source>
<maven.compiler.target>1.7</maven.compiler.target>
</properties>
<dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.11</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.tika</groupId>
<artifactId>tika-core</artifactId>
<version>1.24.1</version>
</dependency>
<dependency>
<groupId>org.apache.tika</groupId>
<artifactId>tika-parsers</artifactId>
<version>1.24.1</version>
</dependency>
</dependencies>
<build>
<pluginManagement><!-- lock down plugins versions to avoid using Maven defaults (may be moved to parent pom) -->
<plugins>
<!-- clean lifecycle, see https://maven.apache.org/ref/current/maven-core/lifecycles.html#clean_Lifecycle -->
<plugin>
<artifactId>maven-clean-plugin</artifactId>
<version>3.1.0</version>
</plugin>
<!-- default lifecycle, jar packaging: see https://maven.apache.org/ref/current/maven-core/default-bindings.html#Plugin_bindings_for_jar_packaging -->
<plugin>
<artifactId>maven-resources-plugin</artifactId>
<version>3.0.2</version>
</plugin>
<plugin>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.8.0</version>
</plugin>
<plugin>
<artifactId>maven-surefire-plugin</artifactId>
<version>2.22.1</version>
</plugin>
<plugin>
<artifactId>maven-install-plugin</artifactId>
<version>2.5.2</version>
</plugin>
<plugin>
<artifactId>maven-deploy-plugin</artifactId>
<version>2.8.2</version>
</plugin>
<!-- site lifecycle, see https://maven.apache.org/ref/current/maven-core/lifecycles.html#site_Lifecycle -->
<plugin>
<artifactId>maven-site-plugin</artifactId>
<version>3.7.1</version>
</plugin>
<plugin>
<artifactId>maven-project-info-reports-plugin</artifactId>
<version>3.0.0</version>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-jar-plugin</artifactId>
<version>3.0.2</version>
<configuration>
<archive>
<manifest>
<addClasspath>true</addClasspath>
<mainClass>ne.katch.App</mainClass>
</manifest>
</archive>
</configuration>
</plugin>
</plugins>
</pluginManagement>
</build>
</project>
Sample code
package ne.katch;
import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.StringWriter;
import org.apache.tika.Tika;
import org.apache.tika.exception.TikaException;
public class App
{
    public static void main( String[] args )
    {
        try {
            Tika tika = new Tika();
            System.out.println(tika.parseToString(new File("C:\\sample.pdf")));
        } catch (IOException e) {
            e.printStackTrace();
        } catch (TikaException e) {
            e.printStackTrace();
        }
    }
}
Building with Maven
c:\Users\yasu_\tika-app>mvn package
[INFO] Scanning for projects...
[INFO]
[INFO] -------------------------< ne.katch:tika-app >--------------------------
[INFO] Building tika-app 1.0-SNAPSHOT
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- maven-resources-plugin:3.0.2:resources (default-resources) @ tika-app ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory c:\Users\yasu_\tika-app\src\main\resources
[INFO]
[INFO] --- maven-compiler-plugin:3.8.0:compile (default-compile) @ tika-app ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 1 source file to c:\Users\yasu_\tika-app\target\classes
[INFO]
[INFO] --- maven-resources-plugin:3.0.2:testResources (default-testResources) @ tika-app ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory c:\Users\yasu_\tika-app\src\test\resources
[INFO]
[INFO] --- maven-compiler-plugin:3.8.0:testCompile (default-testCompile) @ tika-app ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 1 source file to c:\Users\yasu_\tika-app\target\test-classes
[INFO]
[INFO] --- maven-surefire-plugin:2.22.1:test (default-test) @ tika-app ---
[INFO]
[INFO] -------------------------------------------------------
[INFO] T E S T S
[INFO] -------------------------------------------------------
[INFO] Running ne.katch.AppTest
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.065 s - in ne.katch.AppTest
[INFO]
[INFO] Results:
[INFO]
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
[INFO]
[INFO]
[INFO] --- maven-jar-plugin:3.0.2:jar (default-jar) @ tika-app ---
[INFO] Building jar: c:\Users\yasu_\tika-app\target\tika-app-1.0-SNAPSHOT.jar
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 7.051 s
[INFO] Finished at: 2020-05-09T18:27:04+09:00
[INFO] ------------------------------------------------------------------------
Running the jar file
c:\Users\yasu_\tika-app>java -jar target\tika-app-1.0-SNAPSHOT.jar
Error: Could not initialize main class ne.katch.App
Caused by: java.lang.NoClassDefFoundError: org/apache/tika/exception/TikaException
A:
If you want to build an executable jar that also includes the dependency libraries, you will need to use something like the Maven Assembly Plugin.
This page explains it in a fairly easy-to-understand way:
https://www.shookuro.com/entry/2018/03/03/172556
[Reference] Apache Maven Assembly Plugin
https://maven.apache.org/plugins/maven-assembly-plugin/index.html
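For reference, a minimal sketch of such a configuration, to be added under build/plugins in the pom.xml (the plugin version is left out here):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-assembly-plugin</artifactId>
  <configuration>
    <descriptorRefs>
      <!-- bundle all runtime dependencies into a single jar -->
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
    <archive>
      <manifest>
        <mainClass>ne.katch.App</mainClass>
      </manifest>
    </archive>
  </configuration>
  <executions>
    <execution>
      <id>make-assembly</id>
      <phase>package</phase>
      <goals>
        <goal>single</goal>
      </goals>
    </execution>
  </executions>
</plugin>

After mvn package, this produces target\tika-app-1.0-SNAPSHOT-jar-with-dependencies.jar, which should run with java -jar without the NoClassDefFoundError.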
| {
"pile_set_name": "StackExchange"
} |
Q:
Find a hypergraph such that $|e|$ even, $|e\cap f|$ odd, and $|E|>|V|$
Here is a problem I have been working on (it comes from the standard "odd-town" problem. The idea is to show that the analogy for "even-town" doesn't work).
Find a hypergraph such that the edges have even size, the intersection of any two edges has odd size, and there are strictly more edges then vertices.
Some things to note: any two edges must intersect, since empty intersection is even.
In any intersecting linear space, $|E|=|V|$. Since our graph is intersecting, it must NOT be a linear space. That is, at least two vertices are not connected by any edge.
The Fisher inequality says that if the intersection of any two edges is $\lambda>0$, then $|E|\le |V|$. Thus, there must be more than one possible "intersection size."
From trying cases, I believe you need at least five vertices to find such a hypergraph. Any suggestions would be much appreciated.
A:
It turns out no such counterexample can exist.
Look at theorem $2$ in this pdf
| {
"pile_set_name": "StackExchange"
} |
Q:
argument missing && || operators in parenthesis but still working
I have come across this piece of code where the if statement contains an argument without && and/or || operators.
if (event.target.scrollTop > 0 !== isViewScrolled) {
//do something
}
How is it possible that this works? What is the logic contained in the parentheses?
A:
event.target.scrollTop > 0 returns a bool, so JavaScript just checks whether this bool is equal to isViewScrolled.
A:
Check operator precedence https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Operators/Operator_Precedence
According to the above, > (greater than) has higher precedence than !== (strict inequality), so the
event.target.scrollTop > 0 !== isViewScrolled
is equivalent to
(event.target.scrollTop > 0) !== isViewScrolled
Although both are equivalent it's best to include parentheses where the order of evaluation is not clear.
| {
"pile_set_name": "StackExchange"
} |
Q:
find couple of objects from a dataframe
How can I avoid the two for loops and optimize my code to be able to handle big data?
import pandas as pd
import numpy as np
array = np.array([[1,'aaa','bbb'],[2,'ccc','bbb'],[3,'zzzz','bbb'],[4,'eee','zzzz'],[5,'ccc','bbb'],[6,'zzzz','bbb'],[7,'aaa','bbb']])
df= pd.DataFrame(array)
l = []
for i in range(len(df)):
    for j in range(i+1, len(df)):
        if (df.loc[i][1] == df.loc[j][1]) & (df.loc[i][2] == df.loc[j][2]):
            l.append((df.loc[i][0], df.loc[j][0]))
A:
Group by the second and third columns, then use the itertools functions chain and combinations.
from itertools import chain, combinations
list(chain(*df.groupby(by=[1, 2])[0].apply(lambda x: combinations(x, 2))))
[('1', '7'), ('2', '5'), ('3', '6')]
Change the dataset a bit.
array = np.array([[1,'aaa','bbb'],[2,'ccc','bbb'],[3,'zzzz','bbb'],
[4,'eee','zzzz'],[5,'ccc','bbb'],[6,'zzzz','bbb'],
[7,'aaa','bbb'], [8, "aaa", "bbb"], [9, 'zzzz','bbb']])
df = pd.DataFrame(array)
list(chain(*df.groupby(by=[1, 2])[0].apply(lambda x: combinations(x, 2))))
[('1', '7'),
('1', '8'),
('7', '8'),
('2', '5'),
('3', '6'),
('3', '9'),
('6', '9')]
list(chain(*df.groupby(by=[1, 2])[0].apply(lambda x: combinations(x, 2))))
1.67 ms ± 34.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
| {
"pile_set_name": "StackExchange"
} |
Q:
C program incrementing variable with for loop
I am trying to learn the C programming language on my own and have to depend on the internet for some help. I am playing around with one variable and a for loop, incrementing the variable by 1 on each iteration of the loop. In this example, I am confused by the fact that the variable is not 1 in the first iteration of the loop. It's like the argument was skipped on the first pass. I don't understand.
// This is a test of for loops
#include <stdio.h>
main () {
    int a;
    for (a = 0; a < 10; a++) {
        printf("%d\n", a);
    }
    return 0;
}
A:
Maybe it's easiest to understand as follows. In C, a loop written like this:
for (a = 0; a < 10; a++) {
    printf("%d\n", a);
}
is equivalent to this:
a = 0;
while (a < 10) {
    printf("%d\n", a);
    a++;
}
The for-loop notation is meant to collect up all of the loop control information at the top of the loop as written, but the parenthesized part after the keyword "for" is not executed as a group of statements before the body, it's treated as if it were written as shown in the while loop.
You can also write an infinite loop like this in C:
for (;;) {
    printf("Hello forever\n");
}
which is equivalent to:
while (1) {
    printf("Hello forever\n");
}
| {
"pile_set_name": "StackExchange"
} |
Q:
How email sending works during user self registration in WSO2 IS?
I checked the axis2.xml file and the output-event-adapter.xml file. For Email OTP, the documentation says to configure email in the axis2.xml file: https://docs.wso2.com/display/IS570/Configuring+Email+OTP.
But for user self-registration, it asks to configure email in the output-event-adapter.xml file.
https://docs.wso2.com/display/IS570/Self-Registration+and+Account+Confirmation.
Why are there two places for email configuration? How does sending email notifications work for user self-registration in WSO2 IS 5.7.0?
Thanks in advance!
A:
WSO2 IS contains an email sending module, based on Axis2, which handles the email notifications for Email OTP [1, 2]; those configurations are stored in axis2.xml. But for cases like Ask Password, Account Confirmation and user self-registration, WSO2 IS uses email event adapters [3]. These adapters get their configuration from output-event-adapter.xml.
Global adapter configs are defined in output-event-adapters.xml, and each adapter created per tenant holds a connection to the configured SMTP server. When a tenant needs to send an email, it publishes the content to the relevant stream [5].
This stream creates the mapping to the relevant publisher; using the stream, WSO2 IS resolves the publisher. These publishers are defined in
IS-HOME/repository/deployment/server/eventpublishers
These publishers specify the relevant adapter, which has a connection to the SMTP server, and the email is sent using that connection. This is how email sending is handled in user self-registration. This has been further explained in [4].
As WSO2 IS has those two different mechanisms for handling notifications, you have to configure both places for Email OTP and account confirmation. However, WSO2 IS is deprecating the Axis2-based notification model: if you enable the property
<Parameter name="useEventHandlerBasedEmailSender">true</Parameter>
as per the documentation [6], you can use the configurations in output-event-adapter.xml for Email OTP as well [7]. But this is only supported from Identity Server 5.8.0 onwards.
[1]. https://github.com/wso2-extensions/identity-outbound-auth-email-otp/blob/f6ebf84f35d9da526077a0bfe220665e71baa7ec/component/authenticator/src/main/java/org/wso2/carbon/identity/authenticator/emailotp/EmailOTPAuthenticator.java#L1708
[2]. https://github.com/wso2/carbon-identity-framework/blob/34bb9053787020dbc901d17d7ee4290f075e6542/components/identity-mgt/org.wso2.carbon.identity.mgt/src/main/java/org/wso2/carbon/identity/mgt/mail/DefaultEmailSendingModule.java#L73
[3]. https://github.com/wso2/carbon-analytics-common/blob/5.2.x/components/event-publisher/event-output-adapters/org.wso2.carbon.event.output.adapter.email/src/main/java/org/wso2/carbon/event/output/adapter/email/EmailEventAdapter.java
[4]. http://mail.wso2.org/mailarchive/architecture/2019-September/032587.html
[5].https://github.com/wso2-extensions/identity-event-handler-notification/blob/master/components/event-handler-notification/org.wso2.carbon.identity.event.handler.notification/src/main/java/org/wso2/carbon/identity/event/handler/notification/DefaultNotificationHandler.java#L284
[6]. https://docs.wso2.com/display/IS580/Configuring+Email+OTP
[7]. https://github.com/wso2-extensions/identity-outbound-auth-email-otp/pull/26/files#diff-868475e354da25fd06fae3b3a9ebe6e5R272
| {
"pile_set_name": "StackExchange"
} |
Q:
Rails 3 extract the domain of a link with a regex and print it in parens,
Rails -v 3.2.3
I'm working with an app that is supposed to display a description & URL of a submitted link. I am using regex operators, which is something I am very new to. Here is my view code:
(<%= if link.url =~ /(:\/\/) ([^\/]*)/ then $2 else "wrong URL" end %>)
However, with every link I submit, the output is always "wrong URL"...
Is this because $2 is the wrong regex operator? Or is the /(:\/\/) ([^\/]*)/ section incorrect in Rails 3?
A:
Kill that space in the middle! The regex you're showing expects a space between the :// and the sub.domain.tld chunks; since no URLs have that, the regex won't match anything. The simplest change should be:
/(:\/\/)([^\/]*)/
Or, to clean it up a little more (you don't need the first pair of parentheses):
(<%= if link.url =~ /:\/\/([^\/]*)/ then $1 else "wrong URL" end %>)
Hope that helps!
| {
"pile_set_name": "StackExchange"
} |
Q:
Update value in JSON object using NodeJS
I've got a JSON object that's being submitted to an AWS Lambda NodeJS function. This JSON object has an apostrophe in one of the fields that I need to escape before it's inserted into a MySQL database.
The object needs to stay intact as it's being stored as a JSON object in the database. I've looked at string replace functions but those won't work since this is a JSON object natively.
I'm sure there is a simple answer here, I'm just very new to NodeJS and haven't found a way after searching around for a few hours. Thanks in advance!
The field I need to update is 2.1 below:
Example of the BAD JSON Object:
{
"field1": "ABCD1234DEFG4567",
"field2": "FBI",
"fieldgroup": {
"1.1": "ABCD",
"1.2": 20170721,
"1.3": "ABCD",
"2.1": "L'astName, FirstName M"
}
}
Example of the FINAL JSON object:
{
"field1": "ABCD1234DEFG4567",
"field2": "FBI",
"fieldgroup": {
"1.1": "ABCD",
"1.2": 20170721,
"1.3": "ABCD",
"2.1": "L''astName, FirstName M"
}
}
A:
const o = {
    "field1": "ABCD1234DEFG4567",
    "field2": "FBI",
    "fieldgroup": {
        "1.1": "ABCD",
        "1.2": 20170721,
        "1.3": "ABCD",
        "2.1": "L'astName, FirstName M"
    }
};
const preparedObject = prepare(o);
console.log(JSON.stringify(preparedObject, null, 4));
function prepare(o) {
    const replacedStrings = Object.keys(o)
        .filter(key => typeof o[key] === "string")
        .reduce((accu, key) => ({ ...accu, [key]: o[key].replace(/'/g, "''") }), {});
    const preparedChildren = Object.keys(o)
        .filter(key => typeof o[key] === "object")
        .reduce((accu, key) => ({ ...accu, [key]: prepare(o[key]) }), {});
    return { ...o, ...replacedStrings, ...preparedChildren };
}
| {
"pile_set_name": "StackExchange"
} |
Q:
Simple probability inequality to show
How can I show that
$ P(A \cup B) P(A\cap B) \le P(A) P(B)$ for any events A and B?
I have tried using the inclusion/exclusion principle and using conditional probability but I keep going round in circles.
Thanks
A:
Let $X=A\backslash B$, $Y=A\cap B$, $Z=B\backslash A$ be three disjoint events and $x=P(X)$, $y=P(Y)$, $z=P(Z)$ ($x,y,z \geq 0$).
Then:
$$P(A)=x+y\\P(B)=y+z\\P(A\cup B)=x+y+z\\P(A\cap B)=y$$
So
$$P(A)P(B)-P(A\cup B)P(A\cap B)=\\(x+y)(y+z)-y(x+y+z)=\\xy+xz+y^2+yz-xy-y^2-yz = \\ xz \geq 0$$
Thus:
$$P(A)P(B)\geq P(A\cup B)P(A\cap B)$$
| {
"pile_set_name": "StackExchange"
} |
Q:
Difference between Formatter and Factory Function
Hello,
Please explain what the difference is between a factory function and a formatter function, because as I see it, both can be used to format or manipulate the output results. How do I choose between them?
Regards,
Mayank
A:
Factory functions allow you to create different types of controls at runtime. Let's assume you have a list and you want to display different types of list items according to the list index, for instance, or maybe according to some value that you have in your model. Factory functions allow you to do that in the binding.
Formatters are a kind of helper function which receives an input and returns an output. The most popular examples involve dates and times, where you receive a date in form A and return a date in form B. Formatter functions are defined at the property level, so if you have a field in your list item which displays a date, you can use a formatter to do a very simple manipulation of that date.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to set margin of ImageView using code, not xml
I want to add an unknown number of ImageView views to my layout with margin. In XML, I can use layout_margin like this:
<ImageView android:layout_margin="5dip" android:src="@drawable/image" />
There is ImageView.setPadding(), but no ImageView.setMargin(). I think it's along the lines of ImageView.setLayoutParams(LayoutParams), but not sure what to feed into that.
Does anyone know?
A:
android.view.ViewGroup.MarginLayoutParams has a method setMargins(left, top, right, bottom). Direct subclasses are: FrameLayout.LayoutParams, LinearLayout.LayoutParams and RelativeLayout.LayoutParams.
Using e.g. LinearLayout:
LinearLayout.LayoutParams lp = new LinearLayout.LayoutParams(LinearLayout.LayoutParams.WRAP_CONTENT, LinearLayout.LayoutParams.WRAP_CONTENT);
lp.setMargins(left, top, right, bottom);
imageView.setLayoutParams(lp);
MarginLayoutParams
This sets the margins in pixels. To scale it use
context.getResources().getDisplayMetrics().density
DisplayMetrics
A:
image = (ImageView) findViewById(R.id.imageID);
MarginLayoutParams marginParams = new MarginLayoutParams(image.getLayoutParams());
marginParams.setMargins(left_margin, top_margin, right_margin, bottom_margin);
RelativeLayout.LayoutParams layoutParams = new RelativeLayout.LayoutParams(marginParams);
image.setLayoutParams(layoutParams);
A:
All the above examples will actually REPLACE any params already present for the View, which may not be desired. The below code will just extend the existing params, without replacing them:
ImageView myImage = (ImageView) findViewById(R.id.image_view);
MarginLayoutParams marginParams = (MarginLayoutParams) myImage.getLayoutParams();
marginParams.setMargins(left, top, right, bottom);
| {
"pile_set_name": "StackExchange"
} |
Q:
why AJAX redirects to the new page in PHP
I have a form:
<form class="searchForm">
<div class="box_style_1">
<h4><?= Yii::t("common", "Age"); ?></h4>
<?
echo '<b class="badge">3</b> ' . Slider::widget([
'name'=>'age',
'value'=>'250,650',
'sliderColor'=>Slider::TYPE_GREY,
'pluginOptions'=>[
'min'=>3,
'max'=>21,
'step'=>1,
'range'=>true
],
]) . ' <b class="badge">21</b>';
?>
<br /><br />
<input type="submit" value="Search" class="searchByAge"/>
<br /><br />
</div>
</form>
And want to show the result in console.log:
$('.searchByAge').on('click', function(e){
e.preventDefault();
var range = $('.form-control').val();
var min = range.split(',')[0];
var max = range.split(',')[1];
//alert(min+' '+max);
$.ajax({
type: 'POST',
url: '/age/'+min+'/'+max,
data: $('.searchForm').serialize(),
success: function (data) {
console.log(data);
},
error: function(jqXHR, textStatus, errorMessage) {
console.log(errorMessage); // Optional
}
});
})
and that's my AJAX code. But when I click on the Search button it redirects me to a new page, and nothing appears in the console log. I do not know what is wrong in my code.
I return a JSON from the '/age/min_age/max_age' page, but the result shows in the new page.
How can I fix this problem?
A:
Change the code to the below, and change the input type from submit to button:
<input type="button" value="Search" class="searchByAge"/>
Also wrap your code in $(document).ready();
Make sure to include the jQuery library from the correct path.
$(document).ready(function(){
    $('.searchByAge').on('click', function(e){
        e.preventDefault();
        var range = $('.form-control').val();
        var min = range.split(',')[0];
        var max = range.split(',')[1];
        //alert(min+' '+max);
        $.ajax({
            type: 'POST',
            url: '/age/'+min+'/'+max,
            data: $('.searchForm').serialize(),
            success: function (data) {
                console.log(data);
            },
            error: function(jqXHR, textStatus, errorMessage) {
                console.log(errorMessage); // Optional
            }
        });
    });
});
| {
"pile_set_name": "StackExchange"
} |
Q:
Only one field appears after converting .shp to raster to .asc, in QGIS
I am a beginner in the GIS world. I tried to convert a .shp map to a raster .asc map. I used Raster -> Conversion -> Rasterize, but as shown below only 1 of the 4 fields appears, so I cannot convert the map based on the field that I want.
A:
You can only use a numeric field.
What is the type of the other fields?
| {
"pile_set_name": "StackExchange"
} |
Q:
Is there a method in the LinkedIn Rest API to see what companies a person is following?
Using the Linkedin Rest API, it's possible to pull a bunch of fields from their profile, but I can't find a method that would pull the companies that they're following. Does one exist?
A:
In general, the fields available for LinkedIn profiles can be retrieved using the /v1/people endpoint, depicted here: https://developer-programs.linkedin.com/documents/profile-api
But unfortunately there is no option to get the companies a user follows.
| {
"pile_set_name": "StackExchange"
} |
Q:
Utilising LibVLC MediaPlayer.Event.EndReached to reset MediaPlayer when playback finishes
I'm in the process of writing an Android app activity that houses a LibVLC MediaPlayer implementation. The MediaPlayer works fine for the most part, however upon video conclusion, the MediaPlayer will become unresponsive. From my research, it looks like this could be because the Media is getting unset upon MediaPlayer.Event.EndReached firing (vajehu).
I've been keeping an eye on MediaPlayer.getPlaybackState() and can see that the MediaPlayer object is sitting in the "Ended" state when playback concludes, as expected.
I can go ahead and release my MediaPlayer and re-create it when MediaPlayer.Event.EndReached is fired, but am unsure if this is a good course of action. I am hoping to have the MediaPlayer move back to the beginning of the video and await user input to commence playback again.
(In case it's pertinent - I'm utilising MrMaffen's vlc-android-sdk).
A:
I've since discovered a neat and tidy (and more importantly efficient!) solution for this;
Upon MediaPlayer.Event.EndReached firing I:
Call MediaPlayer.setMedia(media) to reload the Media object
Reset a few UI elements relating to my MediaPlayer
Finally I set the MediaPlayer position to the start of the Media object with MediaPlayer.setTime(0)
Side note: since LibVLC's MediaPlayer.setTime(Long position) method doesn't have an effect unless the MediaPlayer.isPlaying(), I needed to write a small wrapper method to asynchronously:
MediaPlayer.play() and wait for MediaPlayer.isPlaying()
Then MediaPlayer.setTime(0)
Finally MediaPlayer.pause()
A much simpler solution than I expected, though I hope this helps anyone who might be scratching their head whilst working on the same type of project.
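A rough sketch of that flow against MrMaffen's vlc-android-sdk (the helper method names here are illustrative, not from my actual code):

mediaPlayer.setEventListener(new MediaPlayer.EventListener() {
    @Override
    public void onEvent(MediaPlayer.Event event) {
        if (event.type == MediaPlayer.Event.EndReached) {
            mediaPlayer.setMedia(media);  // reload the Media object
            resetPlayerUi();              // hypothetical helper: reset the playback UI elements
            rewindToStart();              // hypothetical helper: play -> setTime(0) -> pause
        }
    }
});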
| {
"pile_set_name": "StackExchange"
} |
Q:
Why is this query doing a full table scan?
The query:
SELECT tbl1.*
FROM tbl1
JOIN tbl2
ON (tbl1.t1_pk = tbl2.t2_fk_t1_pk
AND tbl2.t2_strt_dt <= sysdate
AND tbl2.t2_end_dt >= sysdate)
JOIN tbl3 on (tbl3.t3_pk = tbl2.t2_fk_t3_pk
AND tbl3.t3_lkup_1 = 2577304
AND tbl3.t3_lkup_2 = 1220833)
where tbl2.t2_lkup_1 = 1020000002981587;
Facts:
Oracle XE
tbl1.t1_pk is a primary key.
tbl2.t2_fk_t1_pk is a foreign key on that t1_pk column.
tbl2.t2_lkup_1 is indexed.
tbl3.t3_pk is a primary key.
tbl2.t2_fk_t3_pk is a foreign key on that t3_pk column.
Explain plan on a database with 11,000 rows in tbl1 and 3500 rows in
tbl2 shows that it's doing a full table scan on tbl1. Seems to me that
it should be faster if it could do an index query on tbl1.
Update: I tried the hint a few of you suggested, and the explain cost got much worse! Now I'm really confused.
Further Update: I finally got access to a copy of the production database,
and "explain plan" showed it using indexes and with a much lower cost
query. I guess having more data (over 100,000 rows in tbl1 and 50,000 rows
in tbl2) were what it took to make it decide that indexes were worth it. Thanks to everybody who helped. I still think Oracle performance tuning is a black art, but I'm glad some of you understand it.
Further update: I've updated the question at the request of my former employer. They don't like their table names showing up in google queries. I should have known better.
A:
The easy answer: because the optimizer expects to find more rows than it actually does.
Check the statistics, are they up to date?
Check the expected cardinality in the explain plan do they match the actual results? If not fix the statistics relevant for that step.
Histograms for the joined columns might help. Oracle will use those to estimate the cardinality resulting from a join.
Of course you can always force index usage with a hint
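For instance, sketches of both suggestions (the index choice is left to the optimizer; the schema is assumed to be the current user):

-- refresh optimizer statistics for the joined tables
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'TBL1');
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'TBL2');

-- or force index access on tbl1 with a hint
SELECT /*+ INDEX(tbl1) */ tbl1.*
FROM tbl1
JOIN tbl2 ON ...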
A:
It would be useful to see the optimizer's row count estimates, which are not in the SQL Developer output you posted.
I note that the two index lookups it is doing are RANGE SCAN not UNIQUE SCAN. So its estimates of how many rows are being returned could easily be far off (whether statistics are up to date or not).
My guess is that its estimate of the final row count from the TABLE ACCESS of TBL2 is fairly high, so it thinks that it will find a large number of matches in TBL1 and therefore decides on doing a full scan/hash join rather than a nested loop/index scan.
For some real fun, you could run the query with event 10053 enabled and get a trace showing the calculations performed by the optimizer.
| {
"pile_set_name": "StackExchange"
} |
Q:
Drawing a rotating triangle
I want to fill a triangle in Android using the Canvas class. The way I am doing it currently works but is very laggy. I wanted to know if anybody has a faster way of doing it than mine. Thanks!
My code:
public void rotate(float angle){
    if(neighbour == null)
        return;
    path.reset();
    Point origin = rotatePoint(neighbour.getX() + 64, neighbour.getY() + 128 + 16, neighbour.getX() + 64, neighbour.getY() + 64, angle);
    Point a = rotatePoint(neighbour.getX() + 64, neighbour.getY() + 128 + neighbour.getWidth() + neighbour.getHeight(), neighbour.getX() + 64, neighbour.getY() + 64, angle - 15);
    Point b = rotatePoint(neighbour.getX() + 64, neighbour.getY() + 128 + neighbour.getWidth() + neighbour.getHeight(), neighbour.getX() + 64, neighbour.getY() + 64, angle + 15);
    path.moveTo(origin.x, origin.y);
    path.lineTo(a.x, a.y);
    path.lineTo(b.x, b.y);
}
neighbour is just a class that holds x and y values.
Rotate point method:
private Point rotatePoint(float x, float y, float px, float py, float angle){
    float s = (float)Math.sin(Math.toRadians(angle));
    float c = (float)Math.cos(Math.toRadians(angle));
    x -= px;
    y -= py;
    float xnew = x * c - y * s;
    float ynew = x * s + y * c;
    x = xnew + px;
    y = ynew + py;
    return new Point((int)x, (int)y);
}
This triangle will be rotated quite frequently, so I need an efficient way of doing it.
A:
You can just draw the triangle always with the same path, but before drawing the path rotate the canvas to the desired rotation angle.
canvas.save();
canvas.rotate(degrees);
//draw your triangle here
canvas.restore();
There is also a
canvas.rotate(degrees, x, y);
if you need to give it a pivot point.
| {
"pile_set_name": "StackExchange"
} |
Q:
do you want to open or save from localhost while trying to upload the file
I am trying to upload a file to a certain path.
I have written the following code for this:
try
{
    if (!System.IO.Directory.Exists(fileLocation))
        System.IO.Directory.CreateDirectory(fileLocation);
    // file.SaveAs(completefilepathWithFile);
    file.SaveAs(FileLocationToSaveInDB);
    return Json("File Uploaded Successfully");
}
catch (Exception)
{
    return Json("Failed to upload the file");
}
This code works fine in Firefox and Chrome,
but it gives me an error in IE9.
It prompts me with:
Do you want to open or save (methodname) from localhost?
It's as below:
I tried this:
localhost doesn't open in IE9
but it didn't help.
Please help me.
A:
Many browsers can't handle application/json as the return content type. You can hack the response and send back the content using the MIME type text/html.
try this:
return Json("FileUploaded successfully", "text/html", System.Text.Encoding.UTF8,
JsonRequestBehavior.AllowGet);
| {
"pile_set_name": "StackExchange"
} |
Q:
React Native retrieving API data source.uri should not be an empty string
I am trying to retrieve data from an API (https://developers.zomato.com/documentation) and get the title of each restaurant along with an image. However, when I try to load an image I get the warning "source.uri should not be an empty string".
Here is my code as it stands:
async componentDidMount() {
let id = this.props.navigation.state.params.category
let result;
try {
result = await axios.request({
method: 'get',
url: `https://developers.zomato.com/api/v2.1/search?category=${id}`,
headers: {
'Content-Type': 'application/json',
'user-key': "a31bd76da32396a27b6906bf0ca707a2",
},
})
} catch (err) {
console.log(err)
}
this.setState({
isLoading: false,
data: result.data.restaurants
})
}
render() {
return (
<View>
{
this.state.isLoading ?
<View style={{ flex: 1, padding: 20 }}>
<ActivityIndicator style={{color:'red'}} />
</View> :
(
this.state.data.length == 0 ?
<View style={{ flex: 1, padding: 20 }}>
<Text style={{ color: '#000', fontWeight: 'bold' }}>No restaurants from selected category</Text>
</View> :
<FlatList
style={{ marginBottom: 80 }}
keyExtractor={item => item.id}
data={this.state.data}
renderItem={({ item }) =>
<TouchableHighlight onPress={()=> console.log(item.restaurant.thumb)}>
<Card image={item.restaurant.thumb} style={styles.container}>
<Image resizeMode='contain' source={{ uri: item.restaurant.thumb }}/>
<Text style={{color:'#000',fontWeight:'bold'}}>{item.restaurant.name} </Text>
</Card>
</TouchableHighlight>
}
/>
)
}
</View>
);
}
As you can see, when I touch any of the cards I console log the link of the image uri and it shows up perfectly. Why is it that when the app loads, the images are empty strings, yet when I log them through console log the link shows up perfectly?
I am using axios to load my API
here is an expo snack link: https://snack.expo.io/r1XTaw4JU
A:
So I found 2 issues. One is in the Card component: you were not providing the uri properly; it should be image={{uri:item.restaurant.thumb}}. Secondly, for New York your entity id must be:
To search for 'Italian' restaurants in 'Manhattan, New York City',
set cuisines = 55, entity_id = 94741 and entity_type = zone
It's as per the Zomato docs, so do check that out. And here is the Expo link: expo-snack
import React from 'react';
import {
View,
Text,
FlatList,
StyleSheet,
Button,
TouchableHighlight,
ActivityIndicator,
} from 'react-native';
import { createAppContainer } from 'react-navigation';
import {createStackNavigator} from 'react-navigation-stack';
import { Card, Image } from 'react-native-elements';
import Constants from 'expo-constants';
import axios from 'axios';
export default class CategoryScreen extends React.Component {
constructor(props){
super(props);
this.state={
data : [],
isVisible: true,
city : '94741'
}
}
async componentDidMount() {
let id = "3"
let city = this.state.city
let result;
try {
result = await axios.request({
method: 'get',
url: `https://developers.zomato.com/api/v2.1/search?entity_id=${city}&entity_type=zone&category=${id}`,
headers: {
'Content-Type': 'application/json',
'user-key': "a31bd76da32396a27b6906bf0ca707a2",
},
})
} catch (err) {
console.log(err)
}
this.setState({
isLoading: false,
data: result.data.restaurants
})
console.log(result)
console.log(result.data.restaurants)
}
render() {
return (
<View>
{
this.state.isLoading ?
<View style={{ flex: 1, padding: 20 }}>
<ActivityIndicator style={{color:'red'}} />
</View> :
(
this.state.data.length == 0 ?
<View style={{ flex: 1, padding: 20 }}>
<Text style={{ color: '#000', fontWeight: 'bold' }}>No restaurants from selected category</Text>
</View> :
<FlatList
style={{ marginBottom: 80 }}
keyExtractor={item => item.id}
data={this.state.data}
renderItem={({ item }) =>
<TouchableHighlight onPress={()=> alert(item.restaurant.location.city)}>
<Card image={{uri:item.restaurant.thumb}} style={styles.container}>
<Text style={{color:'#000',fontWeight:'bold'}}>{item.restaurant.name} </Text>
</Card>
</TouchableHighlight>
}
/>
)
}
</View>
);
}
};
const styles = StyleSheet.create({
});
| {
"pile_set_name": "StackExchange"
} |
Q:
Limits - prove or disprove
If $\lim_{x \to 0^+} f(x) = 0$ and $(\forall x>0)( \exists 0<c_x<x)$ s. t. $f(c_x)>f(x)$, and $\forall x>0, f(x)>0$ do we have a contradiction?
I tried to build a sequence of x values that approaches $0$ whose f values form an ascending sequence, but I failed to show that it is actually ascending.
A:
If $f$ is allowed to be discontinuous in every right neighbourhood of $0$, then the condition does not prevent $\lim_{x\to 0^+}f(x)=0$. For instance, consider the function $f:(0,1]\to\Bbb R$ such that $f(x)=\frac1{n-1}+\frac1n-x$ for all $n\in\Bbb N$, $n\ge2$ and for all $x$ such that $\frac1n<x\le \frac1{n-1}$. Namely, $$f(x)=\left\lfloor x^{-1}\right\rfloor^{-1}+\left(\left\lfloor x^{-1}\right\rfloor+1\right)^{-1}-x$$
On the other hand, if there is some $\varepsilon>0$ such that $\left.f\right\rvert_{(0,\varepsilon)}$ is continuous, then yes, that condition prevents $\lim_{x\to 0^+}f(x)=0$. In fact, consider $0<x_0<\varepsilon$ and let $\delta=\inf\{\alpha>0\,:\, f(\alpha)\ge f(x_0)\}$. If $\delta>0$, then by continuity $f(\delta)\ge f(x_0)$ and, therefore, there must be some $0<c_\delta<\delta$ such that $f(c_\delta)\ge f(\delta)\ge f(x_0)$, against $\delta$ being the greatest lower bound. Therefore $\delta=0$, but then $\limsup_{x\to 0^+}f(x)\ge f(x_0)> 0$.
| {
"pile_set_name": "StackExchange"
} |
Q:
C Compile Fatal Error 'file not found' from ImageMagick Install Mac OS
I am trying to compile a C program that requires Image Magick's MagickWand.h:
#include "wand/MagickWand.h"
But my Image Magick was installed through Homebrew on my Mac so I changed the include to the actual location:
#include </usr/local/Cellar/imagemagick/6.8.9-7/include/ImageMagick-6/wand/MagickWand.h>
However, when I compiled the program I received the following error:
/usr/local/Cellar/imagemagick/6.8.9-7/include/ImageMagick-6/wand/MagickWand.h:71:10: fatal error: 'wand/method-attribute.h' file not found
#include "wand/method-attribute.h"
Now I've been going into the .h files when this error crops up and changing their #includes so that they are pointed correctly (because that appears to be the problem), but there is always a new error here and I'd rather not spend hours manually updating these because of a Homebrew install. Does anyone have any suggestions on how to fix this without manually updating each file? I'm not sure exactly what the problem is so perhaps there is a more elegant solution.
A:
Your code should include the MagickWand library as system headers, and keep the generic path. This will keep your future compiling from breaking when the system/library updates.
#include <wand/MagickWand.h>
Tell your C compiler where homebrew installed ImageMagick by setting the preprocessor include flag -I, and linking/library options with the -L & -l flags.
example:
clang -I/usr/local/Cellar/imagemagick/6.8.9-7/include/ImageMagick-6 \
myProject.c -o myProject.o \
-L/usr/local/Cellar/imagemagick/6.8.9-7/lib \
-lMagickWand-6.Q16 \
  -lMagickCore-6.Q16
To simplify the whole process, ImageMagick ships MagickWand-config utility. This will take care of libs, includes, and definitions for you.
example:
CFLAGS=$(MagickWand-config --cflags)
LFLAGS=$(MagickWand-config --libs)
clang $CFLAGS myProject.c -o myProject.o $LFLAGS
| {
"pile_set_name": "StackExchange"
} |
Q:
How to convert from huge JSON file to xml file in C#
I'm trying to convert a huge JSON file (2 GB) to an XML file. I'm having some trouble reading the huge JSON file.
I've been researching how I can read huge JSON files.
I found this:
Out of memory exception while loading large json file from disk
How to parse huge JSON file as stream in Json.NET?
Parsing large json file in .NET
It may seem that I'm duplicating my question, but I have some problems which aren't solved in those posts.
So, I need to load the huge JSON file, and the community proposes something like this:
MyObject o;
using (StreamReader sr = new StreamReader("foo.json"))
using (JsonTextReader reader = new JsonTextReader(sr))
{
var serializer = new JsonSerializer();
reader.SupportMultipleContent = true;
while (reader.Read())
{
if (reader.TokenType == JsonToken.StartObject)
{
// Deserialize each object from the stream individually and process it
o = serializer.Deserialize<MyObject>(reader);
//Do something with the object
}
}
}
So, We can read by parts and deserialize objects one by one.
I'll show you my code
JsonSerializer serializer = new JsonSerializer();
string hugeJson = "hugJSON.json";
using (FileStream s = File.Open(hugeJson , FileMode.Open))
{
using (StreamReader sr = new StreamReader(s))
{
using (JsonReader reader = new JsonTextReader(sr))
{
reader.SupportMultipleContent = true;
while (reader.Read())
{
if (reader.TokenType == JsonToken.StartObject)
{
var jsonObject = serializer.Deserialize(reader);
string xmlString = "";
XmlDocument doc = JsonConvert.DeserializeXmlNode(jsonObject.ToString(), "json");
using (var stringWriter = new StringWriter())
{
using (var xmlTextWriter = XmlWriter.Create(stringWriter))
{
doc.WriteTo(xmlTextWriter);
xmlTextWriter.Flush();
xmlString = stringWriter.GetStringBuilder().ToString();
}
}
}
}
}
}
}
But when I call doc.WriteTo(xmlTextWriter), I get "Exception of type 'System.OutOfMemoryException' was thrown".
I've been trying BufferedStream. This class lets me manage big files, but I have another problem.
I'm reading in byte[] format. When I convert it to a string, the JSON gets split and I can't parse it to an XML file because there are missing characters,
for example:
{ foo:[{
foo:something,
foo1:something,
foo2:something
},
{
foo:something,
foo:som
it is cut off.
Is there any way to read a huge JSON file and convert it to XML without loading the JSON in parts? Or I could load and convert in parts, but I don't know how to do this.
Any ideas?
UPDATE:
I have been trying with this code:
static void Main(string[] args)
{
string json = "";
string pathJson = "foo.json";
//Read file
string temp = "";
using (FileStream fs = new FileStream(pathJson, FileMode.Open))
{
using (BufferedStream bf = new BufferedStream(fs))
{
byte[] array = new byte[70000];
int bytesRead;
while ((bytesRead = bf.Read(array, 0, 70000)) != 0)
{
    // decode only the bytes actually read in this iteration
    json = Encoding.UTF8.GetString(array, 0, bytesRead);
    temp = String.Concat(temp, json);
}
}
}
XmlDocument doc = JsonConvert.DeserializeXmlNode(temp, "json");
string xmlString = "";
using (var stringWriter = new StringWriter())
using (var xmlTextWriter = XmlWriter.Create(stringWriter))
{
doc.WriteTo(xmlTextWriter);
xmlTextWriter.Flush();
xmlString = stringWriter.GetStringBuilder().ToString();
}
File.WriteAllText("outputPath", xmlString);
}
This code converts from a JSON file to an XML file, but when I try to convert a big JSON file (2 GB), I can't. The process takes a lot of time and the string doesn't have the capacity to store all the JSON. How can I store it? Is there any way to do this conversion without using the string data type?
UPDATE:
The json format is:
[{
'key':[some things],
'data': [some things],
'data1':[A LOT OF ENTRIES],
'data2':[A LOT OF ENTRIES],
'data3':[some things],
'data4':[some things]
}]
A:
Out-of-memory exceptions in .Net can be caused by several problems including:
Allocating too much total memory.
If this might be happening, check whether you are running in 64-bit mode as described here. If not, rebuild in 64-bit mode as described here and re-test.
Allocating too many objects on the large object heap causing memory fragmentation.
Allocating a single object that is larger than the .Net object size limit.
Failing to dispose of unmanaged memory (not applicable here).
In your case, you may be trying to allocate too much total memory but are definitely allocating three very large objects: the in-memory temp JSON string, the in-memory xmlString XML string and the in-memory stringWriter.
You can substantially reduce your memory footprint and completely eliminate these objects by constructing an XDocument or XmlDocument directly via a streaming translation from the JSON file. Then afterward, write the document directly to the XML file using XDocument.Save() or XmlDocument.Save().
To do this, you will need to allocate your own XmlNodeConverter, then construct a JsonSerializer using it and deserialize as shown in Deserialize JSON from a file. The following method(s) do the trick:
public static partial class JsonExtensions
{
public static XDocument LoadXNode(string pathJson, string deserializeRootElementName)
{
using (var stream = File.OpenRead(pathJson))
return LoadXNode(stream, deserializeRootElementName);
}
public static XDocument LoadXNode(Stream stream, string deserializeRootElementName)
{
// Let caller dispose the underlying streams.
using (var textReader = new StreamReader(stream, Encoding.UTF8, true, 1024, true))
return LoadXNode(textReader, deserializeRootElementName);
}
public static XDocument LoadXNode(TextReader textReader, string deserializeRootElementName)
{
var settings = new JsonSerializerSettings
{
Converters = { new XmlNodeConverter { DeserializeRootElementName = deserializeRootElementName } },
};
using (var jsonReader = new JsonTextReader(textReader) { CloseInput = false })
return JsonSerializer.CreateDefault(settings).Deserialize<XDocument>(jsonReader);
}
public static void StreamJsonToXml(string pathJson, string pathXml, string deserializeRootElementName, SaveOptions saveOptions = SaveOptions.None)
{
var doc = LoadXNode(pathJson, deserializeRootElementName);
doc.Save(pathXml, saveOptions);
}
}
Then use them as follows:
JsonExtensions.StreamJsonToXml(pathJson, outputPath, "json");
Here I am using XDocument instead of XmlDocument because I believe (but have not checked personally) that it uses less memory, e.g. as reported in Some hard numbers about XmlDocument, XDocument and XmlReader (x86 versus x64) by Ken Lassesen.
This approach eliminates the three large objects mentioned previously and substantially reduces the chance of running out of memory due to problems #2 or #3.
Demo fiddle here.
If you are still running out of memory even after ensuring you are running in 64-bit mode and streaming directly from and to your file(s) using the methods above, then it may simply be that your XML is too large to fit in your computer's virtual memory space using XDocument or XmlDocument. If that is so, you will need to adopt a pure streaming solution that transforms from JSON to XML on the fly as it streams. Unfortunately, Json.NET does not provide this functionality out of the box, so you will need a more complex solution.
So, what are your options?
You could fork your own version of XmlNodeConverter.cs and rewrite ReadElement(JsonReader reader, IXmlDocument document, IXmlNode currentNode, string propertyName, XmlNamespaceManager manager) to write directly to an XmlWriter instead of an IXmlDocument.
While probably doable with a couple days effort, the difficulty would seem to exceed that of a single stackoverflow answer.
You could use the reader returned by JsonReaderWriterFactory to translate JSON to XML on the fly, and pass that reader directly to XmlWriter.WriteNode(XmlReader). The readers and writers returned by this factory are used internally by DataContractJsonSerializer but can be used directly as well.
If your JSON has a fixed schema (which is unclear from your question) you have many more straightforward options. Incrementally deserializing to some c# data model as shown in Parsing large json file in .NET and re-serializing that model to XML is likely to use much less memory than loading into some generic DOM such as XDocument.
Option #2 can be implemented very simply, as follows:
using (var stream = File.OpenRead(pathJson))
using (var jsonReader = JsonReaderWriterFactory.CreateJsonReader(stream, XmlDictionaryReaderQuotas.Max))
{
using (var xmlWriter = XmlWriter.Create(outputPath))
{
xmlWriter.WriteNode(jsonReader, true);
}
}
However, the XML thereby produced is much less pretty than the XML generated by XmlNodeConverter. For instance, given the simple input JSON
{"Root":[{
"key":["a"],
"data": [1, 2]
}]}
XmlNodeConverter will create the following XML:
<json>
<Root>
<key>a</key>
<data>1</data>
<data>2</data>
</Root>
</json>
While JsonReaderWriterFactory will create the following (indented for clarity):
<root type="object">
<Root type="array">
<item type="object">
<key type="array">
<item type="string">a</item>
</key>
<data type="array">
<item type="number">1</item>
<item type="number">2</item>
</data>
</item>
</Root>
</root>
The exact format of the XML generated can be found in
Mapping Between JSON and XML.
Still, once you have valid XML, there are streaming XML-to-XML transformation solutions that will allow you to transform the generated XML to your final, desired format, including:
C# XSLT Transforming Large XML Files Quickly.
How to: Perform Streaming Transform of Large XML Documents (C#).
Combining the XmlReader and XmlWriter classes for simple streaming transformations.
Is it possible to do the other way?
Unfortunately
JsonReaderWriterFactory.CreateJsonWriter().WriteNode(xmlReader, true);
isn't really suited for conversion of arbitrary XML to JSON as it only allows for conversion of XML with the precise schema specified by Mapping Between JSON and XML.
Furthermore, when converting from arbitrary XML to JSON the problem of array recognition exists: JSON has arrays, XML doesn't, it only has repeating elements. To recognize repeating elements (or tuples of elements where identically named elements may not be adjacent) and convert them to JSON array(s) requires buffering either the XML input or the JSON output (or a complex two-pass algorithm). Mapping Between JSON and XML avoids the problem by requiring type="object" or type="array" attributes.
| {
"pile_set_name": "StackExchange"
} |
Q:
remove mirai virus on router
I need some help with removing the Mirai worm from my router. A few days ago my ISP was hit by cyberattacks which affected over 100,000 customers, who couldn't get access to the internet because it got shut down.
Now it shows that my ISP is CloudMosa in Saratoga, California, which is not correct. My ISP is the Post Office in the UK.
I have tried upgrading to the latest firmware version from the manufacturer's site, which is 2.00(AAJC.15)C0. I have also set the firewall to a high level to avoid the cyberattack and disabled UPnP, but the virus removes the latest firmware version and switches back to the old version V2.00(AAJC.15)O0. The router I have is a ZyXEL AMG1302-T10B.
I don't know what I'm supposed to do or how to remove it, as the virus keeps coming back. I'm scared to use the internet as it could steal my information, especially my bank details, username and password.
Do you know how to remove this nasty virus from the router?
A:
I agree that this does not seem to be Mirai, but it doesn't really matter what it is. The solution is the same no matter what.
If a firmware rewrite does not kill it, then just throw the router in the trash and get a new one. I know, it might cost you some money, but it is the only way you can be sure it has not somehow persisted on the device. Just consider the router as broken beyond repair.
| {
"pile_set_name": "StackExchange"
} |
Q:
Scroll div top on page load then set scroll to work normally
I have a chat system on my site and defaulted the message div position to always display the last message on page load. I accomplished this by using the following line of code:
msgDiv = document.getElementById('message_row_large');
msgDiv.scrollTop = msgDiv.scrollHeight;
However, this code sets the scroll position to be equal to the div height at all times, which doesn't allow users to scroll up and see other messages.
I need to re-enable scrolling to its default functionality after the page loads. Any help is welcome.
Thank you!
P.S. I am using ajax to load chat messages. When user clicks on a name on the left hand panel, the chat between him/her and the other person loads on the right hand panel.
A:
Try
$display = $('#message_row_large');
$display.animate({scrollTop: $display[0].scrollHeight }, 'fast');
Working Fiddle:
https://jsfiddle.net/cshanno/3bo48dxj/1/
| {
"pile_set_name": "StackExchange"
} |
Q:
Can someone explain how Salesforce works with CTI? (Avaya)
I have a client who has Salesforce and uses an Avaya switch to run their call center. They enter in random call information into this old legacy program called Omni and once a week they manually update Salesforce with the info that they gathered from calls that week.
I need to track these things:
Length of call in seconds
Audio recording to call (or at least a link to one)
Caller ID (like the auto-generated ID of a user, not necessarily their phone number)
Speed to answer (How long a person is on hold before they get an answer)
What I am confused about is this. I am pretty sure Avaya tracks these things. What I need is to have a way to have all of these things in Salesforce. I'd like to have fields in certain entities that store the things listed above.
So my question is, what is the role of a CTI connector in this process? How does Avaya save it's information for the CTI connector/Salesforce to access it? Maybe these are the wrong questions to ask so if anyone has any insight as to how this works I'd love to know.
A:
Salesforce doesn't provide any way to track these things out of the box; as a developer you have to build it. We have set up both Genesys and Avaya in our organization, and we have built a CTI adapter for Salesforce with Genesys.
Let me tell you a few things: Salesforce doesn't know your dialer. It's your responsibility how you implement it with the Open CTI API. Salesforce will just pop up the records which you invoke through the Open CTI API; the rest, how you want to handle the call flow, is up to you.
In our Genesys CTI adapter we track talk, ring, wrap, etc. in the adapter and later update the same in Salesforce via the API.
The Salesforce API will help you search for and load the customer record based on the phone number or other parameters, but it's your responsibility how you are going to track it.
| {
"pile_set_name": "StackExchange"
} |
Q:
align two words in a select option
I have the select below:
<select name="mySelect">
<option value="1">hello [test]</option>
<option value="2">hello2 [test2]</option>
<option value="3">hello33 [test33]</option>
<option value="4">hel [hel]</option>
</select>
How can I align the text inside the options to appear like this:
hello____[test]
hello2___[test2]
hello33__[test33]
hel_____ [hel]
I tried adding spaces with JavaScript but they are not visible. ( _ is space char above)
A:
The only way to do this would be to firstly set your select element to use a monospace font (that is, a font whose characters all have the same width), then to pad out your option text with the &nbsp; character entity:
select {
  font-family: monospace;
}
<select name="mySelect">
  <option value="1">hello&nbsp;&nbsp;&nbsp;&nbsp;[test]</option>
  <option value="2">hello2&nbsp;&nbsp;&nbsp;[test2]</option>
  <option value="3">hello33&nbsp;&nbsp;[test33]</option>
  <option value="4">hel&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[hel]</option>
</select>
As you can see this does make your markup quite ugly. It also forces you to use a font which probably isn't used site-wide and may be undesirable. I believe this is the only way you can accurately achieve this though, without replacing your select element completely with some JavaScript-controlled element structure.
Depending on what your hello or [test] texts are supposed to represent, the optgroup element may be what you're looking for:
<select name="mySelect">
<optgroup label="[test]">
<option value="1">hello</option>
</optgroup>
<optgroup label="[test2]">
<option value="2">hello2</option>
</optgroup>
<optgroup label="[test33]">
<option value="3">hello33</option>
</optgroup>
<optgroup label="[hel]">
<option value="4">hel</option>
</optgroup>
</select>
| {
"pile_set_name": "StackExchange"
} |
Q:
Translating BYTE Reserved1[24] to jsctypes
This is the MSDN defintion:
typedef struct _SYSTEM_BASIC_INFORMATION {
BYTE Reserved1[24];
PVOID Reserved2[4];
CCHAR NumberOfProcessors;
} SYSTEM_BASIC_INFORMATION;
This guy converted it to this in js-ctypes:
var SYSTEM_BASIC_INFORMATION = new ctypes.StructType("SYSTEM_BASIC_INFORMATION", [
{'Reserved': ctypes.unsigned_long},
{'TimerResolution': ctypes.unsigned_long},
{'PageSize': ctypes.unsigned_long},
{'NumberOfPhysicalPages': ctypes.unsigned_long},
{'LowestPhysicalPageNumber': ctypes.unsigned_long},
{'HighestPhysicalPageNumber': ctypes.unsigned_long},
{'AllocationGranularity': ctypes.unsigned_long},
{'MinimumUserModeAddress': ctypes.unsigned_long.ptr},
{'MaximumUserModeAddress': ctypes.unsigned_long.ptr},
{'ActiveProcessorsAffinityMask': ctypes.unsigned_long.ptr},
{'NumberOfProcessors': ctypes.char} ]); //CCHAR
I don't understand how he doesn't have 24 entries for BYTE Reserved1[24]; shouldn't he have something like:
{'Reserved1_1': BYTE},
{'Reserved1_2': BYTE},
{'Reserved1_3': BYTE},
{'Reserved1_4': BYTE},
....
{'Reserved1_24': BYTE},
A:
For various reasons Microsoft decides that some info should be kept away from developers, so the public header just declares that space as opaque reserved bytes. People then reverse engineer what those reserved fields actually hold and produce their own documentation; the js-ctypes struct above is one such reconstruction. On a 32-bit build the 24 reserved bytes line up with the first six 4-byte unsigned longs (Reserved through HighestPhysicalPageNumber), and the four PVOIDs cover the remaining fields (AllocationGranularity and the three pointer-sized ones), which is why there aren't 24 separate BYTE entries.
Sometimes people guess correctly. Sometimes Microsoft makes breaking changes, and people scream "How dare you!". And life goes on.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to get json data from javascript file
Help, I'm using Tumblr and connected my Twitter account. Tumblr gave me this example file: http://titan-theme.tumblr.com/tweets.js
My question is: can I get the follower_count and screen_name data? If yes, how do I get it?
Please help me.
Thanks.
A:
If you have only this file, you should define a *recent_tweets* function. Of course you need to include tweets.js. For example:
<script>
var recent_tweets = function(tweets) {
for(var i=0; i<tweets.length; i++) {
var tweet = tweets[i];
var followerCounts = tweet.user.followers_count;
var screenName = tweet.user.screen_name;
}
}
</script>
<script src="tweets.js"></script>
However, there is no follower_count field; the actual field (used above) is followers_count.
| {
"pile_set_name": "StackExchange"
} |
Q:
PWM flow control and thrust
If I have a water pump and a fast solenoid valve (which can be open or closed), and I turn on my pump at a constant voltage and apply pulse width modulation to my valve, would the resulting thrust created at the water output be constant, and can it be controlled nicely by the pulse width modulation process?
What would be the equations that could model the reaction of such a setup?
A:
It's not clear, but I think you are asking whether you can control your solenoid valve with pulses instead of steady DC.
Yes, above some frequency, the mechanical system actuated by the solenoid won't "see" individual pulses, just the average. Usually at a higher frequency, the solenoid itself will smooth out the pulses and maintain are more average steady current.
Electrically, this is basically creating a switching power supply that controls the current thru the solenoid, with the solenoid being the inductor of the switching power supply.
This is assuming your "solenoid nozzle" (do you really mean "valve"?) is intended for other than binary on/off operation, often called a proportional valve. Trying to drive a binary mechanism to in-between states may not end well. Constantly banging it between on and off may be even worse.
| {
"pile_set_name": "StackExchange"
} |
Q:
Does a concurrency exception happen to both users
If a user edits a data record and at the same time another user edits the same record too, and both save:
1.) Will the concurrency exception ALWAYS happen only for one user?
Actually it's logical that the first one wins, but who is the first in a technical sense... Is it possible that both users get this kind of exception?
2.) The one who was too late and now gets the concurrency exception, I guess he can access the
newly updated data record from the other user, yes?
A:
1) I think so yes. One will always be earlier than the other; there is no other way around it. So one update will work as normal, the other will throw the concurrency exception.
This might depend on the data access method you are using, there might be systems that can handle such situations more elegantly. But I doubt there are systems that will give both users the same exception without you building that behaviour on purpose.
As Adam Houldsworth says: this could also depend on the way you code it yourself. You could check for multiple users beginning to edit the same record, and then throw the exception to both. But I do not believe that is what you are actually asking. If so; I misunderstood.
2) Of course this is possible, but this is up to you to build in your application. Just catch the concurrency exception and refresh whatever edit form user B was trying to update. He/she can then try again. Generally speaking obviously; I do not know the specifics of your situation.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to properly chain promises inside of a forEach loop in Javascript
I'm using Mongo and need to do an async call for each item inside of a loop. I'd like to execute another command once all of the promises inside of the loop have completed, but so far the promises in the loop seem to be completing after the code that's in the .then() that comes after the loop.
Essentially I'd like the order to be
Loop promises
then
other code
instead of what it is now which is
other code
Loop promises
MongoClient.connect(connecturl)
.then((client) => {
databases.forEach((val) => {
val.collection.forEach((valcol) => {
client.db(val.databasename).stats() //(This is the async call)
.then((stats) => {
//Do stuff here with the stats of each collection
})
})
})
})
.then(() => {
//Do this stuff after everything is finished above this line
})
.catch((error) => {
})
Any assistance would be appreciated on this one.
A:
Assuming the things you are using .forEach() on are iterables (arrays or something like that), you can use async/await to serialize a for/of loop:
MongoClient.connect(connecturl).then(async (client) => {
for (let db of databases) {
for (let valcol of db.collection) {
let stats = await client.db(db.databasename).stats();
// Do stuff here with the stats of each collection
}
}
}).then(() => {
// Do this stuff after everything is finished above this line
}).catch((error) => {
// process error
})
If you wanted to stick with your .forEach() loops, you could make it all work if you did things in parallel and used Promise.all() to know when it's all done:
MongoClient.connect(connecturl).then((client) => {
let promises = [];
databases.forEach((val) => {
val.collection.forEach((valcol) => {
let p = client.db(val.databasename).stats().then((stats) => {
// Do stuff here with the stats of each collection
});
promises.push(p);
});
});
return Promise.all(promises);
}).then(() => {
// Do this stuff after everything is finished above this line
}).catch((error) => {
// process error here
});
| {
"pile_set_name": "StackExchange"
} |
Q:
Stripe: How to set up recurring payments without plan?
First time working with Stripe API. Implementing it on WordPress using PHP and JS.
Working on a donation form. Donor should be able to choose a suggested amount (radio buttons-25,50,75,100) or pay as he/she wishes (text field after selecting 'other'). I was able to get this working.
There is a check box to set the amount up as a recurring payment. I created recurring payment plans for the fixed options like 25, 50, 100 etc.
How do I set up a recurring payment if the donor chooses a custom amount? Can't find the relevant API. Please help.
A:
Another approach that Stripe suggests is to set up a plan with a recurring amount of $1 (or $0.01 for more flexibility) and then vary the quantity as needed.
e.g. Using the $0.01 plan approach, if I wanted to charge 12.50/month I could adjust the quantity like so:
$customer->subscriptions->create(array("plan" => "basic", "quantity" => "1250"));
Stripe Support
How can I create plans that don't have a fixed price?
Subscription Quantities
A:
First, you'll need to create a new customer.
On submit, you could use the custom amount to create a new plan:
$current_time = time();
$plan_name = strval( $current_time );
Stripe_Plan::create(array(
"amount" => $_POST['custom-amount'],
"interval" => "month",
"name" => "Some Plan Name " . $_POST['customer-name'],
"currency" => "usd",
"id" => $plan_name
)
);
Keep in mind that the 'id' needs to be unique. You could use the customer's name, a time stamp, or some other random method to ensure that this is always the case.
You'd then just create the subscription on the newly-added customer:
$customer = Stripe_Customer::retrieve($customer_just_created);
$customer->subscriptions->create(array("plan" => $plan_name));
You probably will be able to omit the first line above, as you should already have a customer variable assigned from when the customer was actually created.
| {
"pile_set_name": "StackExchange"
} |
Q:
grails 3.0.0 I can not create an application
I just downloaded Grails 3.0.0 (hoping to see my problems with CAS magically disappearing ;) )
I installed it under windows and then:
D:\GrailsProjects> grails -version
| Grails Version: 3.0.0
| Groovy Version: 2.4.3
| JVM Version: 1.7.0_51
and then:
D:\IntelliJProjects>grails create-app helloworld
| Error Command not found create-app
Did you mean: create-script or create-taglib or create-unit-test?
also clean and compile don't work
What am I missing?
A:
OK, that was trivial (and rather stupid).
In the "project directory", among other projects, there was a directory called grails-app, probably a leftover from some porting. This caused create-app to fail. After removing the directory, everything works fine.
| {
"pile_set_name": "StackExchange"
} |
Q:
Show that $\lim_{x\to \infty} \frac{f(x)}{g(x)}=1$
Consider $f(x)=\ln(\ln(\zeta(\exp(\exp(-x)))))$ and $g(x)=\ln(x),$ where $\zeta(x)=\sum n^{-x}$ for $\Re(x)>1.$
Show that $$\lim_{x\to \infty} \frac{f(x)}{g(x)}=1$$
I obtained $f(x)$ from $\zeta(x)$ after performing two consecutive log-log coordinate transforms on $\zeta(x).$ $f(x)$ appears to converge, quite quickly, to $\ln(x)$ as $x$ increases. I've numerically verified several values such as: $$f(2) \approx 0.69978 $$ $$g(2)\approx 0.69315 $$ $$ f(8)\approx 2.07944 $$ $$ g(8)\approx 2.07944$$
A:
Let $x=-\log(\log(t))$ so that
$$\log\left(\log\left(\zeta\left(e^{e^{-x}}\right)\right)\right)=\log\left(\log\left(\zeta\left(t\right)\right)\right)$$
and as $x\to \infty$, $t\to 1^+$.
Then, we wish to evaluate the limit
$$\lim_{t\to 1^+}\frac{\log\left(\log\left(\zeta\left(t\right)\right)\right)}{\log(-\log(\log(t)))}$$
Recalling that near $t=1^+$, we have $\zeta(t)=\frac1{t-1}+O(1)$ and $\frac1{\log(t)}=\frac1{t-1}+O(1)$, it is straightforward to see that the value of the limit of interest is $1$.
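To spell out that last step: since $\log(t)=(t-1)+O((t-1)^2)$ as $t\to 1^+$, we have
$$\log\left(\log\left(\zeta(t)\right)\right)=\log\left(-\log(t-1)+O(t-1)\right)\quad\text{and}\quad \log(-\log(\log(t)))=\log\left(-\log(t-1)+O(t-1)\right),$$
so the numerator and denominator agree up to vanishing corrections and their ratio tends to $1$.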
| {
"pile_set_name": "StackExchange"
} |
Q:
How to use two conditions in "where" clause in XQuery
I'm trying to extract only those <book> entries that have a certain <xref> type and match a list of specific xrefs, using XQuery (I'm new to this).
Here is the input data:
<book id="6636551">
<master_information>
<book_xref>
<xref type="Fiction" type_id="1">72771KAM3</xref>
<xref type="Non_Fiction" type_id="2">US72771KAM36</xref>
</book_xref>
</master_information>
</book>
<book id="119818569">
<master_information>
<book_xref>
<xref type="Fiction" type_id="1">070185UL5</xref>
<xref type="Non_Fiction" type_id="2">US070185UL50</xref>
</book_xref>
</master_information>
</book>
<book id="119818568">
<master_information>
<book_xref>
<xref type="Fiction" type_id="1">070185UK7</xref>
<xref type="Non_Fiction" type_id="2">US070185UK77</xref>
</book_xref>
</master_information>
</book>
<book id="119818567">
<master_information>
<book_xref>
<xref type="Fiction" type_id="1">070185UJ0</xref>
<xref type="Non_Fiction" type_id="2">US070185UJ05</xref>
</book_xref>
</master_information>
</book>
<book id="38085123">
<master_information>
<book_xref>
<xref type="Fiction" type_id="1">389646AV2</xref>
<xref type="Non_Fiction" type_id="2">US389646AV26</xref>
</book_xref>
</master_information>
</book>
XQuery that I'm using:
for $x in //book
where $x//xref/@type='Fiction'
and
$x//xref=('070185UL5','070185UJ0')
return $x
The above XQuery only fetches the first book, the one matching "070185UL5". I would expect it to fetch both. What is wrong? I appreciate your response.
A:
In the query
for $x in //book
where $x//xref/@type='Fiction'
and
$x//xref=('070185UL5','070185UJ0')
return $x
do you intend to say
(1) "there must be at least one xref whose @type is 'Fiction' and at least one xref whose value is '070185UL5' or'070185UJ0'"
or do you intend to say
(2) "there must be at least one xref whose @type is 'Fiction' and whose value is '070185UL5' or'070185UJ0'"
Currently you are saying (1). If you want to say (2) then the query should be
for $x in //book
where $x//xref[@type='Fiction' and .=('070185UL5','070185UJ0')]
return $x
which you can simplify to the XPath expression
//book[.//xref[@type='Fiction' and .=('070185UL5','070185UJ0')]]
With the data you have supplied the two queries give the same result, but with different data they could give different results.
| {
"pile_set_name": "StackExchange"
} |
Q:
How do I make human-readable (SEF) URLs with htaccess?
There are pages of the form site.com/?page=2.
How can I use htaccess to turn them into site.com/page/2?
A:
The RewriteCond is needed so that there is no infinite loop. The desired redirect code can be appended at the end of the RewriteRule after a space, for example [R=301]. If you plan to keep the other query-string parameters, put [R=301,QSA] there instead.
RewriteEngine On
RewriteCond %{QUERY_STRING} !page=
RewriteRule page/(.*)$ /?page=$1
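For example, with an external 301 redirect and query-string merging, that last line would become:
RewriteRule page/(.*)$ /?page=$1 [R=301,QSA]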
| {
"pile_set_name": "StackExchange"
} |
Q:
postgresql "createdb" and "CREATE DATABASE" yield a non-empty database. what the fork?
First of all, I apologize if this question turns out to be painfully obvious, I'm not that postgres-savvy beyond the basics. I use postgresql as a database backend for quite a few django projects that I'm working on, and that's always worked just fine for me. Recently, I set up postgresql on a new machine, and at one point a co worker tried setting up a new project on that machine. Unfortunately, it's too late to go back into the bash history to figure out what he did, and he won't be available for a while to ask him about it. The issue i'm having now is...
I regularly reset postgres databases by simply using a dropdb/createdb command. I've noticed that whenever I run the dropdb command, the database does disappear, but when I run the createdb command next, the resulting database is not empty. It contains tables, and those tables do contain data (which appears to be dummy data from the other project). I realise that i'm a bit of a postgres noob, but is this in some way related to template features in postgres? I don't specify anything like that on the command line, and I'm seeing the exact same results if I drop/create from the psql console.
By the way, I can still wipe the db by dropping and recreating the "public" schema in the database. I'll be glad to add any info necessary to help figure this out, but to be honest I haven't a clue what to look for at this point. Any help would be much appreciated.
A:
Summarizing from the docs: template0 is essentially a clean, virgin system database, whereas template1 serves as a blueprint for any new database created with the createdb command or CREATE DATABASE from a psql prompt (there is no effective difference between the two).
It is probable that you have some tables lurking in template1, which is why they keep reappearing on createdb. You can solve this by dropping template1 and recreating it from template0.
createdb -T template0 template1
The template1 database can be extremely useful. I use Postgis a lot, so I have all of the functions and tables related to that installed in template1, so any new database I create is immediately spatially enabled.
EDIT: As noted in the docs, but worth emphasizing, to drop template1 you need to have pg_database.datistemplate = false set for it.
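Putting the whole sequence together, a minimal sketch would be (run as a superuser while connected to another database such as postgres):
UPDATE pg_database SET datistemplate = false WHERE datname = 'template1';
DROP DATABASE template1;
CREATE DATABASE template1 TEMPLATE template0;
UPDATE pg_database SET datistemplate = true WHERE datname = 'template1';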
| {
"pile_set_name": "StackExchange"
} |
Q:
Left click only on Dijit MenuItem
using the basic tutorial here as example:
https://dojotoolkit.org/documentation/tutorials/1.10/menus/demo/simpleProgMenu.html
I've noticed that there's no (obvious) way to differentiate between left and right clicks. I'd like right click to do nothing, but left click to call the onClick() on the menuitem.
Inspecting the contents of the event parameter passed to the onClick function, there doesn't appear to be anything telling me which mouse button was clicked.
Is there a way to achieve this?
A:
If you want right click to do nothing, you don't have to do anything special. If you want to handle right clicks you can use the dojo/mouse module and its mouseButtons object.
An example from the documentation:
require(["dojo/mouse", "dojo/on", "dojo/dom"], function(mouse, on, dom){
on(dom.byId("someid"), "click", function(evt){
if (mouse.isLeft(evt)){
// handle mouse left click
}else if (mouse.isRight(evt)){
// handle mouse right click
}
});
});
| {
"pile_set_name": "StackExchange"
} |
Q:
RenderWindow not working across multiple functions
I'm new to SFML, and have been watching a tutorial that puts everything in a single main function. When making my own program, I tried to split it into multiple functions, but it isn't working properly. Can anyone explain why this works:
#include <SFML/Graphics.hpp>
#include <iostream>
int main()
{
sf::RenderWindow window(sf::VideoMode(512, 512), "window", sf::Style::Resize | sf::Style::Close);
while (window.isOpen())
{
sf::Event evnt;
while (window.pollEvent(evnt))
{
if (evnt.type == evnt.Closed)
{
window.close();
}
}
window.clear();
window.display();
}
return 0;
}
and this doesn't:
#include <SFML/Graphics.hpp>
#include <iostream>
sf::RenderWindow window;
void setup()
{
sf::RenderWindow window(sf::VideoMode(512, 512), "window", sf::Style::Resize | sf::Style::Close);
}
int main()
{
setup();
while (window.isOpen())
{
sf::Event evnt;
while (window.pollEvent(evnt))
{
if (evnt.type == evnt.Closed)
{
window.close();
}
}
window.clear();
window.display();
}
return 0;
}
They will both compile and run, but in the former, the window will stay open, and in the latter, it won't.
A:
The window variable that you've declared inside setup() is shadowing the global window object: the local window is created and then destroyed as soon as setup() returns, while the global window is never opened, so window.isOpen() in main() is false and the loop exits immediately. Try the following, which opens the global window instead:
void setup()
{
window.create(sf::VideoMode(512, 512), "window", sf::Style::Resize | sf::Style::Close);
}
| {
"pile_set_name": "StackExchange"
} |
Q:
Verifying Error after ADT Update
I have a project that worked for months, but I updated the ADT Plugin a few days ago and today all of a sudden the project stopped working. If I try to run it on the device, it throws a VerifyError as soon as it is started.
(the xxx is a replacement for the actual project name because I'm not allowed to publish it)
These kinds of errors repeat themselves, so I just post one. The main VerifyError itself is useless since it just points at the main starting Activity.
05-03 18:06:59.898: I/dalvikvm(26640): Could not find method org.osmdroid.views.MapView.enableScroll, referenced from method com.xxx.activities.MainAc.disableSwipe
05-03 18:06:59.898: D/dalvikvm(26640): VFY: replacing opcode 0x6e at 0x0005
05-03 18:06:59.898: D/dalvikvm(26640): VFY: dead code 0x0008-0010 in Lcom/xxx/activities/MainAc;.disableSwipe ()V
05-03 18:06:59.898: W/dalvikvm(26640): VFY: unable to find class referenced in signature (Lorg/osmdroid/util/GeoPoint;)
05-03 18:06:59.898: E/dalvikvm(26640): Could not find class 'org.osmdroid.util.GeoPoint', referenced from method com.xxx.activities.MainAc.displayPointNavigation
05-03 18:06:59.908: W/dalvikvm(26640): VFY: unable to resolve new-instance 575 (Lorg/osmdroid/util/GeoPoint;) in Lcom/xxx/activities/MainAc;
05-03 18:06:59.908: D/dalvikvm(26640): VFY: replacing opcode 0x22 at 0x0018
05-03 18:06:59.908: D/dalvikvm(26640): VFY: dead code 0x001a-0093 in Lcom/xxx/activities/MainAc;.displayPointNavigation (Lorg/osmdroid/util/GeoPoint;)V
05-03 18:06:59.908: W/dalvikvm(26640): Unable to resolve superclass of Lcom/xxx/overlay/MyUpmoveLocationOverlay; (584)
I think the problem is the way I included the osmdroid lib. Since I have to change a lot of osmdroid code, I didn't want to build a jar all the time, so I created a Java Project from the osmdroid source and added the osmdroid Project to my Project's Classpath.
Until now this setup worked like a charm. I thought that I may have changed something myself that caused the VerifyError, so I reverted the project to a revision where it definitely was working, but I get the same error.
Are there some settings I have to change to make this work again or do I have to redo the whole setup and include the osmdroid as a jar?
A:
Make OSMDroid be an Android library project and add it as a library to your main application project.
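Roughly, with the old ADT/Ant project files that means something like this (the relative path is just an assumption; point it at wherever your osmdroid project lives):
# project.properties of the osmdroid project
android.library=true
# project.properties of your main application project
android.library.reference.1=../osmdroid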
| {
"pile_set_name": "StackExchange"
} |
Q:
Why do I get an error from this one level deep OTP tree when I start the child?
Now when I do the following I get an error status from start_child:
{_, pid} = NTree.start_link
{_, cid} = Supervisor.start_child(pid, [])
# {:error, #<PID.0.91.0>}
Why is this? I'm able to send a message like "hello world" to it and get back a printed message via IO.inspect. This means the process is running as far as I can tell.
defmodule NTree do
# this will be the top line supervisor
use Supervisor
def start_link, do: Supervisor.start_link(__MODULE__, :ok, name: __MODULE__)
def init(:ok) do
children = [
worker(TreeWorker, [], restart: :temporary)
]
supervise(children, strategy: :simple_one_for_one)
# {:ok,
# {{:simple_one_for_one, 3, 5},
# [{TreeWorker, {TreeWorker, :start_link, []}, :temporary, 5000, :worker,
# [TreeWorker]}]}}
end
def start_worker(supervisor) do
persister = Supervisor.start_child(supervisor, [])
end
end
defmodule TreeWorker do
def start_link do
spawn(fn -> loop end)
end
def loop do
receive do
:stop -> :ok
msg ->
IO.inspect msg
loop
end
end
end
Process.info(cid) yields:
[current_function: {TreeWorker, :loop, 0}, initial_call: {:erlang, :apply, 2},
status: :waiting, message_queue_len: 0, messages: [], links: [],
dictionary: [], trap_exit: false, error_handler: :error_handler,
priority: :normal, group_leader: #PID<0.26.0>, total_heap_size: 233,
heap_size: 233, stack_size: 1, reductions: 141,
garbage_collection: [min_bin_vheap_size: 46422, min_heap_size: 233,
fullsweep_after: 65535, minor_gcs: 0], suspending: []]
A:
A worker's start function must return {:ok, pid} on success. TreeWorker.start_link/0 returns only pid. You can fix this by returning {:ok, pid}:
defmodule TreeWorker do
def start_link do
{:ok, spawn(fn -> loop end)}
end
...
end
iex(1)> NTree.start_link
{:ok, #PID<0.89.0>}
iex(2)> NTree.start_worker(NTree)
{:ok, #PID<0.91.0>}
| {
"pile_set_name": "StackExchange"
} |
Q:
Why does = NULL not work in Oracle?
I recently realized that we need to use a special syntax IS NULL to compare a literal to NULL.
Why does = NULL not work here?
A:
Take a look at PSOUG's notes on NULL. As Fabricio Araujo hinted, NULL is not really a value like the number 4 or string 'bacon strips'. In fact, NULL is untyped in the SQL language, which is why you cannot validly use it in an equality comparison. You need the special IS [NOT] NULL syntax to check if a value is NULL or not.
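To see the difference in practice, here is an illustrative sketch (the table and column names are made up):
-- '= NULL' evaluates to UNKNOWN for every row, so this returns no rows at all
SELECT * FROM employees WHERE commission = NULL;
-- IS NULL is the correct way to find the rows whose commission is missing
SELECT * FROM employees WHERE commission IS NULL;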
A:
In SQL Server, we have a connection setting (SET ANSI_NULLS OFF) to get = NULL to behave like IS NULL, but in the latest versions it is not recommended anymore - it's even marked as deprecated.
The recommended approach is the SQL Standard way - the IS [NOT] NULL operator.
(And I will not start a war over whether 'NULL is a value or a status' here)... hehehe
| {
"pile_set_name": "StackExchange"
} |
Q:
Trying to apply Ternary Operator on JSON Data with React
I am trying to apply a Ternary operator to some JSON Data which is held in a separate file locally. Below is the JSON:
[
{
"id": 1,
"company": "Photosnap",
"logo": "./images/photosnap.svg",
"new": true,
"featured": true,
"position": "Senior Frontend Developer",
"role": "Frontend",
"level": "Senior",
"postedAt": "1d ago",
"contract": "Full Time",
"location": "USA Only",
"languages": ["HTML", "CSS", "JavaScript"]
},
{
"id": 2,
"company": "Manage",
"logo": "./images/manage.svg",
"new": true,
"featured": true,
"position": "Fullstack Developer",
"role": "Fullstack",
"level": "Midweight",
"postedAt": "1d ago",
"contract": "Part Time",
"location": "Remote",
"languages": ["Python"],
"tools": ["React"]
},
{
"id": 3,
"company": "Account",
"logo": "./images/account.svg",
"new": true,
"featured": false,
"position": "Junior Frontend Developer",
"role": "Frontend",
"level": "Junior",
"postedAt": "2d ago",
"contract": "Part Time",
"location": "USA Only",
"languages": ["JavaScript"],
"tools": ["React"
Now the issue I have is that I conditionally want to show a button depending on whether "new" is true. The same goes for the Featured button.
So I have written a Ternary Operator in my Component.
import React from 'react';
import './job-card.styles.css';
const JobCard = ({company, position, postedAt, contract, location, logo, featured, newJob }) => (
<div className="container">
<div className='card'>
<div className='companyName'>
<img src={logo} alt="logo" width="100" height="100"></img>
</div>
<div className='content'>
{{newJob} ? <button className='myButton'>New!</button> : null }
{{featured} ? <button className='myDarkButton'>Featured</button> : null }
<h2>{company}</h2>
<h1>{position}</h1>
<div className='details'>
<h3>{postedAt} ·</h3>
<h3>{contract} ·</h3>
<h3>{location}</h3>
</div>
</div>
</div>
</div>
)
export default JobCard;
This is just a card component and feeds into another component which displays all the cards.
import React from 'react';
import './job-listing.styles.css';
import JobCard from '../job-card/job-card.component.jsx/job-card.component';
import { Component } from 'react';
class JobListing extends Component {
constructor() {
super();
this.state = {
jobs: []
}
};
componentDidMount() {
fetch('/data.json')
.then(response => response.json())
.then(data => this.setState({jobs: data}))
}
render() {
return (
<div>
{this.state.jobs.map(({id, ...otherJobProps}) =>(
<JobCard key={id} {...otherJobProps} />
))}
</div>
)
}
}
export default JobListing;
The output I am getting is that they are all rendering as true, even though some of the new or featured values are false in the JSON data. Not sure what I have missed. Any help would be appreciated.
A:
The problem is the inner {}.
{{newJob} ? <button className='myButton'>New!</button> : null }
// ^ here
Within JSX, {} denotes a javascript expression. But once you are within an expression, {} goes back to being normal object syntax. This is throwing off your ternary because you're checking whether an object with key newJob is truthy. Simply removing the brackets would fix it:
{newJob ? <button className='myButton'>New!</button> : null }
Regarding the new issue
I prefer not to destructure props like this, but to get it working most like you already have, destructure the new reserved word into an alias. Here is a simple proof of concept:
let test = [{ new: true }, { new: false }];
test.map(({new: isNew}) => console.log(isNew))
I would prefer to keep the data structured as is, but that's just a preference. It would also avoid the reserved word issue.
let test = [{ new: true }, { new: false }];
test.map((value) => console.log(value.new))
| {
"pile_set_name": "StackExchange"
} |
Q:
queue a azure pipeline yaml stage to execute at specific datetime
We have a multi-stage release pipeline which targets all environments, like dev -> int -> qa -> prod staging slot.
For the final swapping of the slot, we have a requirement to run at a specified datetime during non-business hours.
How can we delay a specific stage of a multi-stage YAML pipeline so that it runs at a certain datetime?
A:
Though I agree with the idea from Hany, the link he shared is about the classic Release, which is configured with the UI. It is not suitable for your multi-stage YAML pipeline.
Since what you are using is a multi-stage YAML pipeline, you can check the sample below to configure the corresponding schedule trigger in your YAML.
For example, here is a schedule which makes the YAML pipeline run on Sunday, weekly:
schedules:
- cron: "0 12 * * 0"
  displayName: Build on Sunday weekly
  branches:
    include:
    - releases/*
  always: true
For 0 12 * * 0, it is following the syntax of:
mm HH DD MM DW
 \  \  \  \  \__ Days of week
  \  \  \  \____ Months
   \  \  \______ Days
    \  \________ Hours
     \__________ Minutes
I saw you said you want this pipeline to run during non-business hours, so you can focus on the last field, DW (days of week). Its available values are 0~6, starting with Sunday. Or you can write the day name instead, like Sun:
"0 12 * * Sun"
Check this doc for more details.
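For instance, a schedule aimed at weekday non-business hours could look like this (just a sketch; cron times are in UTC, so adjust to your own off-hours and branches):
schedules:
- cron: "0 22 * * 1-5"
  displayName: Nightly run on weekdays at 22:00 UTC
  branches:
    include:
    - releases/*
  always: true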
| {
"pile_set_name": "StackExchange"
} |
Q:
Your PHP Version Will Be Unsupported in Joomla! 3.3
I am using PHP Version 7.0.12 and Joomla Version 3.7.3.
And I am facing this message "Your PHP Version Will Be Unsupported in Joomla! 3.3"
As per Joomla 3.3 System requirement:
Starting with Joomla! 3.3, the minimum required PHP version is being
raised to PHP 5.3.10 or later!
But I am already using a later PHP version, 7.0.12. Why am I still getting this message? Is there anything else I have to enable, or anything I missed? Please advise...
Thanks!
A:
It's a bug that will be fixed in the next release - nothing to worry about.
| {
"pile_set_name": "StackExchange"
} |
Q:
Is there a way to style an element such that it and all its contents will be rendered 50% faded?
I'm working on a userscript that adds a missing feature to a 3rd party website over which I have no control.
It will compute URLs based on the page and the results of some webAPI calls. These URLs will be added into the HTML of the page.
Then for any URL rendered as :visited I wish to set the li element that it is a part of to be "faded" by manipulating the styles.
That's probably more specific detail of my project than necessary to describe my problem, sorry.
The only part I'm not sure how to achieve is how to get the li element (as well as all rendered within it) to be rendered as half-faded into the background colour.
I'm guessing there is probably standard way to do this with modern HTML and CSS.
I'm working with latest stable Chrome and do not need to support old browsers.
A:
Is there a way to style an element such that it and all its contents will be rendered 50% faded?
Opacity
yourelement {
opacity:0.5;
}
This will affect the element and everything rendered inside it: opacity is not an inherited property, but it applies to the element's whole rendering subtree, so all of the children end up faded too.
Note however, that if the link is :visited the opacity will have to be set by javascript as there is no CSS "Parent Selector".
| {
"pile_set_name": "StackExchange"
} |
Q:
A major incident was declared
A major incident was declared when winds caused two fires to merge near communication masts on Saturday.
From BBC.com
How should we interpret the main clause, that a major incident was declared? Does it mean that when the winds caused the fires, somebody declared an incident at that time, or is the major incident here the fire incident itself?
https://www.bbc.co.uk/news/uk-england-lancashire-44676707
A:
It is part of the procedure of the Fire Brigade.
A small fire will be put out by the local fire brigade. But if the fire becomes more serious the local fire chief will decide to call it a "major incident" and get external help and support.
In this case, there were two small fires. The winds changed and the two small fires joined up to become one large fire. This fire was near some "communication masts". The local fire chief realised that his firefighters couldn't put out the fire alone, so he called the fire a "major incident" and got help from other firefighters.
As the reporter doesn't know who the fire chief is, the reporter uses the passive voice.
| {
"pile_set_name": "StackExchange"
} |
Q:
Forest trees fitting on page
My forest looks like this, but it does not fit on a page; this is my first time using this package. I am looking for suggestions to fit it onto a page. Hopefully they will not be too intensive, because I plan on making an even bigger tree.
\documentclass{article}
\usepackage{forest}
\begin{document}
\begin{forest}
for tree={circle,draw, l sep=20pt}
[1,red
[2, edge label={node[midway,left] {A}}
[1,red,edge label={node[midway,left] {B}}
[2,red,edge label={node[midway,right] {C}}]
[2,red,edge label={node[midway,right] {D}}]
]
[1,red,edge label={node[midway,right] {C}}
[2,red,edge label={node[midway,right] {B}}]
[2,red,edge label={node[midway,right] {D}}]
]
[1,red,edge label={node[midway,right] {D}}
[2,red,edge label={node[midway,right] {B}}]
[2,red,edge label={node[midway,right] {C}}]
]
]
[2, edge label={node[midway,left] {B}}
[1,red, edge label={node[midway,left] {A}}
[2,red,edge label={node[midway,right] {C}}]
[2,red,edge label={node[midway,right] {D}}]
]
[1,red, edge label={node[midway,right] {A}}
[2,red,edge label={node[midway,right] {C}}]
[2,red,edge label={node[midway,right] {D}}]
]
[1,red,edge label={node[midway,right] {A}}
[2,red,edge label={node[midway,right] {C}}]
[2,red,edge label={node[midway,right] {D}}]
]
]
[2, edge label={node[midway,left] {C}}
[1,red,edge label={node[midway,left] {A}}
[2,red,edge label={node[midway,right] {B}}]
[2,red,edge label={node[midway,right] {D}}]
]
[1,red,edge label={node[midway,right] {B}}
[2,red,edge label={node[midway,right] {A}}]
[2,red,edge label={node[midway,right] {D}}]
]
[1,red,edge label={node[midway,right] {D}}
[2,red,edge label={node[midway,right] {A}}]
[2,red,edge label={node[midway,right] {B}}]
]
]
[2, edge label={node[midway,left] {D}}
[1,red,edge label={node[midway,left] {A}}
[2,red,edge label={node[midway,right] {B}}]
[2,red,edge label={node[midway,right] {C}}]
]
[1,red,edge label={node[midway,right] {B}}
[2,red,edge label={node[midway,right] {A}}]
[2,red,edge label={node[midway,right] {C}}]
]
[1,red,edge label={node[midway,right] {C}}
[2,red,edge label={node[midway,right] {A}}]
[2,red,edge label={node[midway,right] {B}}]
]
]
]
\end{forest}
\end{document}
A:
I would:
use geometry to get more sensible margins;
move some branches of the tree down to conserve space;
use squared edges to avoid branches crossing things and to reduce crowding and clutter;
avoid putting labels where edges would be drawn through them, by repositioning them slightly for greater legibility;
use a style to simplify adding the edge labels, which allows their positions to be amended more easily and determined more consistently (and saves typing) e.g. my label in the example below;
consider adding colour automatically for trees where there is a pattern e.g. all final nodes are a different colour or all left-hand nodes or whatever (but this is just to save typing and clearly a matter of preference);
use pdflscape for larger trees (not needed for this one).
Here's an example:
\documentclass{article}
\usepackage{geometry}
\usepackage[edges]{forest}
\begin{document}
\noindent
\begin{forest}
my label/.style={%
if n=1{%
edge label={node [midway,left] {#1}}
}{%
if n'=1{%
edge label={node [midway,right] {#1}}
}{%
edge label={node [midway,below right] {#1}}
}
},
},
for tree={circle,draw, l sep=20pt},
before typesetting nodes={
where content={}{coordinate}{},
},
forked edges,
[1,red
[2, my label={A}
[1,red,my label={B}
[2,red,my label={C}]
[2,red,my label={D}]
]
[1,red,my label={C}
[2,red,my label={B}]
[2,red,my label={D}]
]
[1,red,my label={D}
[2,red,my label={B}]
[2,red,my label={C}, tier=this]
]
]
[, tier=this, my label={B}
[2
[1,red, my label={A}
[2,red,my label={C}]
[2,red,my label={D}]
]
[1,red, my label={A}
[2,red,my label={C}]
[2,red,my label={D}]
]
[1,red,my label={A}
[2,red,my label={C}]
[2,red,my label={D}]
]
]]
[2, my label={C}
[1,red,my label={A}
[2,red,my label={B}]
[2,red,my label={D}]
]
[1,red,my label={B}
[2,red,my label={A}]
[2,red,my label={D}]
]
[1,red,my label={D}
[2,red,my label={A}]
[2,red,my label={B}, tier=this]
]
]
[, tier=this, my label={D}
[2
[1,red,my label={A}
[2,red,my label={B}]
[2,red,my label={C}]
]
[1,red,my label={B}
[2,red,my label={A}]
[2,red,my label={C}]
]
[1,red,my label={C}
[2,red,my label={A}]
[2,red,my label={B}]
]
]]
]
\end{forest}
\end{document}
| {
"pile_set_name": "StackExchange"
} |
Q:
Are there any good tools to generate a Google Sitemap?
Can you recommend any tools? Should we build our own? Should we create the sitemap manually?
A:
The Google Sitemap Generator for IIS generates a sitemap based on actual HTTP requests to your server (unlike other sitemap generators that rely on a crawlable path from the homepage, Google's approach doesn't actually crawl your site).
It is uniquely suited to dynamic applications, particularly those that have a deep bank of data that's surfaced through user queries alone.
| {
"pile_set_name": "StackExchange"
} |
Q:
Ng-repeat-start in angular2 - aka repeat multiple elements using NgFor
I need to repeat several li-elements in a list in Angular2 for each item. In angular 1.x I used ng-repeat-start and ng-repeat-end for this. I can't find the right way to do it in Angular 2. There are some older blog posts about this, but their suggestions don't work in the newest beta of Angular2.
All <li>-elements should be repeated for each category:
(which I would normally do with the attribute *ngFor="#category of categories", but I can't find where to put it...)
Help?
<ul class="dropdown-menu" role="menu">
<li class="dropdown-header">
{{ category.title }}
</li>
<li>
<a href="{{ '/music/' + tag.keyword }}" *ngFor="#tag of category.tags" [hidden]="tag.deleted === 1">{{ tag.tag_title_da }}</a>
</li>
<li class="divider"></li>
<li class="dropdown-header">Alle musikstykker</li>
<li><a href="/music/all">Alle musikstykker</a></li>
</ul>
A:
If you want to repeat the contents, use the template tag, and remove the * prefix on ngFor.
According to Victor Savkin on ngFor and templates:
Angular treats template elements in a special way. They are used to
create views, chunks of DOM you can dynamically manipulate. The *
syntax is a shortcut that lets you avoid writing the whole <template> element.
<ul class="dropdown-menu" role="menu">
<template ngFor #category [ngForOf]="categories">
<li class="dropdown-header">
{{ category.title }}
</li>
<li>
<a href="{{ '/music/' + tag.keyword }}" *ngFor="#tag of category.tags" [hidden]="tag.deleted === 1">{{ tag.tag_title_da }}</a>
</li>
<li class="divider"></li>
<li class="dropdown-header">Alle musikstykker</li>
<li><a href="/music/all">Alle musikstykker</a></li>
</template>
</ul>
Update angular ^2.0.0
You can use ng-container and just change #var to let var.
<ng-container> behaves the same as the <template> but allows to use the more common syntax.
<ul class="dropdown-menu" role="menu">
<ng-container *ngFor="let category of categories">
<li class="dropdown-header">
{{ category.title }}
</li>
<li>
<a href="{{ '/music/' + tag.keyword }}" *ngFor="let tag of category.tags" [hidden]="tag.deleted === 1">{{ tag.tag_title_da }}</a>
</li>
<li class="divider"></li>
<li class="dropdown-header">Alle musikstykker</li>
<li><a href="/music/all">Alle musikstykker</a></li>
</ng-container>
</ul>
A:
In the newer versions it works like this:
<ul class="dropdown-menu" role="menu">
<template ngFor let-category [ngForOf]="categories">
<li class="dropdown-header">
{{ category.title }}
</li>
<li>
<a href="{{ '/music/' + tag.keyword }}" *ngFor="#tag of category.tags" [hidden]="tag.deleted === 1">{{ tag.tag_title_da }}</a>
</li>
<li class="divider"></li>
<li class="dropdown-header">Alle musikstykker</li>
<li><a href="/music/all">Alle musikstykker</a></li>
</template>
</ul>
--> let-category instead of #category
| {
"pile_set_name": "StackExchange"
} |
Q:
error rgding definition & no extension method for System.Web.Routing.RouteValueDictionary
I am going through a tutorial at the 4GuysFromRolla website regarding Sorting and Paging a Grid of Data in ASP.NET MVC 2 by Scott Mitchell. I am receiving an error: CS1061: 'System.Web.Routing.RouteValueDictionary' does not contain a definition for 'AddQueryStringParameters' and no extension method 'AddQueryStringParameters' accepting a first argument of type 'System.Web.Routing.RouteValueDictionary' could be found (are you missing a using directive or an assembly reference?). I am not sure if I need to add a dll reference or something else. Could someone please advise how to solve this? Thanks in advance. Also, I downloaded the demo and there is no problem there. The error is in the PagerLink.ascx file: routeData.AddQueryStringParameters(); // error pointing here
RouteValueDictionaryExtensions.cs looks like this (this is the helper file)...
using System.Web.Routing;
namespace Web
{
public static class RouteValueDictionaryExtensions
{
public static RouteValueDictionary
AddQueryStringParameters(this RouteValueDictionary dict)
{
var querystring = HttpContext.Current.Request.QueryString;
foreach (var key in querystring.AllKeys)
if (!dict.ContainsKey(key))
dict.Add(key, querystring.GetValues(key)[0]);
return dict;
}
public static RouteValueDictionary ExceptFor(this RouteValueDictionary
dict, params string[] keysToRemove)
{
foreach (var key in keysToRemove)
if (dict.ContainsKey(key))
dict.Remove(key);
return dict;
}
}
}
Global.asax.cs looks like this...
namespace GridDemosMVC
{
// Note: For instructions on enabling IIS6 or IIS7 classic mode,
// visit http://go.microsoft.com/?LinkId=9394801
public class MvcApplication : System.Web.HttpApplication
{
public static void RegisterRoutes(RouteCollection routes)
{
routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
routes.MapRoute(
"Default", // Route name
"{controller}/{action}/{id}", // URL with parameters
new { controller = "Home", action = "Index", id =
UrlParameter.Optional } // Parameter defaults
);
}
protected void Application_Start()
{
AreaRegistration.RegisterAllAreas();
RegisterRoutes(RouteTable.Routes);
}
}
}
I am also using the Dynamic.cs file, which is available to download from Microsoft.
A:
You need to add a using statement and <%@ Import directive for the namespace with the extension method.
Alternatively, you can move the extension method into your project's namespace.
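For example, assuming the helper really does live in the Web namespace shown above (adjust the name if yours differs), a minimal sketch would be:
// in any .cs file that calls the extension method
using Web;   // brings RouteValueDictionaryExtensions into scope
and in PagerLink.ascx (or any other view that uses it):
<%@ Import Namespace="Web" %>
After that, routeData.AddQueryStringParameters() should resolve as an extension method on RouteValueDictionary. Note that RouteValueDictionaryExtensions.cs itself also needs using System.Web; for HttpContext to compile.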
| {
"pile_set_name": "StackExchange"
} |
Q:
HQL with parameters NoSuchMethodError
I am sure I am overlooking something obvious
the following static query works fine
hqlQuery = "select user from User as user where user.id = 'userid' ";
but when I parametrize the query
hqlQuery = "select user from User as user where user.id = :me ";
Query query = session.createQuery(hqlQuery);
I get a nasty stack dump from building the query.
What am I overlooking?
Exception in thread "main" java.lang.NoSuchMethodError: antlr.collections.AST.getLine()I
at org.hibernate.hql.ast.HqlSqlWalker.generateNamedParameter(HqlSqlWalker.java:940)
at org.hibernate.hql.antlr.HqlSqlBaseWalker.parameter(HqlSqlBaseWalker.java:4997)
at org.hibernate.hql.antlr.HqlSqlBaseWalker.expr(HqlSqlBaseWalker.java:1413)
at org.hibernate.hql.antlr.HqlSqlBaseWalker.exprOrSubquery(HqlSqlBaseWalker.java:4471)
at org.hibernate.hql.antlr.HqlSqlBaseWalker.comparisonExpr(HqlSqlBaseWalker.java:3947)
at org.hibernate.hql.antlr.HqlSqlBaseWalker.logicalExpr(HqlSqlBaseWalker.java:2047)
at org.hibernate.hql.antlr.HqlSqlBaseWalker.whereClause(HqlSqlBaseWalker.java:831)
at org.hibernate.hql.antlr.HqlSqlBaseWalker.query(HqlSqlBaseWalker.java:617)
at org.hibernate.hql.antlr.HqlSqlBaseWalker.selectStatement(HqlSqlBaseWalker.java:301)
at org.hibernate.hql.antlr.HqlSqlBaseWalker.statement(HqlSqlBaseWalker.java:244)
at org.hibernate.hql.ast.QueryTranslatorImpl.analyze(QueryTranslatorImpl.java:254)
at org.hibernate.hql.ast.QueryTranslatorImpl.doCompile(QueryTranslatorImpl.java:185)
at org.hibernate.hql.ast.QueryTranslatorImpl.compile(QueryTranslatorImpl.java:136)
at org.hibernate.engine.query.HQLQueryPlan.<init>(HQLQueryPlan.java:101)
at org.hibernate.engine.query.HQLQueryPlan.<init>(HQLQueryPlan.java:80)
at org.hibernate.engine.query.QueryPlanCache.getHQLQueryPlan(QueryPlanCache.java:124)
at org.hibernate.impl.AbstractSessionImpl.getHQLQueryPlan(AbstractSessionImpl.java:156)
at org.hibernate.impl.AbstractSessionImpl.createQuery(AbstractSessionImpl.java:135)
at org.hibernate.impl.SessionImpl.createQuery(SessionImpl.java:1770)
A:
The problem here is that there is a conflict between two ANTLR jar files (namely antlr-2.7.6.jar from the Hibernate library and antlr-2.7.2.jar from Struts 1.3) in your project. This appears to be a peculiar problem with Struts 1.3 & Hibernate applications.
Please remove antlr-2.7.2.jar from your project (/WEB-INF/lib folder) and it should work fine. Let me know if it works.
A:
Looks like you mixed incompatible versions of Hibernate jars (probably ANTLR jar has a wrong version).
| {
"pile_set_name": "StackExchange"
} |
Q:
Powered rail on a slope powering glitch?
I have placed a powered rail on a slope with a redstone torch underneath it as shown in the following two pictures.
Now as you can see, the powered rail is in fact unpowered, despite the redstone torch underneath it.
However, if I place a redstone torch next to the powered rail it is powered, but when I remove it, it continues to remain powered, as it should have done in the first place.
Is this a glitch? Or some behaviour of redstone/minecart tracks that I am unaware of?
A:
This is a glitch relating to the updating of blocks.
In short, the issue is that when the powered rail is placed, it does not check to see if the block it is on is already powered, but keeps its state until a nearby block updates, such as might happen when you place a redstone torch that would power it.
When that redstone torch is removed again, the track checks to see if it still powered, and, realizing that it is, stays on. (c.f. the glitch that would give free power to tracks placed on a slope when one was removed from a chain of powered rails).
Anything that causes the track to be updated should make the track powered; such as placing the tracks sequentially from bottom to top, meaning that the initial state of the powered rail will be flat: When the next rail is then placed, pulling the end of the powered track up, it updates and gets power.
Another alternative, as Dan F mentioned, is to simply place the torch after the rail.
| {
"pile_set_name": "StackExchange"
} |
Q:
How and when to use Ember.Application register and inject methods?
I'm trying to understand how to use Ember.Application register & inject methods
What use case are these functions designed for?
How are they to be used and when?
I'd really like to know!
A:
Ember by default does dependency injection when it boots your application, using mostly conventions. For example, if you use ember-data then an instance of the store class is injected into every route and controller in your application, so you can later get a reference by simply doing this.get('store') inside any route or controller.
For example here is a code extract where the default store get's registered (taken from the source)
Ember.onLoad('Ember.Application', function(Application) {
Application.initializer({
name: "store",
initialize: function(container, application) {
application.register('store:main', application.Store);
...
}
container.lookup('store:main');
}
});
And then injected (source)
Application.initializer({
name: "injectStore",
initialize: function(container, application) {
application.inject('controller', 'store', 'store:main');
application.inject('route', 'store', 'store:main');
application.inject('dataAdapter', 'store', 'store:main');
}
...
});
In other words register and inject are methods to register dependencies and inject them yourself.
Let's assume you have a Session object which you populate after a server request on application start, and which you want to have a reference to in every controller. You could do something like this:
var App = Ember.Application.create({
ready: function(){
this.register('session:current', App.Session, {singleton: true});
this.inject('controller', 'session', 'session:current');
}
});
App.Session = Ember.Object.extend({
sessionHash: ''
});
This code would set the session property of every controller instance to a singleton instance of App.Session, so you could in any controller do this.get('session') and get a reference to it, and since it's defined as a singleton it would be always the same session object.
With register you can register controllers, models, views, or any arbitrary object type. inject, on the other hand, can inject onto all instances of a given class. For example inject('model', 'session', 'session:current') would also inject the session property with the session:current instance into all models. To inject the session object onto, say, the IndexView you could do inject('view:index', 'session', 'session:current').
Although register and inject are very powerful, you should use them wisely and only when you really know there is no other way to achieve your goal; I guess the lack of documentation is an indicator of discouragement.
Update - No good explanation without a working example
Since It's mostly a must to provide a working example with an explanation, there it goes: http://jsbin.com/usaluc/6/edit. Notice how in the example we can simply access the mentioned sessionHash by referring to the current controller's session object with {{controller.session.sessionHash}} in every route we are in, this is the merit of what we have done by registering and injecting the App.Session object in every controller in the application.
Hope it helps.
A:
A common use case is to provide the current logged-in user property to controllers and routes, as in https://github.com/kelonye/ember-user/blob/master/lib/index.js and https://github.com/kelonye/ember-user/blob/master/test/index.js
| {
"pile_set_name": "StackExchange"
} |
Q:
Struts Javascript AJAX onsuccess page update
I have a problem with updating an HTML site after an AJAX request success.
In my project I'm using old Struts 1 framework with a coolmenus JS component that produces a menu. After a form submit the server returns a block of JS code within <script> tags (among HTML) and these create a menu on page load each time. Now recently I had to implement a solution that is doing an AJAX request for updating my model on server side.
Everything to that point is OK, the model is being updated but the problem starts on swapping received html (using prototypejs):
$$('html')[0].innerHTML = t.responseText;
or
$$('html')[0].innerHTML.update(t.responseText);
It breaks my menu creation (there is no menu after the update). I tried to get all 'script' tags from the body and invoke them from the evalScripts() function, but it doesn't work at all. I mean the scripts are invoked, but the menu isn't created.
Any ideas?
A:
I was trying to update the whole page, but I didn't have to. I ended up extracting only the 'content' part of the page and updating that, without touching the menu.
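A minimal sketch of that approach, assuming the swappable part of the page is wrapped in a container with id "content" (the id is illustrative, not from the original markup):
// in the Ajax.Request onSuccess handler, replace only the content area,
// leaving the coolmenus markup and its <script> blocks untouched
$('content').update(t.responseText);   // Prototype's Element#update
The server then only needs to return the HTML fragment for that container instead of the whole page.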
| {
"pile_set_name": "StackExchange"
} |
Q:
Javascript call function
I have been testing some code lately trying to understand javascript a little bit better. Then I came across the call() function wich I can't get to understand well.
I have the following code:
function hi(){
console.log("hi");
}
var bye = function(param, param2){
console.log(param);
console.log(param2);
console.log("bye");
}
If I call bye.call(hi(), 1, 2), I get hi 1 2 undefined
And if I call bye.call(1, 2), I get 2 undefined bye undefined
From this I understood that the call() function's first parameter has to be a function, followed by the parameters my bye function accepts. But where is the last undefined coming from?
A:
The first parameter doesn't have to be a function. The first parameter is the object to which the "this" variable is set in the context of the function call.
var bye = function(param, param2){
console.log(param);
console.log(param2);
console.log("bye");
console.log(this.x)
}
t = {'x': 1};
bye.call(t, 1, 2);
And the console should show: 1, 2, "bye" and 1.
The undefined is the return value of your function.
In your first call:
bye.call(hi(), 1, 2)
You're calling hi() (so it prints 'hi'); its return value (undefined) becomes this, and 1 and 2 are the parameters to bye.
In your second call:
bye.call(1,2)
1 is assigned to this. 2 is param, and param2 is undefined.
| {
"pile_set_name": "StackExchange"
} |
Q:
Alternative to Publishing on WAS in RAD
I am working on WAS in RAD 7.5, and publishing and making changes is very slow and frustrating.
Is there any faster alternative, like using Eclipse and some other server for development, and eventually running it on the WAS/RAD system?
I heard somewhere that we can use a dump of MySQL and use it for something like this, but I have no idea how.
A:
You can try the WAS development profile or the Liberty profile.
If you don't have enough money :) you can use Tomcat and an embedded EJB container as an alternative. It would be faster, but you will need to take care when packaging for Tomcat versus WebSphere.
| {
"pile_set_name": "StackExchange"
} |
Q:
json not outputting
I'm running this code in php
while ($row = mysql_fetch_array($result))
{
$arr = array("joke" => $row['joke'], "date" => $row['date'], "rating" => $row['rating']);
echo json_encode($arr);
}
but there's no output. I am running php 5.3.6
A:
Never mind, I figured it out. The way to do this is to use sql2json.
| {
"pile_set_name": "StackExchange"
} |
Q:
Why isn't inline JavaScript code being executed in ASP.NET MVC?
This is my complete View:
@{
ViewBag.Title = "Home";
}
<div style="width:100%; height:100%" id="map"></div>
<script defer="defer" type="text/javascript">
var map = new OpenLayers.Map('map');
var wms = new OpenLayers.Layer.WMS("OpenLayers WMS",
"http://vmap0.tiles.osgeo.org/wms/vmap0", { layers: 'basic' });
map.addLayer(wms);
map.zoomToMaxExtent();
</script>
But when I run it, I can't see the map.
I made a HTML page in Notepad that looks like this:
<html>
<head>
<title>OpenLayers Example</title>
<script src="http://openlayers.org/api/OpenLayers.js"></script>
</head>
<body>
<div style="width:100%; height:100%" id="map"></div>
<script defer="defer" type="text/javascript">
var map = new OpenLayers.Map('map');
var wms = new OpenLayers.Layer.WMS( "OpenLayers WMS",
"http://vmap0.tiles.osgeo.org/wms/vmap0", {layers: 'basic'} );
map.addLayer(wms);
map.zoomToMaxExtent();
</script>
</body>
</html>
And it works.
Why isn't the code being executed in ASP.NET?
I installed OpenLayers from NuGet, and if I select OpenLayers and press F12 ('Go To Definition') it opens up OpenLayers.js, so it seems to have been downloaded correctly.
EDIT:
The complete generated code:
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>FIKA - Home</title>
<script src="http://openlayers.org/api/OpenLayers.js"></script>
<link href="/Content/css?v=bxomq82-FU9mU3eDX6m-kca-a2PFEz0RK2Z7mS-QmnY1" rel="stylesheet"/>
<script src="/bundles/modernizr?v=wBEWDufH_8Md-Pbioxomt90vm6tJN2Pyy9u9zHtWsPo1"></script>
</head>
<body>
<div class="container body-content">
<div style="width:100%; height:100%" id="map"></div>
<script src="http://openlayers.org/api/OpenLayers.js"></script>
<script defer="defer" type="text/javascript">
var map = new OpenLayers.Map('map');
var wms = new OpenLayers.Layer.WMS("OpenLayers WMS",
"http://vmap0.tiles.osgeo.org/wms/vmap0", { layers: 'basic' });
map.addLayer(wms);
map.zoomToMaxExtent();
</script>
<hr />
</div>
<script src="/bundles/jquery?v=FVs3ACwOLIVInrAl5sdzR2jrCDmVOWFbZMY6g6Q0ulE1"></script>
<script src="/bundles/bootstrap?v=2Fz3B0iizV2NnnamQFrx-NbYJNTFeBJ2GM05SilbtQU1"></script>
<!-- Visual Studio Browser Link -->
<script type="application/json" id="__browserLink_initializationData">
{"appName":"Chrome","requestId":"0ea737fab0f240fab62a7978c5db4fa7"}
</script>
<script type="text/javascript" src="http://localhost:60314/4457514eae394a96a55c4c6c386b7942/browserLink" async="async"></script>
<!-- End Browser Link -->
</body>
</html>
A:
Your issue occurs because the complete generated code has the HTML5 doctype, whereas your working demo has no doctype. That difference makes the browser render the height:100% property differently.
You need to set the height in pixels before executing your code.
function setMapHeight() {
var w = window,
d = document,
e = d.documentElement,
g = d.getElementsByTagName('body')[0];
document.getElementById('map').style.height = (w.innerHeight || e.clientHeight || g.clientHeight) + 'px';
}
setMapHeight();
window.onresize = setMapHeight; // Add this code to fix height if window resizes
var map = new OpenLayers.Map('map');
var wms = new OpenLayers.Layer.WMS("OpenLayers WMS",
"http://vmap0.tiles.osgeo.org/wms/vmap0", { layers: 'basic' });
map.addLayer(wms);
map.zoomToMaxExtent();
| {
"pile_set_name": "StackExchange"
} |
Q:
Disable native Soap class in PHP5 and use nuSoap?
I've spent the last week developing code to connect to a web service using the nuSoap library. I just deployed the code to production, but immediately started getting errors that I hadn't seen before. I traced the problem back to a line of code that is trying to instantiate a new soapclient object. It turns out that both libraries have a class named 'soapclient', and the one that's being created in production is from the native SOAP library, not the nuSoap library that I'm including. How can I disable the native SOAP functionality and stick strictly to nuSoap?
A:
With the release of PHP5 there is a soapclient class included in the php_soap extension. NuSOAP has renamed its class to nusoap_client. If your copy of NuSOAP is current you should be able to use that. This doesn't disable the php_soap extension, but should allow you to use the NuSOAP class without further conflict.
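A minimal sketch of instantiating the renamed class (the include path and WSDL URL are placeholders):
require_once 'lib/nusoap.php';                                          // your local copy of NuSOAP
$client = new nusoap_client('http://example.com/service?wsdl', true);   // true = WSDL mode
$result = $client->call('SomeOperation', array('param' => 'value'));    // operation name is illustrative
The built-in SoapClient class from the php_soap extension stays enabled, but it no longer collides with the NuSOAP class name.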
| {
"pile_set_name": "StackExchange"
} |
Q:
How high (height-wise) should the oil be for frying chicken?
I thought the point of fried chicken is to have enough oil to deep fry it, but I've seen a lot of recipe discussing to fry the chicken for x-time, then flip over and fry for y-time.
Does this mean for recipes that involve flipping chicken in fryer we don't want the oil too high (height-wise), or does it make a difference even when completely covered in oil to cook on each side.
A:
Deep fry and shallow fry both work. At home, when using oil in a wok (safest way because of the sloping sides), I flip whether the oil is deep or shallow. This is just to ensure even browning. For shallow, I would use an amount of oil that is at least half the thickness of the chicken.
| {
"pile_set_name": "StackExchange"
} |
Q:
CSS corner ribbon without rotate
There are many tutorials on making a corner ribbon, and all of them use transform/rotate 45deg. That makes the content inside the div (the text) rotate as well, which I don't want. I want to make something like the picture below, with the text/symbol still upright.
I tried to make a triangle background, but I can't get it to look the way I want.
A:
@Dedi Ananto : Please take note of the following code:
<div class="arrow-right"></div>
.arrow-right {
width: 0px;
height: 0px;
border-top: 0px solid transparent;
border-bottom: 70px solid transparent;
border-left: 60px solid red;
}
Hope this Helps..
Regards,
Karan
| {
"pile_set_name": "StackExchange"
} |
Q:
What approach should I use to do client side filtering?
I am making the front end of an ASP.NET MVC 3 web application. A controller action sends a database-driven list to a view model, which then populates a series of divs. I have a filtering section above the div list. I am not sure which approach to take to implement the filter. I have considered rolling my own (I always keep this option on the table), using jQuery's .filter(), or finding some JavaScript functionality to use.
What is the standard way to filter client side with JavaScript (or a js derived library)?
EDIT
For gdoron's lack of context:
var gdoronArray = [];
for(var i = 0; i < 10000; i++){
gdoronArray.push("text" + i + " " + (i*10));
}
Is there a standard library to pull only the items in gdoronArray which contain "ext5" or is this just a roll your own situation?
A:
gdoronArray.filter( function(v){
return !!~v.indexOf("ext5");
});
https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/Array/filter
| {
"pile_set_name": "StackExchange"
} |
Q:
How can I generate all possible IPs from a list of ip ranges in Python?
Let's say I have a text file that contains a bunch of IP ranges like this:
x.x.x.x-y.y.y.y
x.x.x.x-y.y.y.y
x.x.x.x-y.y.y.y
x.x.x.x-y.y.y.y
x.x.x.x-y.y.y.y
x.x.x.x is the start value and y.y.y.y is the end value of the range.
How can I convert these ip ranges to all possible IPs in a new text file in python?
PS: This question is not the same as any of my previous questions. I asked "how to generate all possible IPs from CIDR notations" in my previous question, but here I ask "how to generate from an IP range list". These are different things.
A:
This function returns all IP addresses from start up to (but not including) end:
def ips(start, end):
    import socket, struct
    start = struct.unpack('>I', socket.inet_aton(start))[0]
    end = struct.unpack('>I', socket.inet_aton(end))[0]
    return [socket.inet_ntoa(struct.pack('>I', i)) for i in range(start, end)]
These are the building blocks to build it on your own:
>>> import socket, struct
>>> ip = '0.0.0.5'
>>> i = struct.unpack('>I', socket.inet_aton(ip))[0]
>>> i
5
>>> i += 1
>>> socket.inet_ntoa(struct.pack('>I', i))
'0.0.0.6'
Example:
ips('1.2.3.4', '1.2.4.5')
['1.2.3.4', '1.2.3.5', '1.2.3.6', '1.2.3.7', ..., '1.2.3.253', '1.2.3.254', '1.2.3.255', '1.2.4.0', '1.2.4.1', '1.2.4.2', '1.2.4.3', '1.2.4.4']
Read from file
In your case you can read from a file like this:
with open('file') as f:
    for line in f:
        start, end = line.strip().split('-')
        # ....
A:
Python 3 only, for IPv4, same idea with @User but use new Python3 standard library: ipaddress
IPv4 is represented by 4 bytes. So next IP is actually next number, a range of IPs can be represented as a range of integer numbers.
0.0.0.1 is 1
0.0.0.2 is 2
...
0.0.0.255 is 255
0.0.1.0 is 256
0.0.1.1 is 257
By code (ignore the In []: and Out []:)
In [68]: from ipaddress import ip_address
In [69]: ip_address('0.0.0.1')
Out[69]: IPv4Address('0.0.0.1')
In [70]: ip_address('0.0.0.1').packed
Out[70]: b'\x00\x00\x00\x01'
In [71]: int(ip_address('0.0.0.1').packed.hex(), 16)
Out[71]: 1
In [72]: int(ip_address('0.0.1.0').packed.hex(), 16)
Out[72]: 256
In [73]: int(ip_address('0.0.1.1').packed.hex(), 16)
Out[73]: 257
ip.packed.hex() returns the hexadecimal form of the 4 bytes; because it is hexadecimal it is shorter (e.g. 0xff hex == 255 decimal == 0b11111111 binary), and thus often used for representing bytes. int(hex, 16) returns the integer value corresponding to the hex value, which is more human friendly and can be used as input for ip_address.
from ipaddress import ip_address
def ips(start, end):
    '''Return IPs in an IPv4 range, excluding the end address.'''
    start_int = int(ip_address(start).packed.hex(), 16)
    end_int = int(ip_address(end).packed.hex(), 16)
    return [ip_address(ip).exploded for ip in range(start_int, end_int)]
ips('192.168.1.240', '192.168.2.5')
Returns:
['192.168.1.240',
'192.168.1.241',
'192.168.1.242',
'192.168.1.243',
'192.168.1.244',
'192.168.1.245',
'192.168.1.246',
'192.168.1.247',
'192.168.1.248',
'192.168.1.249',
'192.168.1.250',
'192.168.1.251',
'192.168.1.252',
'192.168.1.253',
'192.168.1.254',
'192.168.1.255',
'192.168.2.0',
'192.168.2.1',
'192.168.2.2',
'192.168.2.3',
'192.168.2.4']
| {
"pile_set_name": "StackExchange"
} |
Q:
jsf dynamic component that restores state
I am trying to display HtmlInputText dynamically in a JSF page. However, I am getting
javax.faces.FacesException: Cannot add the same component twice: j_idt10:hitDyn
During the first request to the page the input text renders well. That exception happens during postback of the page, when I enter some text in the input component and press Enter.
In the .xhtml page, I have the following code:
<h:form>
<h:outputLabel value="Welcome!"></h:outputLabel>
<f:metadata>
<f:event type="preRenderView" listener="#{dynamicBacking.addDynComp}" />
</f:metadata>
<h:panelGroup id="dynOuter"></h:panelGroup>
</h:form>
In the backing bean, I have the following code:
@ManagedBean(name="dynamicBacking")
public class DynamicBacking {
public void addDynComp() {
Application app = FacesContext.getCurrentInstance().getApplication();
HtmlInputText hit = (HtmlInputText)app.createComponent(HtmlInputText.COMPONENT_TYPE);
hit.setId("hitDyn");
UIComponent parent = findComponent("dynOuter");
if( parent != null ) {
parent.getChildren().add(hit);
}
}
public UIComponent findComponent(final String id) {
FacesContext context = FacesContext.getCurrentInstance();
UIViewRoot root = context.getViewRoot();
final UIComponent[] found = new UIComponent[1];
root.visitTree(new FullVisitContext(context), new VisitCallback() {
@Override
public VisitResult visit(VisitContext context, UIComponent component) {
if(component.getId().equals(id)){
found[0] = component;
return VisitResult.COMPLETE;
}
return VisitResult.ACCEPT;
}
});
return found[0];
}
}
I guess that there is some problem with restoring the state of the dynamic component in a postback. Am I adding the dynamic component too late in the lifecycle of the JSF page? I know that in ASP.NET I could add a dynamic control during Page.Load phase. But I can't so far figure out how to achieve the same in JSF. Please, help!
A:
The exception appears because the component is added to the tree on the initial page load. When performing a postback, your listener gets called again and tries to add another component with the same id, and this causes the exception. A solution to the issue is to check that the request is NOT a postback before adding the component. The following code shows how to check for a postback:
if (FacesContext.getCurrentInstance().isPostback()) {....
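Applied to the listener from the question, one possible sketch (not the only way to structure the check) would be:
public void addDynComp() {
    FacesContext ctx = FacesContext.getCurrentInstance();
    if (ctx.isPostback()) {
        return; // the component was already added during the initial GET request
    }
    Application app = ctx.getApplication();
    HtmlInputText hit = (HtmlInputText) app.createComponent(HtmlInputText.COMPONENT_TYPE);
    hit.setId("hitDyn");
    UIComponent parent = findComponent("dynOuter");
    if (parent != null) {
        parent.getChildren().add(hit);
    }
}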
| {
"pile_set_name": "StackExchange"
} |
Q:
Pandas Data Frame not Appending
I am trying to append dataframes via for loop.
CODE
def redshift_to_pandas(sql_query, **kwargs):
    # pass a sql query and return a pandas dataframe
    cur.execute(sql_query)
    columns_list = [desc[0] for desc in cur.description]
    data = pd.DataFrame(cur.fetchall(), columns=columns_list)
    return data
Input -
all_schema = [('backup')]
Loop -
try:
    if len(all_schema) == 0:
        raise inputError("The Input has no schema selected. EXITING")
    else:
        modified_schemadf = pd.DataFrame(columns=['columns_name', 'status'])
        for i in range(len(all_schema)):
            #print (redshift_to_pandas("select '"+all_schema[i]+"' as columns_name,(select exists ( select distinct table_schema from information_schema.tables where table_schema like '%"+all_schema[i]+"')) as status",mechanism='append'))
            modified_schemadf.append(redshift_to_pandas("select '"+all_schema[i]+"' as columns_name,(select exists ( select distinct table_schema from information_schema.tables where table_schema like '%"+all_schema[i]+"')) as status",mechanism='append'))
            print (modified_schemadf)
except inputError as e:
    print(e.message)
    logger.error("UNEXPECTED INPUT FOUND, Please check the I/P List . EXITING")
print (modified_schemadf)
I feel the issue is very obvious, but I can't seem to find it.
Here is the o/p -
So the first print (commented out) does return the correct result.
The next step, i.e. appending the result to the declared dataframe (named modified_schemadf), is the problem area. When I print its value, it is still an empty dataframe. For some reason the append isn't happening.
When the code enters the else branch, i.e. when the input is legit, an empty dataframe called modified_schemadf is created. To this empty dataframe there should be as many appends as there are inputs.
Thanks in Advance.
Please dont mind the indentations, copying might have affected them.
A:
Isn't the issue just that you don't assign the appended dataframe? Try changing this line
modified_schemadf.append(redshift_to_pandas("select '"+all_schema[i]+"' as columns_name,(select exists ( select distinct table_schema from information_schema.tables where table_schema like '%"+all_schema[i]+"')) as status",mechanism='append'))
to this line
modified_schemadf = modified_schemadf.append(redshift_to_pandas("select '"+all_schema[i]+"' as columns_name,(select exists ( select distinct table_schema from information_schema.tables where table_schema like '%"+all_schema[i]+"')) as status",mechanism='append'))
| {
"pile_set_name": "StackExchange"
} |
Q:
pandas - binning data and getting 2 columns
I have a very simple dataframe. There are 2 columns, day_created (int, could change to datetime) and suspended (int, could change to boolean). I can change the data if it makes it easier to work with.
Day created Suspended
0 12 0
1 6 1
2 24 0
3 8 0
4 100 1
5 30 0
6 1 1
7 6 0
The day_created column is the integer of the day the account was created (from a start date), starting at 1 and increasing. The suspended column is a 1 for suspension and a 0 for no suspension.
What I would like to do is bin these accounts into groups of 30 days or months, but from each bin get a total number of accounts for that month and the number of accounts suspended that were created in that month. I then plan on creating a bar graph with 2 bars for each month.
How should I go about this? I don't use pandas often. I assume I need to do some tricks with resample and count.
A:
Use
df.index = start_date + pd.to_timedelta(df['Day created'], unit='D')
to give the DataFrame an index of Timestamps representing when the accounts were created.
Then you can use
result = df.groupby(pd.TimeGrouper(freq='M')).agg(['count', 'sum'])
to group the rows of the DataFrame (by months) according to the Timestamps in the index.
.agg(['count', 'sum']) computes the number of accounts (the count) and the number of suspended accounts for each group.
Then result.plot(kind='bar', ax=ax) plots the bar graph:
import pandas as pd
import matplotlib.pyplot as plt
df = pd.DataFrame(
{'Day created': [12, 6, 24, 8, 100, 30, 1, 6],
'Suspended': [0, 1, 0, 0, 1, 0, 1, 0]})
start_date = pd.Timestamp('2016-01-01')
df.index = start_date + pd.to_timedelta(df['Day created'], unit='D')
result = df.groupby(pd.TimeGrouper(freq='M'))['Suspended'].agg(['count', 'sum'])
result = result.rename(columns={'sum':'suspended'})
fig, ax = plt.subplots()
result.plot(kind='bar', ax=ax)
locs, labels = plt.xticks()
plt.xticks(locs, result.index.strftime('%Y-%m-%d'))
fig.autofmt_xdate()
plt.show()
yields
| {
"pile_set_name": "StackExchange"
} |
Q:
Display difference between volatile and usual variable in Java
I am trying to create an example to display the difference between volatile and usual variables like:
package main;
public class TestVolatile extends Thread {
public int l = 5;
public volatile int m = -1;
public TestVolatile(String str) {
super(str);
}
public void run() {
int i = 0;
while ((l > 1) && (l < 10)) {
if (m >= 0) {
m++;
}
i++;
l = 5;
System.out.println("5=" + i + " m=" + m);
}
}
public static void main(String[] args) throws InterruptedException {
TestVolatile tva = new TestVolatile("ThreadA");
tva.start();
sleep(5);
synchronized (tva) {
tva.m = 5;
tva.l = 10;
}
}
}
So m is volatile, l is not. I suppose that exiting from the while loop depends on the value of l.
Because the value of l is not volatile, m should be incremented at least one more time after l has been assigned 10 by the main thread. But I have run the code 10 times and m is always 5.
So I suppose that I am wrong. How can I fix this problem? Thank you.
Thanks for the answers, but not all of them behave as expected.
I set both variables volatile, like:
volatile int x = 0;
volatile int y = 0;
So now the variables have to be the same! But that is not the case.
x: 346946234 y: 346946250
x: 346946418 y: 346946422
x: 346946579 y: 346946582
x: 346946742 y: 346946745
x: 346946911 y: 346946912
A:
You are synchronizing the main thread and your test thread. Therefore Java guarantees to make any changes performed by the other thread visible.
Btw, it is impossible to construct an example which deterministically shows a difference between volatile and non-volatile. The best you can hope for is a program which shows the difference with quite high probability. If the threads run interleaved on the same core, you won't be able to show any difference at all.
The following program shows, on my computer, the difference between volatile and non-volatile variables.
public class ShowVolatile {
final static int NUM_THREADS = 1;
int x = 0;
volatile int y = 0;
public static void main(String... args) {
final ShowVolatile sv = new ShowVolatile();
for (int i=0; i< NUM_THREADS; i++) {
new Thread(new Runnable() {
public void run() {
while (true) {
sv.x += 1;
sv.y += 1;
}
}
}).start();
}
while (true) {
System.out.println("x: " + sv.x + " y: " + sv.y);
}
}
}
If you increase the number of threads you will see additional synchronization misses. But a thread count of 1 is enough. At least on my hardware a Quad-Core i7.
| {
"pile_set_name": "StackExchange"
} |
Q:
systemd service script for libreoffice/openoffice
I'm trying to setup correctly a headless libreoffice/openoffice server on a debian jessie. I created a script named /etc/systemd/system/openoffice.service with the following content
[Unit]
Description=OpenOffice service
After=syslog.target
[Service]
ExecStart=/usr/bin/soffice '--accept=socket,host=localhost,port=8101;urp;StarOffice.ServiceManager' --headless --nofirststartwizard --nologo
Restart=always
KillSignal=SIGQUIT
Type=notify
StandardError=syslog
NotifyAccess=all
User=www-data
[Install]
WantedBy=multi-user.target
And I enabled it via:
systemctl enable openoffice.service
I'm in a situation that is only partially working:
it correctly starts on boot
if queried with systemctl status openoffice.service, it claims it is still activating
if I try to start it, it just hangs
I haven't been able to find a working example. I'd also like to understand how to create the Debian /etc/init.d script that uses systemd...
A:
You set Type=notify in your service. This is meant to be used only for specific services which are designed to notify systemd when they have finished starting up. At the moment, these are rather uncommon, and I don't think LibreOffice is among them.
You should most likely be using Type=simple instead.
| {
"pile_set_name": "StackExchange"
} |
Q:
How do you equally space out elements in a Row?
Row{
width: parent.width
spacing: ????
Checkbox{}
Checkbox{}
Checkbox{}
Checkbox{}
}
So just to be clear, the checkboxes should be spaced in such a manner that however wide the row is, it will expand or compress the spacing accordingly.
A:
The simplest solution would be to set width: parent.width/4 for each of the checkboxes. If you want to keep the checkbox width set at some known value, you could instead set spacing: (parent.width - 4 * checkboxwidth)/3 on the Row. Note that this will cause the elements to overlap when the parent is narrow.
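A sketch of that first approach (CheckBox is assumed to come from QtQuick.Controls):
Row {
    width: parent.width
    CheckBox { width: parent.width / 4 }
    CheckBox { width: parent.width / 4 }
    CheckBox { width: parent.width / 4 }
    CheckBox { width: parent.width / 4 }
}
Each item takes a quarter of the row's width, so the layout expands or compresses automatically when the row is resized.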
If you're targeting Qt 5.1 or higher, you may want a RowLayout. I'm still on 5.0, though, so I can't help you there.
Yet another way to do this would be to put each CheckBox in an Item. Each Item would have width: parent.width/4, and each CheckBox would have anchors.centerIn: parent. This would give a half-width margin on the far left and far right, which may or may not be desired.
| {
"pile_set_name": "StackExchange"
} |
Q:
Why is a JavaScript reserved keyword allowed as a variable name?
We know that let is a reserved keyword that defines a variable in JavaScript.
var let = 2;
console.log(let); // return 2
So why is this not an error?
A:
let is only a reserved word in strict mode:
'use strict';
var let = 5;
Uncaught SyntaxError: Unexpected strict mode reserved word
This is because browsers generally prioritize backwards compatibility above all else. Although let was introduced in ES2015 (and its use was foreseen some time before then), prior scripts which used let as a variable name would continue to work as desired. For example, if your script was written in 2008:
var let = 2;
console.log(let);
Then it would continue to work in 2020 as well.
For very similar reasons, async and await are also permitted as variable names.
As for why the use of let errors in strict mode - strict mode was introduced in ES5, in 2009. Back then, the language designers saw that the use of new keyword(s) to declare variables was a possibility in the future, but it wasn't set in stone yet, and ES6 was still a long ways off. Once ES5 came out, script writers could opt-in to strict mode to make code less confusing, and change silent errors to explicit errors. Although let wasn't usable for variable declaration yet, prohibiting it as a variable name in strict mode improved the readability of future scripts which opted into strict mode, while also not breaking any existing scripts.
A:
let and some of the other works acts as reserved words only in strict mode. The specs says
Disallowed in strict mode: Those that are contextually disallowed as identifiers, in strict mode code: let, static, implements, interface, package, private, protected, and public;
You can see let inside the list of words which are only disallowed in strict mode. If you want to throw error for using let as variable name you can use strict mode
"use strict";
var let = 3
| {
"pile_set_name": "StackExchange"
} |
Q:
How to have simple google apps script send mails from Sheets from owner account regardless of who's accessing file
I've clicked around for the past few days trying to find an answer, but can't seem to find one that makes sense to me (forgive me, I'm fairly new to GAS). I am trying to set up a Fantasy Golf Draft sheet to be used by about 12 users, over half of whom don't have or aren't willing to use a Gmail address. Getting access to the file is no problem; where I am running into an issue is trying to run a script where, when a button/shape is clicked, it sends an automated email to the next person whose turn it is to pick. The functionality of the script works when it comes from myself or someone with a Google account who can authorize the script, etc. I run into trouble when it's someone without a Google account.
My question: how can I set the script to ONLY send from my email, or the Sheet/Script owner's email, regardless of who is modifying the sheet or clicking the button? I see links about creating the script as a web app to do this, but I get lost quickly.
Here's a link to my sheet:
[https://docs.google.com/spreadsheets/d/16AppcmrcuhatnzcEs7eIQyD_p1swbRimRZZ4FdbhBKI/edit?usp=sharing][1]
And here is my send mail code:
function sendAlertEmails() {
var ss = SpreadsheetApp.getActiveSpreadsheet();
ss.setActiveSheet(ss.getSheetByName("Send Mails"));
var sheet = SpreadsheetApp.getActiveSheet();
var dataRange = sheet.getRange("A2:f2");
var data = dataRange.getValues();
for (i in data) {
var rowData = data[i];
var emailAddress = rowData[1];
var recipient = rowData[0];
var message1 = rowData[2];
var message2 = rowData[3];
var message3 = rowData[4];
var message4 = rowData[5];
var message = 'Hey ' + recipient + ',\n\n' + message1 + '\n\n' + ' The last player picked was ' + message2 + '\n\n' + message3 +'\n\n' + message4;
var subject = '*GOLF DRAFT 2018* - YOU ARE ON THE CLOCK';
MailApp.sendEmail(emailAddress, subject, message);
var ss = SpreadsheetApp.getActiveSpreadsheet();
ss.setActiveSheet(ss.getSheetByName("DRAFT"));
}
}
Any help would be greatly appreciated!
A:
I felt interested in this issue and worked a bit more on it. I changed from it being a get request to being a post request.
Here is what I have in the Google sheet.
function sendAlertEmails() {
var ss = SpreadsheetApp.getActiveSpreadsheet();
ss.setActiveSheet(ss.getSheetByName("Send Mails"));
var sheet = SpreadsheetApp.getActiveSheet();
var dataRange = sheet.getRange("A2:f2");
var data = dataRange.getValues();
for (i in data) {
var rowData = data[i];
var emailAddress = rowData[1];
var recipient = rowData[0];
var message1 = rowData[2];
var message2 = rowData[3];
var message3 = rowData[4];
var message4 = rowData[5];
var message = 'Hey ' + recipient + ',\n\n' + message1 + '\n\n' + ' The last player picked was ' + message2 + '\n\n' + message3 +'\n\n' + message4;
var subject = '*GOLF DRAFT 2018* - YOU ARE ON THE CLOCK';
var data = {
'name': 'Bob Smith',
'email': 'a@b.com',
'message': message,
'subject': subject,
};
var options = {
'method' : 'post',
'contentType': 'application/json',
'payload' : data
};
var secondScriptID = 'STANDALONE_SCRIPT_ID'
var response = UrlFetchApp.fetch("https://script.google.com/macros/s/" + secondScriptID + "/exec", options);
Logger.log(response) // Expected to see sent data sent back
var ss = SpreadsheetApp.getActiveSpreadsheet();
ss.setActiveSheet(ss.getSheetByName("DRAFT"));
// Browser.msgbox("Your Pick Has Been Made");
}
}
Below is what I have in the standalone script. There are some provisos on the standalone script working:
It needs to be published under "Deploy as a webapp"
Access should be set to 'Anyone, even anonymous'
Every time you make a change to the standalone script, publish again and change the Project version to "New". This is so the call from the first sheet reaches the latest code.
Standalone Script
function convertURItoObject(url){
url = url.replace(/\+/g,' ')
url = decodeURIComponent(url)
var parts = url.split("&");
var paramsObj = {};
parts.forEach(function(item){
var keyAndValue = item.split("=");
paramsObj[keyAndValue[0]] = keyAndValue[1]
})
return paramsObj; // here's your object
}
function doPost(e) {
var data = e.postData.contents;
data = convertURItoObject(data)
var recipient = data.email;
var body = data.message;
var subject = data.subject;
try {
MailApp.sendEmail(recipient, subject, body)
}
catch(e){
Logger.log(e)
}
return ContentService.createTextOutput(JSON.stringify(e));
}
| {
"pile_set_name": "StackExchange"
} |
Q:
Can polyalloy (plastic) pex fittings be used with both styles of attachment rings?
Some pex fittings are made of a type of durable plastic known as polyalloy. Examples:
These fittings appear to be plastic equivalents of their brass counterparts.
In general is it permissible to use either the copper crimp rings OR the stainless steel cinch / pinch clamps with this type of fitting?
Notes:
This wasn't addressed in What is the advantage of PEX pinch clamp vs. crimp rings?
A:
TLDR: Yes, either type of attachment ring can be used. At least in the USA, these products have to conform to standards which make this so. Additionally, some manufacturers specifically state this is the case.
Details:
Primarily, this seems to come down to manufacturing standards.
In the USA, "PolyAlloy" fittings are governed by standard ASTM F2159 Standard Specification for Plastic Insert Fittings Utilizing a Copper Crimp Ring... which states:
This specification establishes requirements for sulfone plastic
insert fittings utilizing a copper crimp ring for [PEX] tubing...
Based on that alone, it would seem that these plastic fittings can only accept the copper crimp rings, not the Oetiker-style stainless steel cinch rings.
However, cinch rings are governed by ASTM F2098 Standard Specification for Stainless Steel Clamps for Securing [PEX] Tubing to Metal Insert and Plastic Insert Fittings which states
This specification covers stainless steel clamps ... that comply with F1807 or F2159, and
cross-linked polyethylene (PEX) plastic tubing ...
Therefore by reference to F2159 it seems that the cinch clamps are effectively retconned into acceptability for use with polyalloy fittings.
For example, Everflow fittings are documented to be "Certified to ASTM F2159". And here's a marking on a Vanguard Apollo package indicating the same:
Update: I contacted Apollo about this and they wrote:
You may use pinch rings, copper crimp rings, pro crimp rings, and
stainless steel sleeves with any of our Poly Alloy fittings.
So at least as far as their products go, there should be no problem. And I think therefore that any of these fittings made to the same standard ought to be fine also.
| {
"pile_set_name": "StackExchange"
} |
Q:
XML parsers used in iphone sdk
I am quite new to iPhone development. I was going through tutorials on XML parsing for which NSXMLParser is used. Are there other parsers we can use for parsing XML? How do we decide which parser to use?
Regards,
Stone
A:
Standard parsers are NSXMLParser or the C-based libxml. But there are plenty of 3rd-party parsers available. Check this blog post where some of the most popular parsers are reviewed and compared.
| {
"pile_set_name": "StackExchange"
} |
Q:
hook-length formula: "Fibonaccized": Part II
This is a natural follow-up to my previous MO question, which I share with Brian Hopkins.
Consider the Young diagram of a partition $\lambda = (\lambda_1,\ldots,\lambda_k)$. For a square $(i,j) \in \lambda$, define the hook numbers $h_{(i,j)} = \lambda_i + \lambda_j' -i - j +1$ where $\lambda'$ is the conjugate of $\lambda$.
The hook-length formula shows that if $\lambda\vdash n$ then
$$n!\prod_{\square\,\in\,\lambda}\frac1{h_{\square}}$$
counts standard Young tableaux whose shape is the Young diagram of $\lambda$.
Recall the Fibonacci numbers $F(0)=0, \, F(1)=1$ with $F(n)=F(n-1)+F(n-2)$. Define $[0]!_F=1$ and $[n]!_F=F(1)\cdot F(2)\cdots F(n)$ for $n\geq1$.
QUESTION. What do these integers count?
$$[n]!_F\prod_{\square\,\in\,\lambda}\frac1{F(h_{\square})}.$$
A:
This is my answer to the original question (https://mathoverflow.net/a/327022/50244) whether these numbers are integers to begin with, it gives some combinatorial meaning as well:
Use the formulas
$F(n) = \frac{\varphi^n -\psi^n}{\sqrt{5}}$, $\varphi =\frac{1+\sqrt{5}}{2}, \psi = \frac{1-\sqrt{5}}{2}$. Let $q=\frac{\psi}{\varphi} = \frac{\sqrt{5}-3}{2}$, so that
$F(n) = \frac{\varphi^n}{\sqrt{5}} (1-q^n)$
Then the Fibonacci hook-length formula becomes:
\begin{align*}
f^{\lambda}_F:= \frac{[n]!_F}{\prod_{u\in \lambda}F(h(u))} = \frac{ \varphi^{ \binom{n+1}{2} } [n]!_q }{ \varphi^{\sum_{u \in \lambda} h(u)} \prod_{u \in \lambda} (1-q^{h(u)})}
\end{align*}
So we have an ordinary $q$-analogue of the hook-length formula. Note that
$$\sum_{u \in \lambda} h(u) = \sum_{i} \binom{\lambda_i}{2} + \binom{\lambda'_j}{2} + |\lambda| = b(\lambda) +b(\lambda') +n$$
Using the $q$-analogue hook-length formula via major index (EC2, Chapter 21) we have
\begin{align*}
f^\lambda_F = \varphi^{ \binom{n}{2} -b(\lambda)-b(\lambda')} q^{-b(\lambda)} \sum_{T\in SYT(\lambda)} q^{maj(T)} = (-q)^{\frac12( -\binom{n}{2} +b(\lambda') -b(\lambda))}\sum_T q^{maj(T)}
\end{align*}
Now, it is clear from the q-HLF formula that $q^{maj(T)}$ is a symmetric polynomial, with lowest degree term $b(\lambda)$ and maximal degree $b(\lambda) + \binom{n+1}{2} - n -b(\lambda) -b(\lambda') =\binom{n}{2} - b(\lambda')$ so the median degree term is
$$M=\frac12 \left(b(\lambda) +\binom{n}{2} - b(\lambda')\right)$$
which cancels with the factor of $q$ in $f^{\lambda}_F$, so the resulting polynomial is of the form
\begin{align*}
f^{\lambda}_F = (-1)^{M} \sum_{T: maj(T) \leq M } (q^{M-maj(T)} + q^{maj(T)-M}) \\
= (-1)^{M} \sum_{T} (-1)^{M-maj(T)}( \varphi^{2(M-maj(T))} + \psi^{2(M-maj(T)}) =
\sum_T (-1)^{maj(T)} L(2(M-maj(T)))
\end{align*}
where $L$ are the Lucas numbers.
Remark. This is a byproduct of collaboration with A. Morales and I. Pak.
| {
"pile_set_name": "StackExchange"
} |
Q:
SQL get unique month year combos
SELECT MONTH(sessionStart) AS Expr1, YEAR(sessionStart) AS Expr2
FROM tblStatSessions
WHERE (projectID = 187)
GROUP BY sessionStart
This returns:
11 | 2010
11 | 2010
11 | 2010
12 | 2010
12 | 2010
But I need it to only return each instance once, IE:
11 | 2010
12 | 2010
If that makes sense!
A:
The following should be what you want:
SELECT MONTH(sessionStart) AS Expr1, YEAR(sessionStart) AS Expr2
FROM tblStatSessions
WHERE (projectID = 187)
GROUP BY MONTH(sessionStart), YEAR(sessionStart)
in general you need to group by every non-aggregate column that you are selecting. Some DBMSs, such as Oracle, enforce this, i.e. not doing so results in an error rather than 'strange' query execution.
| {
"pile_set_name": "StackExchange"
} |
Q:
Where can I get a proper hot chocolate in Firenze-Venezia-Trieste?
I am right now in Firenze but will spend two days in Venice and two days in Trieste and I'd like to drink a proper, thick, tasty hot chocolate but everyone says they don't make it in the summer. Any ideas? In Firenze I have a one week bus pass so I'm not limited to any area.
A:
Hot Chocolate and the Italian Summer
As many, many waiters must have told you, hot chocolate is not exactly a summer drink. I do understand that those same establishments probably serve hot coffee and tea in the summer, however tea is somewhat of a more multi-season drink whereas coffee is a daily drink for most Italians.
In my opinion, if you wish to maximise the likelihood of finding hot chocolate, you should target specialised establishments, or Cioccolaterie (literally chocolate-places in Italian). Your search keywords should be something like cioccolata calda XXX or cioccolateria XXX, where XXX is the city you wish to search in.
Hot Chocolate in Firenze
Searching around on the internet for cioccolata calda Firenze yields many results (see here and here for two sample reviews in Italian). The consensus however seems to point towards Rivoire which is known for making their own chocolate, as well as serving thick hot chocolate beverages. Another option could be Cioccolateria Hemingway. None of these specify if they serve hot chocolate in the summer. Nevertheless it might be worth trying them since, being Cioccolaterie, they are definitely more likely to have hot chocolate on their menus.
A:
There are quite some places to try in Trieste.
I suggest to try Chocolat first. From there you can walk towards Piazza Unità and check a local Torrefazione (they serve tea and coffee also). Behind Piazza Unità there's Gelato Marco, a gelateria (ice cream place) where they serve ice-cream covered in hot chocolate!
After visiting the old city centre, walk in viale XX settembre to try Madison. This is a rather long pedestrian area filled with restaurants, bars, gelaterie. At late afternoon it gets crowded for aperitivo.
Everything I linked is in walking distance.
A:
I just learned of the existence of VizioVIrtù in Venice. Look at this:
And the text suggests it's served in every season:
Each season has its chocolate drink. True, as this drink is exquisite also if served cold. Is your mouth watering? Try the milkless and sugarless one.
| {
"pile_set_name": "StackExchange"
} |
Q:
GetType returns diffent information than is operator uses
I cannot explain what is going on in the following program. GetType is returning the type I want to return and not the original one. Does that mean we cannot rely on GetType? The is operator is right, though. Can anybody please explain it in detail?
using System;

namespace ConsoleApplication2
{
    public class MyClass
    {
        public Type GetType()
        {
            return typeof(Program);
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            MyClass mc = new MyClass();
            if (mc.GetType() == typeof(Program))
            {
                Console.WriteLine("Confused.");
            }
            if (mc is Program)
            {
                Console.WriteLine(mc.GetType()); // We never get inside this if. Why?
            }
        }
    }
}
Update: I am reading the book CLR via C#, 3rd edition. In chapter 4 (2nd page), when it explains the different methods on System.Object, it says:
"The GetType method is nonvirtual, which prevents a class overriding this method and lying about its type"
While I agree with the first statement, I am lying about MyClass's type, aren't I?
A:
Please pay attention to the warnings, as they exist for a reason. Your code compiles with the following warning:
Warning 1 'ConsoleApplication2.MyClass.GetType()' hides inherited member 'object.GetType()'. Use the new keyword if hiding was intended.
This means GetType() on System.Object is non-virtual, and you are writing a new, unrelated GetType() method that the runtime will never call on your behalf (for example, from the is operator).
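If hiding really is intended, a minimal sketch of silencing the warning looks like this (note that it still does not affect the is operator or any other runtime type check):

public class MyClass
{
    // 'new' states explicitly that this method hides Object.GetType
    public new Type GetType()
    {
        return typeof(Program);
    }
}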
A:
The is operator is implemented in terms of the as operator and ultimately uses the isinst IL instruction. Of course, this instruction knows nothing about the non-virtual GetType method that you defined somewhere in your inheritance hierarchy.
To understand this "confusing" behavior, let's "implement" our own version of the is operator:
public class MyClass
{
    public Type GetType()
    {
        return typeof(Program);
    }
}

class Program
{
    // This is an oversimplified implementation,
    // but I want to show the main differences.
    public static bool IsInstOf(object o, Type t)
    {
        // Calls the non-virtual GetType defined on System.Object
        return t.IsAssignableFrom(o.GetType());
    }

    static void Main(string[] args)
    {
        MyClass mc = new MyClass();

        // Calls MyClass's non-virtual version of GetType
        if (mc.GetType() == typeof(Program))
        {
            // Yep, this condition is true
            Console.WriteLine("Not surprised!");
        }

        // Calls System.Object's non-virtual version of GetType
        if (IsInstOf(mc, typeof(Program)))
        {
            // Nope, this condition isn't met,
            // because mc.GetType() != ((object)mc).GetType()!
        }

        Console.ReadLine();
    }
}
A:
Object.GetType is not a virtual method, so the is operator effectively relies on the object's real runtime type (the one Object.GetType would report) and never calls your method.
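As a small sketch of the difference, compare the hiding method with the one defined on System.Object:

MyClass mc = new MyClass();
Console.WriteLine(mc.GetType());            // typeof(Program) - binds to MyClass.GetType at compile time
Console.WriteLine(((object)mc).GetType());  // typeof(MyClass) - the non-virtual Object.GetType reports the real runtime type
Console.WriteLine(mc is Program);           // False - isinst uses the runtime type, never your method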
| {
"pile_set_name": "StackExchange"
} |
Q:
Solving a system of linear equations (to determine a boundary)
I'm puzzled about how to programmatically (in R) solve the following system:
Given $\mathbf{R} \in \mathbb{R}^{n \times n}$, $\mathbf{R}^{-1}$, and a constant $c$ what is the solution to $\mathbf{u} \in \mathbb{R}^n$ with $\mathbf{u}^T = (u_1, \ldots, u_n)$ for
$\mathbf{u}^T\mathbf{R}^{-1}\mathbf{u} = c$
Let's take the simple case $n = 2$. Fixing a component, say $u_1$, the solution for $u_2$ can be found by explicitly writing down $u_1(u_1r_{11} + u_2r_{21}) + u_2(u_1r_{12}+u_2r_{22}) - c = 0$ and solving for $u_2$ with the quadratic formula. But for more dimensions there must be a better way.
I guess I need to bring it into the form $\mathbf{Ax = b}$ in order to use solve, but I haven't yet figured out exactly how.
Right now I'm stuck at the following: let
$\mathbf{U} = \left(\begin{matrix}
u_1 & \ldots & 0\\
\ldots & \ldots & \ldots\\
0 & \ldots & 1
\end{matrix}\right)$ and $\mathbf{v}^T = (1, 1, \ldots, u_n)$ with $\mathbf{Uv=u}$, then I would have the fixed terms separated from the variable one ($u_n$) for which I need to determine the value. I can put it into the equation above, but how do I proceed? Is this the right way?
The background is the answer I posted in How to draw confidence areas. I would like to explicitly compute the "exact" threshold boundary. I understand that I need to solve this system, but I cannot get it quite right yet. I'm unsatisfied with the two possible solutions: 1. using the quadratic formula to hard-code the solution, and 2. using an optimization routine. The first one would only work for 2 dimensions, and the second one would be unreliable (because up to two different solutions are possible for every x).
Furthermore, I think there should be a concise solution.
edit (12.03) Thank you for the response. I played with the solution but still have some questions.
As far as I understood, compute_scale would compute my decision boundary. Since I have two possibilities for $\gamma$, i.e. positive and negative, I can compute the critical values. However, if I plot them, I only get half of the boundary. I tinkered, but haven't figured out how to compute the complete boundary. Any advice?
compute_stat <- function(v, rmat) {
transv <- qnorm(v)
return(as.numeric(transv %*% rmat %*% transv))
}
compute_scale <- function(v, rmat) {
gammavar <- sqrt(threshold / (v %*% rmat %*% v))
return(c(pos = pnorm(v * gammavar), neg = pnorm(v * (-gammavar))))
}
Rg <- matrix(c(1, .1, .2, 1), ncol = 2)#matrix(c(1,.01,.99,1), ncol = 2)
Rginv <- MASS::ginv(Rg)
gridval <- seq(10^-2, 1 - 10^-2, length.out = 100)
thedata <- expand.grid(x = gridval,
y = gridval)
thestat <- apply(thedata, 1, compute_stat, rmat = Rginv)
threshold <- qchisq(1 - 0.8, df = 2)
colors <- ifelse(thestat < threshold, "#FF000077", "#00FF0013")
#png("boundry2.png", 640, 480)
plot(y ~ x, data = thedata, bg = colors, pch = 21, col = "#00000000")
theboundry <- t(apply(thedata, 1, compute_scale, rmat = Rginv))
points(pos1 ~ pos2, data = theboundry, col = "blue")
points(neg1 ~ neg2, data = theboundry, col = "purple")
#dev.off()
A:
I understand your problem to be given an $n$ by $n$ matrix $R$ and scalar $c$, find a vector $\mathbf{u}$ such that $\mathbf{u}'R^{-1}\mathbf{u}=c$.
First observe:
You have $n$ unknowns (since $\mathbf{u}$ is an $n$ by 1 vector)
$\mathbf{u}'R^{-1}\mathbf{u}=c$ is a single equation. (It isn't a system of equations.)
In general, there won't be a unique solution $\mathbf{u}$. Almost any vector will work if it is properly scaled.
Solution:
Pick some arbitrary vector $\mathbf{a}$. Let $\mathbf{u} = \lambda \mathbf{a}$. Then $\mathbf{u}'R^{-1}\mathbf{u}=c $ becomes $\lambda^2 \mathbf{a}'R^{-1}\mathbf{a} = c$. Solving for the scalar $\lambda$ we have $\lambda = \sqrt{\frac{c}{\mathbf{a}'R^{-1}\mathbf{a}}}$.
For any vector $\mathbf{a}$ such that $\mathbf{a}'R^{-1}\mathbf{a} \neq 0$, we'll have the solution:
$$\mathbf{u} = \lambda \mathbf{a}\quad \text{where} \quad \lambda = \sqrt{\frac{c}{\mathbf{a}'R^{-1}\mathbf{a}}}$$
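Translated into R (a minimal sketch, assuming Rginv and threshold are defined as in the question's code):

a <- c(1, 0.5)                                    # any direction with a' Rginv a != 0
lambda <- sqrt(threshold / as.numeric(a %*% Rginv %*% a))
u <- lambda * a                                   # one point on the boundary; -u is another
as.numeric(u %*% Rginv %*% u)                     # equals threshold (up to rounding)

Sweeping the direction $\mathbf{a}$ over a full circle, e.g. $\mathbf{a} = (\cos\theta, \sin\theta)$ for $\theta \in [0, 2\pi)$, traces out the complete boundary, which may also help with the "half of the boundary" issue mentioned in the edit.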
| {
"pile_set_name": "StackExchange"
} |
Q:
Is the complex form of the Fourier series of a real function supposed to be real?
The question said to plot the $2\pi$ periodic extension of $f(x)=e^{-x/3}$, and find the complex form of the Fourier series for $f$.
My work: $$a_n=\frac{1}{2\pi}\int_{-\pi}^{\pi}e^{-x/3}e^{-inx}dx=\frac{1}{2\pi}\int_{-\pi}^{\pi}e^{-x(1/3+in)}dx$$
$$=\frac{e^{\pi(\frac{1}{3}+in)} - e^{-\pi(\frac{1}{3}+in)}}{2\pi(\frac{1}{3}+in)}=\frac{1}{\pi(\frac{1}{3}+in)}\sinh(\pi(\frac{1}{3}+in))$$
$$\therefore F(x)=\frac{3\sinh(\pi/3)}{\pi}+\sum_{n=-\infty}^{\infty}\frac{3\sinh(\pi/3+in\pi)}{\pi+3in\pi}\cos(nx)$$
But, this is not always real-valued. Is it possible for the complex Fourier series of a real-valued function to have imaginary coefficients, or is my algebra just wrong?
A:
You are using the formula for the complex Fourier coefficients, which are usually denoted by $c_n$. These are usually complex, and they lead to the representation:
$f_f(x) = \sum_{n=-\infty}^\infty c_n e^{inx}$
This is still (more or less) the original function and is therefore real.
There is also a transformation into the sine-cosine representation:
$f_f(x) = a_0 + \sum_{n=1}^\infty a_n \cos(nx) + b_n \sin(nx)$
where the $a_n$ and $b_n$ are real if the original function is real.
You can even go back and forth between the 'real' and the 'complex' coefficients. This comes from the fact that you can express the sine as well as the cosine as
$\sin(x) = \frac{1}{2i}(e^{ix}-e^{-ix})$ and
$\cos(x) = \frac{1}{2}(e^{ix}+e^{-ix})$.
Or the other way around, which might be more familiar:
$e^{ix} = \cos(x)+i\sin(x)$
You can find all of this including the formulas for converting the real coefficients $a_n,b_n$ to the complex ones $c_n$ and vice versa here: http://mathworld.wolfram.com/FourierSeries.html
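For reference, with the normalization used above, the explicit conversion for $n \geq 1$ is
$c_n = \tfrac{1}{2}(a_n - i\,b_n), \qquad c_{-n} = \tfrac{1}{2}(a_n + i\,b_n), \qquad c_0 = a_0,$
and conversely $a_n = c_n + c_{-n}$ and $b_n = i\,(c_n - c_{-n})$. For a real-valued $f$ the coefficients satisfy $c_{-n} = \overline{c_n}$, which is exactly what makes $a_n$ and $b_n$ real even though the individual $c_n$ are complex.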
| {
"pile_set_name": "StackExchange"
} |
Q:
Device Token - Apple Push Notification Service
Does the device token change each time I open my application?
Does the Apple server use the same device token every time, or a newly regenerated device token?
A:
You can check the developer documentation; the following is mentioned there:
The form of this phase of token trust ensures that only APNs generates the token which it will later honor, and it can assure itself that a token handed to it by a device is the same token that it previously provisioned for that particular device—and only for that device.
If the user restores backup data to a new device or reinstalls the operating system, the device token changes.
So the token is unique to the device until the OS is reinstalled or a backup is restored to a new device.
You can check details here - http://developer.apple.com/library/ios/#documentation/NetworkingInternet/Conceptual/RemoteNotificationsPG/ApplePushService/ApplePushService.html#//apple_ref/doc/uid/TP40008194-CH100-SW12
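As a small sketch (in Swift, assuming a standard UIApplicationDelegate; sendTokenToServer is a hypothetical helper), the app receives the current token on every successful registration, so the safest pattern is to forward whatever the callback delivers instead of caching an old value:

func application(_ application: UIApplication,
                 didRegisterForRemoteNotificationsWithDeviceToken deviceToken: Data) {
    // APNs may hand back a different token after a restore or OS reinstall,
    // so always send the latest one to your own server.
    let token = deviceToken.map { String(format: "%02x", $0) }.joined()
    sendTokenToServer(token) // hypothetical helper that posts the token to your backend
}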
| {
"pile_set_name": "StackExchange"
} |
Q:
Calculation of integers $b,c,d,e,f,g$ such that $\frac{5}{7} = \frac{b}{2!}+\frac{c}{3!}+\frac{d}{4!}+\frac{e}{5!}+\frac{f}{6!}+\frac{g}{7!}$
There are unique integers $b,c,d,e,f,g$ such that $\displaystyle \frac{5}{7} = \frac{b}{2!}+\frac{c}{3!}+\frac{d}{4!}+\frac{e}{5!}+\frac{f}{6!}+\frac{g}{7!}$
Where $0\leq b<2$, $0\leq c<3$, $0\leq d<4$, $0\leq e<5$, $0\leq f<6$, and $0\leq g<7$. Then the value of $b+c+d+e+f+g = $
$\bf{My\; Try}::$ $\displaystyle \frac{5}{7} = \frac{2520\cdot b+840\cdot c+210\cdot d+42\cdot e+7\cdot f+g}{7\times 720}$
$\displaystyle 2520\cdot b+840\cdot c+210\cdot d+42\cdot e+7\cdot f+g = 720\times 5 = 3600$
Now I do not understand how to proceed from here.
Help Required
Thanks
A:
Look at this equality that you've got: $$ 2520 \cdot b + 840 \cdot c + 210 \cdot d + 42 \cdot e + 7 \cdot f + g = 3600.$$
Note that if you consider everything modulo $7$, then most of the summands disappear, because $2520,840,210,42$ and $7$ are all multiples of $7$. So, taking remainders modulo $7$, we get $g \equiv 2 \pmod 7$. Since $0 \leq g < 7$, it follows that $g = 2$. Now substitute $2$ for $g$ in your equality, subtract $2$ from both sides and divide everything by $7$. You get
$$
360 \cdot b + 120 \cdot c + 30 \cdot d + 6 \cdot e + f = 514.
$$
Now consider both sides modulo $6$, and go on in a similar fashion. You will eventually find the values for all variables.
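For completeness, a sketch of carrying the same reduction through: modulo $6$, $514 \equiv 4$, so $f = 4$; dividing by $6$ gives $60b + 20c + 5d + e = 85$. Modulo $5$, $85 \equiv 0$, so $e = 0$; dividing by $5$ gives $12b + 4c + d = 17$. Modulo $4$, $17 \equiv 1$, so $d = 1$; dividing by $4$ gives $3b + c = 4$. Modulo $3$, $4 \equiv 1$, so $c = 1$ and therefore $b = 1$. Together with $g = 2$ this gives $b+c+d+e+f+g = 1+1+1+0+4+2 = 9$.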
| {
"pile_set_name": "StackExchange"
} |