John Hack John Hack (November 26, 1842 – March 29, 1933) was a decorated hero of the Union Army in the American Civil War. He was born in Hessen, Germany, and lived in Adrian, Michigan. Medal According to the Military Times Hall of Valor, "on 3 May 1863, while serving with Company B, 47th Ohio Infantry, in action at Vicksburg, Mississippi. Private Hack was one of a party which volunteered and attempted to run the enemy's batteries with a steam tug and two barges loaded with subsistence stores." Hack and nine others in Company B did this "under cover of darkness" while Confederate States Army batteries were shooting at them. Hack was awarded the Medal of Honor "for extreme bravery under fire" on January 3, 1907. Bibliography References Category:1842 births Category:1933 deaths Category:Hessian emigrants to the United States Category:Union Army soldiers Category:German-born Medal of Honor recipients Category:United States Army Medal of Honor recipients Category:American Civil War recipients of the Medal of Honor Category:People from Hesse Category:People from Adrian, Michigan
ROBINSON CRUSOE – CLASSIC TALE – IN CINEMAS MAY 6TH On a tiny exotic island, Tuesday, an outgoing parrot, lives with his quirky animal friends in paradise. However, Tuesday can't stop dreaming about discovering the world. After a violent storm, Tuesday and his friends wake up to find a strange creature on the beach: Robinson Crusoe. Tuesday immediately views Crusoe as his ticket off the island to explore new lands. Likewise, Crusoe soon realizes that the key to surviving on the island is through the help of Tuesday and the other animals. It isn't always easy at first, as the animals don't speak "human." Slowly but surely, they all start living together in harmony, until one day, when their comfortable life is overturned by two savage cats, who wish to take control of the island. A battle ensues between the cats and the group of friends but Crusoe and the animals soon discover the true power of friendship up against all odds (even savage cats).
---
abstract: 'In this paper, a unified approach to the power allocation (PA) problem in multi-hop orthogonal frequency division multiplexing (OFDM) amplify-and-forward (AF) relaying systems is developed. In the proposed approach, we consider short- and long-term individual and total power constraints at the source and relays, and devise decentralized low-complexity PA algorithms when wireless links are subject to channel path-loss and small-scale Rayleigh fading. In particular, aiming at improving the instantaneous rate of multi-hop transmission systems with AF relaying, we develop (i) a near-optimal iterative PA algorithm based on the exact analysis of the received SNR at the destination; (ii) a low-complexity suboptimal iterative PA algorithm based on an approximate expression in the high-SNR regime; and (iii) a low-complexity non-iterative PA scheme with limited performance loss. Since the PA problem in multi-hop systems is too complex to solve with known optimization solvers, in the proposed formulations we adopt a two-stage approach, consisting of a power distribution phase among distinct subcarriers and a power allocation phase among different relays. The individual PA phases are then appropriately linked through an iterative method which tries to compensate for the performance loss caused by the two-stage decomposition. Simulation results show the superior performance of the proposed power allocation algorithms.'
author:
- 'Amin Azari, Jalil S. Harsini, and Farshad Lahouti'
bibliography:
- 'IEEEabrv.bib'
- 'paper\_re.bib'
nocite:
- '[@nosrat]'
- '[@mul:cho]'
- '[@ltea]'
- '[@jmimo]'
- '[@path]'
- '[@sspa]'
- '[@ajsrpa]'
- '[@agfw]'
- '[@ajoipa]'
- '[@ssrpa]'
- '[@wen]'
- '[@opsoppa]'
title: 'Power Allocation in Multi-hop OFDM Transmission Systems with Amplify-and-Forward Relaying: A Unified Approach'
---

Introduction
============

Relaying and orthogonal frequency division multiplexing (OFDM) are promising techniques for high-speed data communication among wireless devices that may not be within direct transmission range of each other. Relaying protocols are broadly categorized as amplify-and-forward (AF) relaying, in which each relay forwards a scaled version of the received noisy copy of the source signal, and decode-and-forward (DF) relaying, in which each relay forwards a regenerated version of the received noisy copy of the source signal. AF relays may also be categorized as blind/fixed-gain, channel-assisted, and channel-noise-assisted, based on how source-relay channel state information (CSI) and noise statistics affect the relay gains [@ca:cna]. Capacity analysis and transmission protocol design over relay channels have attracted considerable research activity in the past decade [@kram]- [@lan]. As relays have shown their merit for data transfer purposes, multi-hop communications have also been included in advanced wireless standards such as 802.11n [@ieee-new], WiMAX, and LTE-Advanced [@ieee-ano]- [@lteee]. To achieve power efficiency in multi-hop transmission systems, it is necessary to devise efficient power allocation (PA) strategies for the source and intermediate relays once a multi-hop data transmission path is set up. For the simplest form of dual-hop relaying systems, the PA problem has been investigated in [@paaf]- [@ont:iha].
Specifically, in [@ont:iha], for a two-hop OFDM communication system with a given power budget, the authors presented the optimal power allocation at the relay (source) node for a given source (relay) power allocation scheme that maximizes the instantaneous rate of the system. Then, using simulations, they showed that a higher gain is achieved by iterating the power allocation between the source and the relay. A jointly optimal subcarrier pairing and power allocation scheme which maximizes the throughput of OFDM amplify-and-forward relaying systems subject to a statistical delay constraint is investigated in [@new3]. Power allocation for dual-hop OFDM relaying systems has also been considered in [@pacpa]- [@new1]. The power allocation problem for relaying system models with more than two hops over narrowband fading channels has been considered in [@opt:moh], [@poaf]. In particular, in [@poaf], aiming at maximizing the instantaneous rate, the power allocation solution for the AF relaying protocol over narrowband Rayleigh fading channels has been provided. In [@new2], a path searching algorithm has been presented to find the best links among relays, and two subcarrier allocation algorithms are then presented which aim at improving resource utilization. Optimal and suboptimal power allocation schemes for multi-hop OFDM systems with the DF relaying protocol have been developed in [@osopa]. Adaptive power allocation algorithms for maximizing system capacity (when full CSI is available) and minimizing system outage probability (with limited CSI) have been proposed in [@apafor] for multi-hop DF transmission systems under a total power constraint. Aiming at maximizing the end-to-end average transmission rate under a long-term total power constraint, the authors in [@e2epa2] developed a resource allocation scheme for a multi-hop OFDM DF relaying system in which the power allocated to each subcarrier and the transmission time per hop are specified. In [@apafmh], the authors proposed the solution of the power allocation problem in multi-hop OFDM relaying systems (AF and DF) under a total short-term power constraint, where the PA and capacity analysis for the AF relaying protocol is developed based on a high-SNR approximation. The approximation used in [@apafmh] was originally proposed in [@out:moh] and performs well for a small number of hops at high received SNR. In the low-to-medium SNR regime, or for multi-hop systems with more than three hops, this approximation loses its accuracy in the design of PA schemes. The multi-hop OFDM transmission system has also been considered in [@hajagha], where joint power allocation and subcarrier pairing solutions for the AF and DF relaying protocols have been devised under a short-term total power constraint. The analysis for the AF relaying protocol in [@hajagha] is also based on the high-SNR approximation discussed above.

Paper's contributions
---------------------

Although several research works have been reported on the power allocation problem in multi-hop OFDM systems, to the best of the authors' knowledge, no attempt has been made to develop a unifying approach addressing the different aspects of these systems. In this paper, we focus on multi-hop OFDM systems with AF relaying and different system power constraints. We develop a unified framework for efficient power allocation which includes both iterative and non-iterative solutions.
In particular, aiming at maximizing the instantaneous system rate under individual and total, short- and long-term power constraints, the following power allocation schemes have been devised: (i) a near-optimal iterative PA algorithm which is developed based on the analysis of an exact expression for the received SNR at the destination; (ii) a low-complexity suboptimal iterative PA algorithm in which we use a high-SNR approximation of the system rate for design purposes; and (iii) a low-complexity non-iterative PA scheme based on a high-SNR rate analysis at the destination.

The rest of this paper is organized as follows. In Section II we introduce the system model and the power allocation problem formulation. An iterative PA solution based on the analysis of the exact destination SNR is presented in Section III. In Section IV we provide sub-optimal PA schemes based on the analysis of an approximate high-SNR expression at the destination. Simulation results are provided in Section V, and concluding remarks are presented in Section VI.

System Model and Problem Formulation {#broad}
=====================================

Fig. \[fig1\] shows the multi-hop transmission system model under consideration, where OFDM is utilized for broadband communication among the consecutive nodes. We assume that the system uses a routing algorithm, and therefore the path between the source and destination nodes is already established. Here the source node, $T_0$, sends data bits to the destination node $T_N$ via $N-1$ intermediate relay nodes, $T_1,T_2, \ldots,T_{N-1}$, over orthogonal time slots and orthogonal subcarriers. The fading gain of the narrowband subchannel $i$ (corresponding to the $i$th subcarrier) between nodes $T_{k-1}$ and $T_k$, denoted by $a_{k,i}$, is modeled as a zero-mean circularly symmetric complex Gaussian random variable with variance $\sigma_{k,i}^2$. In an AF multi-hop relaying system, each relay first amplifies the signal received from its immediate preceding node, and then forwards it to the next node in the subsequent time slot. The amplification gain in the $i$th subcarrier of node $T_k$ is adapted based on the instantaneous fading amplitude of the channel between nodes $T_{k-1}$ and $T_k$, i.e. ${\lvert a_{k,i}\rvert}$. To ensure an output relay transmit power $P_{k,i}$ on the $i$th subcarrier, the amplification gain is adjusted as [@ont:iha]: ![Multi-hop OFDM relaying system with $N-1$ intermediate relays.[]{data-label="fig1"}](ofdm.jpg){width="3.5in"} $$\begin{array}{l} A_{{k,i}}^2=\frac {{P}_{{k,i}}}{{P}_{{k-1,i}}{{\lvert a_{{k,i}}\rvert}^2}+N_{{0}_{k,i}}} \; \nonumber\hspace{6mm}\text{for}\hspace{4mm} k=1, \ldots,N-1; i=1, \ldots,N_{F} \end{array}$$ where $P_{{0},i}$ and $P_{k,i}$ denote the transmission powers at the source and the $k$th relay for the $i$th subcarrier, respectively. In this model, the number of subcarriers, i.e., the number of points for the fast Fourier transform (FFT), and the noise power at the $k$th relay within the $i$th subcarrier are denoted by $N_{F}$ and $N_{{0}_{k,i}}$, respectively.
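To make the gain normalization above concrete, the following minimal Python sketch (our own illustration, not code from the paper; all variable names are ours) computes the squared AF gains along one subcarrier of a relay chain:

```python
import numpy as np

def af_gains_squared(powers, fading, noise):
    """Squared AF gains A_{k,i}^2 for one subcarrier.

    powers[k] -- transmit power P_{k,i} of node T_k, k = 0, ..., N-1
    fading[j] -- complex fading coefficient a_{j+1,i} of hop j+1
    noise[j]  -- noise power N_{0,(j+1),i} at the receiving node of hop j+1
    """
    gains = []
    for k in range(1, len(powers)):
        # total power entering relay T_k: signal from T_{k-1} plus noise
        received = powers[k - 1] * abs(fading[k - 1]) ** 2 + noise[k - 1]
        gains.append(powers[k] / received)   # enforces output power P_{k,i}
    return np.array(gains)

# toy example: source plus two relays on one subcarrier
print(af_gains_squared(powers=[1.0, 0.8, 0.8],
                       fading=[0.5 + 0.2j, 0.3 - 0.1j],
                       noise=[1e-3, 1e-3]))
```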
For the considered multi-hop OFDM system model, the instantaneous received SNR over the $i$th subcarrier at the destination node is given by $$\label{sntt36} \begin{array}{l} \gamma_{T}^i=\left(\prod_{k=1}^N (1+\frac{1}{\tilde{\gamma}_{k,i}})-1\right)^{-1}, \end{array}$$ where $\tilde{\gamma}_{k,i} = {\frac{P_{k-1,i}}{{N_{{0}}}_{k,i}}}{\lvert{a_{k,i}}\rvert}^2$ denotes the instantaneous received SNR over the $i$th subcarrier of the $k$th hop with the average $\tilde{\Gamma}_{k,i} = {\frac{P_{k-1,i}}{{N_{{0}}}_{k,i}}}{{\sigma_{k,i}^2}}$. Using this expression, the instantaneous rate of the end-to-end multi-hop system is given by $$\label{Ccap} \begin{array}{l} C=\frac{1}{N}\sum_{i=1}^{N_{F}}{\log(1+\gamma_{T}^i)}. \end{array}$$ Considering $\alpha_{k,i}=\frac{P_{k,i}}{P_T}$ and ${\gamma}_{k,i} = \frac{P_{T}{{\lvert a_{k,i}\rvert}}^2}{{N_{{0}}}_{k,i}}$, $C$ may be rewritten as follows $$\label{Ccap1} \begin{array}{l} C=\frac{1}{N}\sum_{i=1}^{N_{F}}{\log\left(1+\frac{1}{\prod_{k=1}^N (1+\frac{1}{{\alpha_{k,i}\gamma}_{k,i}})-1}\right)}. \end{array}$$ In a Rayleigh fading environment, ${\gamma}_{k,i}$ follows an exponential distribution with the average ${\Gamma}_{k,i} = \frac{P_{T}{\sigma_{k,i}}^2}{{N_{{0}}}_{k,i}}$. Given the power constraint, the goal here is to find the transmit powers of the subcarriers at the source and the relay nodes such that the instantaneous rate above is maximized. In this paper, we consider the power allocation optimization problem under short-term power constraints (STPC), long-term individual power constraints (LTIPC), and long-term total power constraints (LTTPC). The general form of the power allocation (PA) optimization problem in this work is as follows. $$\label{op2op0} \begin{array}{l} \hspace{5mm}\max_{\alpha_{k,i}} \quad C \qquad \mathrm{s.t.} \\ \quad\sum_{i=1}^{N_{F}}\sum_{k=0}^{N-1}\alpha_{k,i}=1 \hspace{18mm} \text{STPC}\\ \\ \quad\mathbb{E}(\alpha_{k,i})=\frac{1}{N \times N_{F}} \hspace{24mm} \text{LTIPC}\\ \\ \quad\sum_{i=1}^{N_{F}}\sum_{k=0}^{N-1}\mathbb{E}(\alpha_{k,i})=1 \hspace{13mm} \text{LTTPC}\\ \end{array}$$ To explicitly identify the way power is distributed among different subcarriers and nodes, we denote by $P_{i}$ and $P_{k,i}$, respectively, the power allocated to the $i$th subcarrier (over all nodes) and the power allocated to the $i$th subcarrier at the $k$th node (hop). We also define two new nonnegative PA coefficients $\mu_{i}$ and $\beta_{k,i}$ as follows $$\begin{aligned} P_{i}=&\mu_{i}\times P_{T}\\ \text{and} \quad P_{k,i}=&\beta_{k,i}\times P_i=\beta_{k,i}\times\mu_{i}\times P_{T},\end{aligned}$$ where $\alpha_{k,i}=\beta_{k,i}\times\mu_i$. The optimization problem (3) may now be rewritten as follows with $\mu_{i}$ and $\beta_{k,i}$ as the optimization variables, $$\label{op2op1} \begin{array}{l} \hspace{5mm}\max_{\beta_{k,i},\mu_i} \quad C \qquad \mathrm{s.t.} \\ \mathrm{C1.} \quad\\ \hspace{6mm}\sum_{i=1}^{N_{F}}\sum_{k=0}^{N-1}\mu_i\beta_{k,i}=1 \hspace{25mm} \text{STPC}\\ \hspace{6mm} \mathbb{E}(\mu_{i})=\frac{1}{N_{F}}, \quad \mathbb{E}(\beta_{k,i})=\frac{1}{N}\hspace{17mm} \text{LTIPC}\\ \hspace{6mm} \sum_{i=1}^{N_{F}}\mathbb{E}(\mu_{i})=1 \quad \sum_{k=0}^{N-1}\mathbb{E}(\beta_{k,i})=1 \hspace{2mm}\text{LTTPC}\\ \mathrm{C2.} \quad \mu_i \geq 0 \quad i= 1,\ldots, N_{F}\\ \mathrm{C3.} \quad \beta_{k,i} \geq 0 \quad k= 0,\ldots, N-1 ; i= 1,\ldots, N_{F}. \end{array}$$ We note that allocating power to subcarriers (by finding $\mu_i$) and to each subcarrier per node (by finding $\beta_{k,i}$) in problem (6) is equivalent to finding $\alpha_{k,i}$ in (3).
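The end-to-end SNR and rate defined in this section are straightforward to evaluate numerically for a given allocation. The following Python sketch (our own illustration with arbitrary toy inputs; the logarithm base is assumed to be 2, i.e., rate in bits, since the paper leaves the base of $\log$ unspecified) evaluates $\gamma_T^i$ and $C$ for one channel realization:

```python
import numpy as np

def end_to_end_snr(alpha_i, gamma_i):
    """gamma_T^i for one subcarrier; element k holds the power fraction and SNR of hop k+1."""
    return 1.0 / (np.prod(1.0 + 1.0 / (alpha_i * gamma_i)) - 1.0)

def instantaneous_rate(alpha, gamma):
    """C = (1/N) * sum_i log2(1 + gamma_T^i); alpha and gamma are (N, N_F) arrays."""
    N, N_F = gamma.shape
    snr = np.array([end_to_end_snr(alpha[:, i], gamma[:, i]) for i in range(N_F)])
    return np.sum(np.log2(1.0 + snr)) / N

# toy example: N = 3 hops, N_F = 4 subcarriers, uniform power allocation
rng = np.random.default_rng(0)
N, N_F = 3, 4
gamma = rng.exponential(scale=10.0, size=(N, N_F))   # Rayleigh fading -> exponential SNRs
alpha = np.full((N, N_F), 1.0 / (N * N_F))           # satisfies the STPC: sum(alpha) == 1
print(instantaneous_rate(alpha, gamma))
```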
Unfortunately, the power allocation problems (3) and (6) are too complicated to be solved directly with known optimization solvers. Hence, in the subsequent sections, to find efficient solutions under the different power constraints, we develop iterative and non-iterative algorithms based on the exact and approximate SNR expressions.\

Iterative Power Allocation
==========================

In this Section, we treat the power allocation problem (6) as an alternate maximization over two simpler power allocation sub-problems. In the first sub-problem, the optimized $\{\beta_{k,i}\}$ is obtained assuming that $\{\mu_{i}\}$ is available. In the second sub-problem, $\{\mu_{i}\}$ is determined for a given set of $\{\beta_{k,i}\}$. Next, we provide an iterative PA algorithm in which the two sub-problems are alternately considered, with the output of one as the input of the other. Numerical evaluation in Section V verifies the effectiveness of the proposed approach.

Sub-problem 1: Power allocation among relays
--------------------------------------------

Given that the power allocated to each of the subcarriers, $\{\mu_i\}$, is already known, the power allocation problem among relays in one subcarrier is equivalent to the power allocation problem in a multi-hop system with a narrowband fading model. Here, we provide optimal solutions for the PA problems in multi-hop narrowband communication systems with individual long-term and total power constraints. We note that in an AF multi-hop transmission system, maximizing the instantaneous rate of the system is equivalent to maximizing the instantaneous received SNR [@poaf]. The instantaneous received SNR at the destination of the multi-hop system given above may be expressed as follows: $$\begin{array}{l} \ln\left(1+{\gamma_{T}^i}^{-1}\right)=\ln\prod_{k=1}^N( 1+\frac{1}{\tilde{\gamma}_{k,i}})=\sum_{k=1}^N \ln(1+\frac{1}{{\tilde{\gamma}_{k,i}}}). \end{array}$$ Then, $\gamma_T^i$ is found as $$\label{Gamma_t} \begin{array}{l} \gamma_{T}^i=\left[\exp \left(\sum_{k=1}^N\ln(1+\frac{1}{\tilde{\gamma}_{k,i}})\right)-1\right]^{-1}. \end{array}$$ Since maximizing $\gamma_{T}^i$ is equivalent to minimizing the argument of the exponential function on the right-hand side above, we can conclude that $$\label{newmax} \begin{array}{l} \text{arg}\max_{\{\beta_{k,i}\}}\gamma_{T}^i = \text{arg}\min_{\{\beta_{k,i}\}}{\sum_{k=1}^N \ln(1+\frac{1}{\tilde{\gamma}_{k,i}})}. \end{array}$$ Using this equivalence, the optimization problem for the $i$th subcarrier under LTIPC is written as follows: $$\label{optschaver1} \begin{array}{l} \hspace{5mm} \min_{\{\beta_{k,i}\}} \quad \sum_{k=0}^{N-1} \ln(1+\frac{1}{\beta_{k,i}\mu_i\gamma_{k+1,i}}) \\ \\ \mathrm{s.t.} \quad \mathbb{E}(\beta_{k,i})=\frac{1}{N},\quad k=0, 1, \ldots, N-1, \end{array}$$ where $\{\mu_i\}$ is given. In Appendix \[ap1conv\], it is shown that the objective function above is convex. Since the constraints are linear, the problem is convex, and we therefore use the Lagrange method to obtain the optimal solution. The Lagrangian function is given by $$\begin{array}{l} \mathcal{L}= \sum_{k=0}^{N-1} \ln(1+\frac{1}{\beta_{k,i} \mu_i\gamma_{k+1,i}}) +\sum_{k=0}^{N-1} \lambda_{k,i}\left(\mathbb{E}(\beta_{k,i})-\frac{1}{N}\right)\\ \end{array}$$ where $\{ \lambda _{k,i} \}_{k = 0}^{N - 1}$ are the Lagrange multipliers.
By taking the derivative of $\mathcal{L}$ with respect to $\beta_{k,i},k = 0,1,...,N - 1$, one obtains $$\label{betaop} \begin{array}{l} \frac{\partial\mathcal{ L}}{\partial \beta_{k,i}}=0 \quad \rightarrow \quad \lambda_{k,i}-\frac {1}{\beta_{k,i}(\mu_i\gamma_{k+1,i}\beta_{k,i}+1)} =0 \\ \\ \hspace{12mm}\rightarrow\beta_{k,i}=\frac{-1+\sqrt{1+4\frac{\mu_i\gamma_{k+1,i}}{\lambda_{k,i}}}}{2\mu_i\gamma_{k+1,i}}. \end{array}$$ Substituting this solution into the $k$th individual power constraint in (10) leads to $$\label{16} \begin{array}{l} \mathbb{E}(\beta_{k,i})=\frac{1}{N} \quad \rightarrow \quad \mathbb{E}\left(\frac{-1+\sqrt{1+4\frac{\mu_i\gamma_{k+1,i}}{\lambda_{k,i}}}}{2\mu_i\gamma_{k+1,i}}\right)=\frac{1}{N}\\ \\ \hspace{6mm}\rightarrow\int_{{0}}^{\infty} \frac{\sqrt{1+4\frac{\mu_i\gamma_{k+1,i}}{\lambda_{k,i}}}-1}{2\mu_i\gamma_{k+1,i}}\frac{ e^{\frac{-\mu_i\gamma_{k+1,i}}{\mu_i\Gamma_{k+1,i}}}}{\mu_i \Gamma_{k+1,i}}{d}_{\gamma_{k+1,i}}=\frac{1}{N}. \end{array}$$ To find the Lagrange multiplier $\lambda_{k,i}$, the integral above can be evaluated numerically and a bisection root-finding method [@XX] can be utilized. From the above expressions, it is evident that the power coefficient at the $k$th node, $\beta_{k,i}, k = 0, \ldots, N - 1$, depends only on the fading gain of its immediate forward channel. As a result, the proposed power allocation scheme can be implemented in a decentralized manner. Such a PA scheme is potentially applicable in ad-hoc wireless networks over narrowband channels. Using similar steps for the LTTPC case, the PA coefficient is obtained by $$\begin{array}{l} \beta_{k,i}=\frac{-1+\sqrt{1+4\frac{\mu_i\gamma_{k+1,i}}{\lambda_{i}}}}{2\mu_i\gamma_{k+1,i}}, \end{array}$$ where the constant $\lambda_i$ is calculated using the corresponding power constraint as follows $$\sum_{k=0}^{N-1}\mathbb{E}(\beta_{k,i})=1.$$ Moreover, for the STPC case, the procedure is similar to the LTIPC scenario.

Sub-problem 2: Power allocation among subcarriers
-------------------------------------------------

The instantaneous rate, $C$, can be written as a function of the $\mu_i$s and $\beta_{k,i}$s as follows $$\label{casl} \begin{array}{l} C=\frac{1}{N}\sum_{i=1}^{N_{F}}{\log\left(1+\frac{1}{\prod_{k=0}^{N-1} (1+\frac{1}{{\beta_{k,i}\mu_i\gamma}_{k+1,i}})-1}\right)}. \end{array}$$ Given the $\beta_{k,i}$s, $C$ is to be maximized by finding the optimal $\mu_i$s. We start by expanding the expression for $\gamma_T^i$, as follows $$\begin{array}{l} \prod\limits_{k = 0}^{N-1} {(1 + \frac{1}{{{\beta _{k,i}}{\mu _i}{\gamma _{k+1,i}}}})} = \\ 1+ \frac{1}{{{\mu _i}}}(\sum\limits_{k= 0}^{N-1} {\frac{1}{{{\beta _{k,i}}{\gamma _{k+1,i}}}}} ) + \frac{{{1 \mathord{\left/ {\vphantom {1 {2!}}} \right. \kern-\nulldelimiterspace} {2!}}}}{{{\mu _i}^2}}(\sum\limits_{k = 0}^{N-1} {\sum\limits_{_{j \ne k}^{j = 0}}^{N-1} {\frac{1}{{{\beta _{k,i}}{\gamma _{k+1,i}}}}} \frac{1}{{{\beta _{j,i}}{\gamma _{j+1,i}}}}} ) + \cdots\\ + \frac{{{1 \mathord{\left/ {\vphantom {1 {N!}}} \right. \kern-\nulldelimiterspace} {N!}}}}{{{\mu _i}^N}}(\sum\limits_{k = 0}^{N-1} {\sum\limits_{_{j \ne k}^{j = 0}}^{N-1}{ \cdots \sum\limits_{_{l \ne k,l\ne j,\cdots }^{\hspace{4mm}l = 0}}^{N-1} {\frac{1}{{{\beta _{k,i}}{\gamma _{k+1,i}}}}} \frac{1}{{{\beta _{j,i}}{\gamma _{j+1,i}}}} \cdots \frac{1}{{{\beta _{l,i}}{\gamma _{l+1,i}}}}} } ).
\end{array}$$ Then, one can rewrite the rate as $$\label{casl0} \begin{array}{l} C=\frac{1}{N}\sum_{i=1}^{N_{F}}\log\left(1+\frac{{\mu_i}^N}{A_{{1},i}{\mu_i}^{N-1}+A_{2,i}{\mu_i}^{N-2}+\cdots +A_{N,i} }\right), \end{array}$$ where $$\label{A1} \begin{array}{l} {A_{{1},i}} = \frac{1}{{1!}}\sum\limits_{k = 0}^{N-1} {\frac{1}{{{\beta_{k,i}}{\gamma_{k+1,i}}}}} \\ {A_{2,i}} = \frac{1}{{2!}}\sum\limits_{k = 0}^{N-1} {\sum\limits_{_{j \ne k}^{j = 0}}^{N-1} {\frac{1}{{{\beta_{k,i}}{\gamma_{k+1,i}}}}} \frac{1}{{{\beta _{j,i}}{\gamma _{j+1,i}}}}} \\ \hspace{2mm}\vdots \\ {A_{N,i}} = \frac{1}{{N!}}\sum\limits_{k = 0}^{N-1} {\sum\limits_{_{j \ne k}^{j = 0}}^{N-1} { \cdots \hspace{-3mm}\sum\limits_{_{l \ne k,l\ne j,\cdots }^{\hspace{4mm}l = 0}}^{N-1} \hspace{-3mm}{\frac{1}{{{\beta _{k,i}}{\gamma _{k+1,i}}}}} \frac{1}{{{\beta _{j,i}}{\gamma _{j+1,i}}}} \cdots \frac{1}{{{\beta _{l,i}}{\gamma _{l+1,i}}}}} }. \end{array}$$ Under LTIPC, we construct the following Lagrangian function $$\label{opop1} \begin{array}{l} \mathcal{L}={\sum_{i=1}^{N_{F}}{\log(1+\frac{{\mu_i}^N}{A_{{1},i}{\mu_i}^{N-1}+A_{2,i}{\mu_i}^{N-2}+\cdots +A_{N,i}}}})\\ \hspace{30mm}-\left(\sum_{i=1}^{N_{F}}\lambda_i({\mathbb{E}(\mu_{i})-\frac{1}{N_{F}}})\right)-\boldsymbol{v}^{T}\boldsymbol{\mu}, \end{array}$$ where $\boldsymbol{v}^{T}=\left[v_1, \ldots, v_{N_{F}}\right]$ and $\boldsymbol{\mu}=\left[\mu_1, \cdots, \mu_{N_{F}}\right]$; $\lambda_i$ and $v_i$ are the Lagrange multipliers corresponding to the power constraint C1 and the nonnegativity constraint C2, respectively. Furthermore, by taking the derivative of $\mathcal{L}$ with respect to ${\mu_i}, i = 1,...,N_{F}$, one obtains $$\label{eqn1} (\lambda_i + \nu_i ){D^2} + ((\lambda_i + \nu_i ){\mu _i}^N + N{\mu _i}^{N - 1})D - {\mu _i}^N{D^{'}} = 0,$$ where $$\begin{array}{l} D = {A_{{1},i}}{\mu _i}^{N - 1} + {A_{2,i}}{\mu _i}^{N - 2} +\cdots+ {A_{N,i}}\\ \text{and} \quad {D^{'}} = {{\partial D} \mathord{\left/ {\vphantom {{\partial D} \partial }} \right. \kern-\nulldelimiterspace} \partial }{\mu _i}. \end{array}$$ To find the PA coefficient $\mu_i$, the polynomial equation above is to be solved numerically, subject to the power constraint. The difficulty of solving it increases with the number of hops. For STPC and LTTPC, the procedure is the same as for LTIPC, with the corresponding power constraints.

The iterative scheme {#IPA}
--------------------

In this subsection, we present an algorithm which iterates between power allocation among relays and among subcarriers to maximize the instantaneous transmission rate of the network. As verified in Section V, such an iterative solution can be used as an upper bound for the performance evaluation of the proposed suboptimal solutions in Section IV. In this algorithm, the optimizations in sub-problem 1 and sub-problem 2 are repeated alternately, such that the PA parameters obtained from the previous optimization are the input for the next one.

Iterative Algorithm

1. Initialize the subcarrier PA coefficients to $\mu_i=1/N_{F}, \forall i= 1,\ldots, N_{F}$.
2. Given the $\mu_i$s, find the $\beta_{k,i}$s from sub-problem 1.
3. Find the instantaneous rate ${C}$ by substituting the PA coefficients $\beta_{k,i}$ and $\mu_i$ into the rate expression.
4. Given the $\beta_{k,i}$s, find the $\mu_i$s from sub-problem 2.
5. Find the instantaneous rate ${C}$ using the $\beta_{k,i}$ and $\mu_i$ obtained in steps 2 and 4, respectively.
6. If the difference between the rates found in steps 3 and 5 is above a given small threshold, repeat steps 2 to 6; otherwise (or after a predefined number of iterations) go to step 7.
7. Report $\mu_i$ and $\beta_{k,i}$, $\forall i=1,\ldots, N_{F}$, $k=1,\ldots, N-1$.

Power Allocation Schemes in High-SNR Regime {#asy}
===========================================

In this Section, we focus on the high-SNR regime in the multi-hop system and present efficient solutions for the power allocation problem in (6). The motivation for studying such power allocation algorithms is their simplicity relative to the iterative solution presented in the previous section. We start by rewriting the instantaneous rate expression as $$\label{casl1} \begin{array}{l} C=\frac{1}{N}\sum_{i=1}^{N_{F}}\log\left(1+\frac{1}{\frac{A_{{1},i}}{\mu_i}+\frac{A_{2,i}}{{\mu_i}^{2}}+\cdots +\frac{A_{N,i}}{{\mu_i}^{N}} }\right). \end{array}$$ In the high-SNR regime, the parameters $A_{k,i}$ defined above, for $k=2,\cdots, N$, are negligible in comparison with $A_{{1},i}$; hence we neglect the higher-order terms and rewrite the expression as follows $$\label{casl5} \begin{array}{l} C_{app}= \frac{1}{N}\sum_{i=1}^{N_{F}}\log\left(1+\frac{\mu_i}{A_{{1},i} }\right).\\ \end{array}$$ Substituting $A_{{1},i}$ from its definition, the instantaneous rate is expressed as $$\label{casl4} \begin{array}{l} C_{app}=\frac{1}{N}\sum_{i=1}^{N_{F}}\log(1+\frac{\mu_i}{\sum\limits_{k = 0}^{N-1} {\frac{1}{{{\beta _{k,i}}{\gamma _{k+1,i}}}}}}).\\ \end{array}$$ Using this approximation, iterative and non-iterative power allocation schemes are developed in the following subsections.

Iterative PA in high-SNR regime {#asy1}
-------------------------------

In the high-SNR regime, steps 2 and 4 of the iterative algorithm can be implemented in a simpler way, as described below.

*A.1 Sub-problem 1: power allocation among relays*

According to the approximation above, the instantaneous rate of the $i$th subcarrier is given by $$\label{casl3} \begin{array}{l} {C_i}_{app}=\log(1+\frac{\mu_i}{\sum\limits_{k = 0}^{N-1} {\frac{1}{{{\beta _{k,i}}{\gamma _{k+1,i}}}}}}).\\ \end{array}$$ Formulating the power allocation problem for the multi-hop narrowband system, the general optimization problem is written as follows: $$\label{optproblemsch} \begin{array}{l} \hspace{10mm}\max_{\{\beta_{k,i}\}} {C_i}_{app} \\ \mathrm{s.t.}\\ \hspace{5mm}\sum_{k=0}^{N-1}\beta_{k,i}=1\hspace{40.5mm} \hspace{5mm}\text{STPC} \\ \hspace{5mm}\mathbb{E}(\beta_{k,i})=\frac{1}{N},\quad k=0, 1, \ldots, N-1\hspace{15.5mm} \text{LTIPC} \\ \hspace{5mm}\sum_{k=0}^{N-1}\mathbb{E}(\beta_{k,i})=1\hspace{40.5mm} \text{LTTPC} \\ \end{array}$$ which is a convex power allocation problem; using the Lagrange method, the PA coefficients under the long-term individual power constraint are derived as $$\label{be1} \begin{array}{l} {\beta _{k,i}} = \frac{1}{{N\sqrt {\frac{{\pi {\gamma _{k + 1,i}}}}{{{\Gamma _{k + 1,i}}}}} }} = \frac{{\left| {{\sigma _{k + 1,i}}} \right|}}{{N\sqrt \pi \left| {{a_{k + 1,i}}} \right|}}. \end{array}$$ The power allocation coefficients under LTTPC are calculated as $$\label{be2} \begin{array}{l} {\beta _{k,i}} = \frac{1}{{\sqrt \pi \sum\limits_{j = 1}^N {\sqrt {\frac{{{\gamma _{k + 1,i}}}}{{{\Gamma _{j,i}}}}} } }} = \frac{1}{{\sqrt \pi \sum\limits_{j = 1}^N {\frac{{\left| {{a_{k + 1,i}}} \right|}}{{\left| {{\sigma _{j,i}}} \right|}}} }}.
\end{array}$$ Moreover, the power allocation coefficients under STPC are derived as follows $$\label{be3} \begin{array}{l} {\beta _{k,i}} =\frac{1}{\sum\limits_{j=1}^{N}\sqrt{\frac{\gamma_{k+1,i}}{\gamma_{j,i}}}}=\frac{1}{\sum\limits_{j=1}^{N}{\lvert\frac{a_{k+1,i}}{a_{j,i}}\rvert}}. \end{array}$$ Note that the above PA solutions were first derived in [@poaf], where the authors investigated high-SNR power allocation problems for a multi-hop narrowband system model. Here, however, we use such solutions to provide a power allocation scheme for a wideband multi-hop system with OFDM modulation.

*A.2 Sub-problem 2: power allocation among subcarriers*

In the high-SNR regime, the subcarrier power allocation problem is rewritten as $$\label{opt42news} \begin{array}{l} \hspace{6mm}\max_{\{{\mu_i}\}} \quad C_{app} \quad \mathrm{s.t.}\\ \mathrm{C1.} \\ \hspace{5mm}\sum_{i=1}^{N_{F}}\mu_{i}=1\hspace{15mm} \text{STPC}\\ \hspace{5mm}\mathbb{E}(\mu_{i})=\frac{1}{N_{F}}\hspace{15.5mm} \text{LTIPC}\\ \hspace{5mm}\sum_{i=1}^{N_{F}}\mathbb{E}(\mu_{i})=1\hspace{10mm} \text{LTTPC}\\ \mathrm{C2.} \quad \mu_i \geq 0 \quad i= 1,\ldots, N_{F}. \end{array}$$ where $C_{app}$ is given above. Since the objective function is concave (see Appendix \[ap2conc\]) and the constraints are linear, this optimization problem has a unique optimal solution. Under LTIPC, one can construct the Lagrangian function as follows $$\label{opop2} \begin{array}{l} \mathcal{L}={\sum_{i=1}^{N_{F}}{\log(1+\frac{\mu_i}{\sum_{k=0}^{N-1}\frac{1}{\gamma_{k+1,i}\beta_{k,i}}}}})\\ \hspace{30mm}-\left(\sum_{i=1}^{N_{F}}\lambda_i({\mu_i-\frac{1}{N_{F}}})\right)-\boldsymbol{v}^{T}\boldsymbol{\mu}, \end{array}$$ where $\lambda_i$, $i=1, 2, \ldots,N_{F}$, and the vector $\boldsymbol{v}^{T}=\left[v_1, \ldots, v_{N_{F}}\right]$ are Lagrange multipliers. The optimized $\mu_i$ is calculated by setting $\frac{\partial\mathcal{ L}}{\partial \mu_i} =0$, as follows $$\label{most1} \begin{array}{l} \mu_i=\left[\frac{1}{\lambda_i}-\sum_{k=0}^{N-1}\frac{1}{\gamma_{k+1,i}\beta_{k,i}}\right]^{+}. \end{array}$$ Here, the constant $\lambda_i$ is found to satisfy the LTIPC constraint C1 with equality, i.e., $$\begin{array}{l} \mathbb{E}\bigg(\left[\frac{1}{\lambda_i}-\sum_{k=0}^{N-1}\frac{1}{\gamma_{k+1,i}\beta_{k,i}}\right]^{+}\bigg)=\frac{1}{N_{F}} \end{array}$$ or $$\label{39} \begin{array}{l} \int_0^{\frac{{{1}}}{{ {\lambda _i}}}} {{f_Y}(y)dy} =\frac{1}{N_{F}} \end{array}$$ where $$\begin{array}{l} Y=\sum_{k=0}^{N-1}\frac{1}{\gamma_{k+1,i}\beta_{k,i}}. \end{array}$$ Here, $f_Y(y)$ is the probability density function (PDF) of $Y$. The PDF of $Y$ is calculated by the convolution of the probability density functions of $\frac{1}{\gamma_{k+1,i}\beta_{k,i}}$, $k\in\{0,\cdots,N-1\}$, where $\gamma_{k+1,i}$ follows an exponential distribution and $\beta_{k,i}$ is a given constant. Using the same procedure, the solution of sub-problem 2 under LTTPC is derived as $$\label{most2} \begin{array}{l} \mu_i=\left[\frac{1}{\lambda}-\sum_{k=0}^{N-1}\frac{1}{\gamma_{k+1,i}\beta_{k,i}}\right]^{+} \end{array}$$ where the constant $\lambda$ is found to satisfy the LTTPC constraint C1 with equality, i.e., $$\begin{array}{l} \sum\limits_{i=1}^{N_{F}}\mathbb{E}\bigg(\left[\frac{1}{\lambda}-\sum_{k=0}^{N-1}\frac{1}{\gamma_{k+1,i}\beta_{k,i}}\right]^{+}\bigg)=1. \end{array}$$ The procedure for STPC is the same as for LTTPC.
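For the short-term constraint, the two high-SNR sub-problems above reduce to closed-form relay weights followed by a water-filling allocation over the subcarriers. The following Python sketch is our own minimal illustration of that two-step procedure (bisection on the water level; the names and toy inputs are ours, not the paper's):

```python
import numpy as np

def beta_stpc(gamma):
    """High-SNR relay weights under STPC: beta[k,i] = 1 / sum_j sqrt(gamma[k,i] / gamma[j,i])."""
    sqrt_g = np.sqrt(gamma)                                   # gamma: (N, N_F) per-hop SNRs
    return 1.0 / (sqrt_g * np.sum(1.0 / sqrt_g, axis=0, keepdims=True))

def mu_waterfilling(Y, budget=1.0, iters=60):
    """mu_i = [level - Y_i]^+ with the water level (1/lambda) set so that sum_i mu_i = budget."""
    lo, hi = 0.0, np.min(Y) + budget                          # bracket for the water level
    for _ in range(iters):
        level = 0.5 * (lo + hi)
        if np.sum(np.maximum(level - Y, 0.0)) > budget:
            hi = level
        else:
            lo = level
    return np.maximum(0.5 * (lo + hi) - Y, 0.0)

# toy example: N = 3 hops, N_F = 8 subcarriers
rng = np.random.default_rng(1)
gamma = rng.exponential(scale=20.0, size=(3, 8))
beta = beta_stpc(gamma)                   # sums to 1 over the hops of each subcarrier
Y = np.sum(1.0 / (gamma * beta), axis=0)  # Y_i = sum_k 1/(gamma_{k+1,i} * beta_{k,i})
mu = mu_waterfilling(Y)                   # subcarrier shares, sum to 1
alpha = beta * mu                         # overall coefficients alpha_{k,i} = beta_{k,i} * mu_i
print(alpha.sum())                        # ~1.0, i.e., the STPC is met
```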
Non-iterative PA scheme in high-SNR regime using channel statistics
-------------------------------------------------------------------

While in the previous sections we presented two PA schemes which utilize CSI for power allocation among subcarriers and relays, low-complexity schemes which can work with channel statistics instead of full CSI are always appreciated. As we saw in Section \[asy1\], the power allocation among relays is independent of the power allocation among subcarriers. This fact motivates us to present a non-iterative power allocation algorithm for the high-SNR regime. To this end, we insert the derived $\beta_{k,i}$ expressions into the subcarrier allocation solutions of the previous subsection. Then, the PA coefficient ${\alpha _{k,i }}={\mu _i}{\beta _{k,i}}$ is derived by substituting the instantaneous values with their means. In this case, the power allocation coefficient under the long-term total power constraint becomes: $${\alpha _{k,i}} = {\left[ {\frac{1}{\lambda } - \sum\limits_{k = 0}^{N-1} {\frac{{\sqrt \pi }}{{\sqrt {{\gamma _{k + 1,i}}} }}\sum\limits_{j = 1}^N {\frac{1}{{\sqrt {{\Gamma _{j,i}}} }}} } } \right]^ + }{\left[ {\sum\limits_{j = 1}^N {\sqrt {\frac{{\pi {\rm{ }}{\gamma _{k + 1,i}}}}{{{\Gamma _{j,i}}}}} } } \right]^{ - 1}}.$$ For the case of the long-term individual power constraint, the solution is given by $${\alpha _{k,i}} = \begin{array}{*{20}{l}} {{{\left[ {\frac{1}{{{\lambda _i}}} - \sum\limits_{k = 0}^{N-1} {\frac{{N\sqrt \pi }}{{\sqrt {{\gamma _{k + 1,i}}{\Gamma _{k + 1,i}}} }}} } \right]}^ + }} \end{array}\frac{{\sqrt {{\Gamma _{k + 1,i}}} }}{{N\sqrt {\pi {\rm{ }}{\gamma _{k + 1,i}}} }}.$$ Moreover, by considering a short-term power constraint, we obtain $${\alpha _{k,i}} = {\left[ {\frac{1}{\lambda } - \sum\limits_{k = 0}^{N - 1} {\frac{1}{{\sqrt {{\gamma _{k + 1,i}}} }}\sum\limits_{j = 1}^N {\frac{1}{{\sqrt {{\gamma _{j,i}}} }}} } } \right]^ + }{\left[ {\sum\limits_{j = 1}^N {\frac{{{\gamma _{k + 1,i}}}}{{{\gamma _{j,i}}}}} } \right]^{ - 1}},$$ where $\lambda$ and $\lambda_i$ are found by enforcing the corresponding power constraints (STPC, LTTPC, and LTIPC). In the next section we evaluate the performance of the proposed power allocation algorithms.

Performance Evaluation
======================

This Section presents simulation results for the performance evaluation of the proposed power allocation schemes. In the simulations, we assume a multi-hop AF relaying system over Rayleigh fading channels under short- and long-term individual and total power constraints. In all of the following figures, EPA, ASY, IT-EXA, and IT-ASY refer, respectively, to the equal PA method, the PA scheme using high-SNR analysis (Section IV), the iterative power allocation algorithm using exact SNR analysis (Section III), and the iterative power allocation algorithm using high-SNR analysis (Section IV).\

Fig. \[fig2\] shows the average rate of the 2-hop OFDM relaying system with $N_{F}=64$ subcarriers versus the average SNR of the direct link, $\Gamma_0$, under long-term total power constraints. Balanced and unbalanced links are considered here. For modeling unbalanced links in a multi-hop system, we adopt the setup of [@poaf], in which it is assumed that the $k$th terminal, $k = 1,..., N+1,$ is located at the distance $d_k = \frac{2k}{(N+1)(N+2)}d$ from its previous terminal, where $d$ is the distance between the source and the destination.
Hence, using the Friis propagation formula [@rap], the average SNR of the $k$th hop is given by $\Gamma_k=(\frac{(N+1)(N+2)}{2k})^{\delta}\Gamma_0$, where $\delta$ is the path-loss exponent and $\Gamma_0$ is the average SNR of the direct link. We consider $\delta=4$ in this work. For balanced links, the inter-node distances are taken to be equal, and the average SNR of the $k$th hop is then given by $\Gamma_k=(N+1)^{\delta}\Gamma_0$. In Fig. \[fig2\], one can see that the iterative scheme (IT-EXA) acts as an upper bound for the other power allocation schemes. The iterative scheme using high-SNR analysis (IT-ASY) has acceptable performance with a lower computational complexity, because closed-form expressions for the $\beta_{k,i}$s and $\mu_i$s are available in the IT-ASY scheme. The non-iterative scheme using high-SNR analysis (ASY) also shows superior performance with respect to equal power allocation; however, the gap between the ASY and IT-EXA schemes is quite large because channel statistics are used instead of CSI in the ASY scheme.

Fig. \[fig3\] shows the average rate of the 3-hop OFDM relaying system with 64 subcarriers versus the average SNR of the direct link, $\Gamma_0$, under short-term power constraints. One can see that the performance of the iterative algorithm based on the exact SNR analysis is superior to that of the other schemes. Moreover, the performance of the non-iterative power allocation scheme is close to that of the low-complexity iterative algorithm, and both of these schemes provide a significant gain over equal power allocation. The same results are depicted in Fig. \[fig35\] for the 2-hop system under the long-term total power constraint. In Fig. \[fig4\], the outage probability of the 2-hop OFDM system ($N_{F}=64$) is depicted versus the average SNR of the direct link, $\Gamma_0$, for several power allocation schemes under long-term individual power constraints. The outage probability is defined as the probability that the instantaneous rate falls below 1 bit/sec/Hz. In Fig. \[fig44\], the average rate of the 2-hop OFDM system is depicted when the iterative PA algorithms, in both exact and asymptotic forms, are applied under LTTPC. In this figure, the index $i$ stands for the number of iterations considered, while $i=0$ refers to the uniform power allocation scenario. As an interesting observation, from this figure we can see that after a few iterations the average rate converges to its maximum value. We also note that, from a complexity perspective, the computational complexity of the proposed iterative algorithms is directly related to the complexity of the sub-problems solved in each iteration. As an example, for the 2-hop OFDM system considered in Fig. \[fig44\], the complexity of the IT-ASY and IT-EXA algorithms may be easily related to the complexity of solving the PA sub-problems among subchannels and relays (the corresponding waterfilling solutions for these sub-problems are presented in Sections III and IV). In particular, the complexity of waterfilling solutions has already been investigated in [@compx]. From Fig.
\[fig2\]-\[fig4\] we make the following observations: (i) the IT-EXA scheme provides the best performance among the considered methods, at the cost of a higher computational complexity; (ii) the IT-ASY scheme provides a data-rate performance which is very close to that of the IT-EXA scheme, while it enjoys a considerably lower complexity; (iii) the ASY scheme provides an acceptable level of data-rate performance given its considerably lower complexity and easier implementation, as it needs only channel statistics instead of CSI; (iv) the performance of the IT-ASY scheme in the high-SNR regime converges to that of the IT-EXA scheme, as verified in Fig. \[fig35\] and Fig. \[fig4\]; (v) for a multi-hop OFDM scenario, the proposed power allocation schemes greatly outperform the scheme with uniform power allocation.

![Average rate of 2-hop OFDM ($N_{F}=64$) relaying system under LTIPC. []{data-label="fig2"}](C_2h_IPC.eps){width="7in"} ![Average rate of 3-hop OFDM ($N_{F}=64$) relaying system under STPC. []{data-label="fig3"}](C_3h_SPC.eps){width="7in"} ![Average rate of 2-hop OFDM ($N_{F}=8$) relaying system under LTTPC. []{data-label="fig35"}](C_2h_LPC.eps){width="7in"} ![Outage probability of 2-hop OFDM ($N_{F}=64$) relaying system under LTIPC. []{data-label="fig4"}](O_2h_IPC.eps){width="7in"} ![Convergence of the iterative algorithm (exact and asymptotic) for PA in a 2-hop OFDM ($N_{F}=64$) relaying system. $i$ stands for the number of iterations. []{data-label="fig44"}](conv.eps){width="7in"}

Conclusion
==========

In this paper we considered the problem of power allocation in narrowband and broadband (OFDM) multi-hop relaying systems employing non-regenerative relays under different power constraints. We proposed exact and approximate design approaches depending on the wireless application demands and the network structure. In particular, aiming at maximizing the instantaneous multi-hop transmission rate, several power allocation algorithms have been developed in a unified framework, including: (i) an iterative power allocation method which provides an upper-bound performance; (ii) a relatively low-complexity iterative power allocation method; and (iii) a non-iterative power allocation scheme with acceptable performance in the high-SNR regime. Moreover, we provided a performance comparison with respect to an equal-power PA solution and quantified the rate performance loss incurred as the price of low complexity and low feedback overhead.

Appendix A: Convexity of the objective function in sub-problem 1 {#ap1conv}
===========================================================================

Here, we prove that the objective function in sub-problem 1 is convex. Let us denote this function by $\text{f}(\beta_k)$ as follows $$\begin{array}{l} \text{f}(\beta_k)=\sum_{k=1}^{N-1}{\ln(1+\frac{1}{\beta_k\gamma_{k+1}})}. \end{array}$$ We can easily obtain $$\label{43} \begin{array}{l} \frac{{\partial}^2 \text{f}(\beta_k)}{{\partial \beta_k}^2}=\frac{1+2\beta_k \gamma_{k+1}}{{(\beta_k(\beta_k\gamma_{k+1}+1))}^2}. \end{array}$$ Since the coefficient $\beta_k$ is between 0 and 1 (see Section II), and $\gamma_{k+1}$ takes positive values, the second derivative above is positive for any channel realization, and $\text{f}(\beta_k)$ is convex.

Appendix B: Concavity of the objective function in sub-problem 2 {#ap2conc}
===========================================================================

Here, we show that the objective function in sub-problem 2 (high-SNR regime) is concave. Let $\text{f}(\mu_i)$ denote the objective function, that is $$\begin{array}{l} \text{f}(\mu_i)=\sum_{k=1}^{N_{F}}{\log\left(1+\frac{\mu_i P_{T}}{N\sqrt{\pi}\sum_{k=1}^{N}{\frac{{N_0}_{k,i}}{{\lvert a_{k,i}\rvert}\sigma_{k,i}}}}\right)}.
\end{array}$$ We rewrite this function as follows $$\begin{array}{l} \text{f}(\mu_i)=\sum_{k=1}^{N_{F}}{\log\left(1+\mu_i g_i\right)} \end{array}$$ where $$\label{g} \begin{array}{l} \text{g}_i=\frac{P_{T}}{N\sqrt{\pi}\sum_{k=1}^{N}{\frac{{N_0}_{k,i}}{{\lvert a_{k,i}\rvert}\sigma_{k,i}}}}. \end{array}$$ Taking the second derivative with respect to $\mu_i$, one obtains $$\label{47} \begin{array}{l} \frac{{\partial}^2 \text{f}(\mu_i)}{{\partial \mu_i}^2}=-\frac{\log(e){g_i}^2}{{(1+\mu_i g_i)}^2}. \end{array}$$ As stated in Section II, $\mu_i$ is between 0 and 1 for any channel realization. Hence, the second derivative above is negative for any channel realization, and the function $\text{f}(\mu_i)$ is concave.

Acknowledgment {#acknowledgment .unnumbered}
==============

A. Azari would like to thank H. Khodakarami, R. Hemmati, A. Behnad, and R. Parseh for their helpful discussions on this work.
{% load graph_tags %}
<script type="text/javascript">
    $(function () {
        $('#{{ chart.get_html_id }}').highcharts({
            chart: {{ chart.get_chart_json|safe }},
            title: {{ chart.get_title_json|safe }},
            subtitle: {{ chart.get_subtitle_json|safe }},
            xAxis: {{ chart.get_x_axis_json|safe }},
            yAxis: {{ chart.get_y_axis_json|safe }},
            series: {{ chart.get_series_json|safe }},
            tooltip: {{ chart.get_tooltip_json|safe }},
            plotOptions: {{ chart.get_options_json|safe }}
        });
    });
</script>
Q: QnAMaker Bot with LUIS Bot merge Okay so I have the LUIS Bot kick off the conversation in my Post method in MessageController.cs: await Conversation.SendAsync(activity, () => new LUISDialog()); When the bot detects the None intent, it makes a call to the QnA bot and forwards the message to it: await context.Forward(new QnABot(), Whatever, result.Query, CancellationToken.None); Here's my problem: when the QnA bot is started, the MessageReceivedAsync method in the QnAMakerDialog.cs class throws an exception on the parameter "IAwaitable<IMessageActivity> argument": [Microsoft.Bot.Builder.Internals.Fibers.InvalidTypeException] = {"invalid type: expected Microsoft.Bot.Connector.IMessageActivity, have String"} when trying to access it via --> var message = await argument; I don't understand what the problem is. I'm typing simple plain text into the QnA bot, and my knowledge base has no problem returning a response when I tried it on the website. I'm not sure what's happening between the time StartAsync is called and MessageReceivedAsync is called that is causing the parameter 'argument' to fail. A: I think the problem is that you are sending a string (result.Query) and the QnAMakerDialog.cs is expecting an IMessageActivity. Try updating your context.Forward call to: var msg = context.MakeMessage(); msg.Text = result.Query; await context.Forward(new QnABot(), Whatever, msg, CancellationToken.None); Alternatively, you can update the signature of the None intent method to include the original IMessageActivity: [LuisIntent("None")] public async Task None(IDialogContext context, IAwaitable<IMessageActivity> activity, LuisResult result) { var msg = await activity; await context.Forward(new QnABot(), Whatever, msg, CancellationToken.None); }
Q: Partitioning an infinite set into two equinumerous subsets Let $X$ be an infinite set. Then there exists a partition $\lbrace A, B \rbrace$ of $X$ such that $A, B, X$ are equinumerous. Can you prove it in $\mathrm{ZFC}$? Can you prove it in $\mathrm{ZF}$? A: We can prove it in $\operatorname{ZFC}$, but not in $\operatorname{ZF}$: In $\operatorname{ZFC}$ there is some initial ordinal $\kappa$ and a bijection $f \colon \kappa \to X$ and also some bijection $g \colon \kappa \times 2 \to \kappa$ (the function $g$ exists also if we drop choice, but $f$ may not exist). Let $h = f \circ g$, $A = \{ h(\xi, 0) \mid \xi < \kappa \}$ and $B = \{ h(\xi, 1) \mid \xi < \kappa \}$. Clearly $A \cong B \cong X \cong \kappa$. On the other hand, if we drop choice, there may be some infinite, but Dedekind finite set $X$, and those sets don't even allow for a proper subset $A$ that is in bijection with $X$.
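As a concrete instance of the $\operatorname{ZFC}$ construction in the countable case: if $X$ is countably infinite, fix a bijection $f \colon \mathbb{N} \to X$ and take $A = \{ f(2n) \mid n \in \mathbb{N} \}$ and $B = \{ f(2n+1) \mid n \in \mathbb{N} \}$; both parts are countably infinite, hence equinumerous with $X$.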
Active Record
=============

Although Yii DAO can handle virtually any database-related task, chances are that we would spend 90% of our time writing SQL statements that perform the common CRUD (create, read, update and delete) database operations. It is also difficult to maintain our code when it is mixed with various SQL statements. To solve these problems, we can use Active Record.

Active Record (AR) is a popular Object-Relational Mapping (ORM) technique. Each AR class represents a database table whose attributes are represented as properties of the AR class, and an AR instance represents a single row in that table. Common CRUD operations are implemented as methods of the AR class. As a result, we can access our data in a more object-oriented way. For example, we can use the following code to insert a new row into the `tbl_post` table:

~~~
[php]
$post=new Post;
$post->title='sample title';
$post->content='sample post content';
$post->save();
~~~

In this section, we describe how to define an AR class and use it to perform CRUD operations. In the next section, we will show how to use AR to deal with relationships between database tables. For convenience, we use the following table for the examples in this section. Note: if you are using a MySQL database, you should replace `AUTOINCREMENT` with `AUTO_INCREMENT` in the following SQL.

~~~
[sql]
CREATE TABLE tbl_post (
    id INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
    title VARCHAR(128) NOT NULL,
    content TEXT NOT NULL,
    create_time INTEGER NOT NULL
);
~~~

> Note: AR is not meant to solve every database-related problem. It is best used for performing simple SQL queries (CRUD) and serving as the model in the application. It is recommended not to use it for complex SQL queries; DAO should be used for that purpose.

Establishing a Database Connection
----------------------------------

AR relies on a database connection to perform database-related operations. By default, it assumes that the `db` application component provides a [CDbConnection] instance which serves as the database connection. The following application configuration shows an example:

~~~
[php]
return array(
    'components'=>array(
        'db'=>array(
            'class'=>'system.db.CDbConnection',
            'connectionString'=>'sqlite:path/to/dbfile',
            // turn on schema caching to reduce overhead
            // 'schemaCachingDuration'=>3600,
        ),
    ),
);
~~~

> Tip: Because AR relies on the metadata about tables to determine the column information, it takes time to read and analyze this metadata.
If the schema of your database tables does not change, you should turn on schema caching by configuring the [CDbConnection::schemaCachingDuration] property to a value greater than 0.

Support for AR is limited by the DBMS. Currently, the following DBMS are supported:

- [MySQL 4.1 or later](http://www.mysql.com)
- [PostgreSQL 7.3 or later](http://www.postgres.com)
- [Microsoft SQL Server 2000 or later](http://www.microsoft.com/sqlserver/)
- [Oracle](http://www.oracle.com)

> Note: Support for SQL Server has been available since version 1.0.4, and support for Oracle since version 1.0.5.

If you want to use an application component other than `db`, or if you want to work with multiple databases using AR, you should override [CActiveRecord::getDbConnection]. The [CActiveRecord] class is the base class for all AR classes.

> Tip: There are two ways to work with multiple databases in AR. If the database schemas are different, you may create different base AR classes with different implementations of [getDbConnection|CActiveRecord::getDbConnection]. Another way is to dynamically change the static variable [CActiveRecord::db] when needed.

Defining an AR Class
--------------------

To access a database table, we first need to define an AR class by extending [CActiveRecord]. Each AR class represents a single database table, and an AR instance represents a row in that table. The following example shows the minimal code needed for the AR class representing the `tbl_post` table.

~~~
[php]
class Post extends CActiveRecord
{
    public static function model($className=__CLASS__)
    {
        return parent::model($className);
    }

    public function tableName()
    {
        return 'tbl_post';
    }
}
~~~

> Tip: Because AR classes are referenced in many places, we can import the whole directory containing the AR classes, instead of importing them one by one. For example, if all our AR classes are under the `protected/models` directory, we can configure the application as follows:
>
> ~~~
> [php]
> return array(
>     'import'=>array(
>         'application.models.*',
>     ),
> );
> ~~~

By default, the name of the AR class is the same as the name of the database table. Override the [tableName|CActiveRecord::tableName] method if they differ. The [model|CActiveRecord::model] method is declared as such for every AR class (to be explained shortly).
> Info: To use the [table prefix](/doc/guide/database.dao#using-table-prefix) feature available since version 1.1.0, the [tableName|CActiveRecord::tableName] method of an AR class should be overridden as follows,
>
> ~~~
> [php]
> public function tableName()
> {
>     return '{{post}}';
> }
> ~~~

That is, instead of returning the full table name, we return the table name without the prefix and enclose it in double curly brackets.

Column values of a table row can be accessed as properties of the corresponding AR instance. For example, the following code sets the `title` column (attribute):

~~~
[php]
$post=new Post;
$post->title='a sample post';
~~~

Although we never explicitly declare the `title` property in the `Post` class, we can still access it in the code above. This is because `title` is a column in the `tbl_post` table, and the AR class makes it accessible as a property with the help of the PHP `__get` magic method. An exception will be thrown if we attempt to access a column that does not exist in the table.

> Info: In this guide, we use lower case for all table and column names. This is because different DBMS handle case sensitivity differently. For example, PostgreSQL treats column names as case-insensitive by default, and we must quote a column name in a query condition if the name contains a mix of upper- and lower-case letters. Using lower-case names only avoids this problem.

AR relies on well-defined primary keys of tables. If a table does not have a primary key, the corresponding AR class must specify the primary key by overriding the `primaryKey` method, as shown in the following example,

~~~
[php]
public function primaryKey()
{
    return 'id';
    // for a composite primary key, return an array like the following
    // return array('pk1', 'pk2');
}
~~~

Creating a Record
-----------------

To insert a new row into a database table, we create a new instance of the corresponding AR class, set its properties associated with the table columns, and call the [save|CActiveRecord::save] method to finish the insertion.

~~~
[php]
$post=new Post;
$post->title='sample title';
$post->content='sample post content';
$post->create_time=time();
$post->save();
~~~

If the table's primary key is an auto-incremented numeric value, the AR instance will contain the updated primary key after the insertion. In the example above, the `id` property reflects the primary key value of the newly created post, even though we never change it explicitly.
If a column is defined with a static default value (e.g. a string or a number) in the table schema, the corresponding property of the AR instance will hold that default value after a new record is created. One way to change this default value is by explicitly declaring the property in the AR class with a new default value:

~~~
[php]
class Post extends CActiveRecord
{
    public $title='please enter a title';
    ......
}

$post=new Post;
echo $post->title;  // this would display: please enter a title
~~~

Starting from version 1.0.2, an attribute can be assigned a value of [CDbExpression] type before the record is saved (either inserted or updated) to the database. For example, in order to save the current timestamp returned by the MySQL `NOW()` function, we can use the following code:

~~~
[php]
$post=new Post;
$post->create_time=new CDbExpression('NOW()');
// $post->create_time='NOW()'; will not work, because
// 'NOW()' would be treated as a string rather than a function
$post->save();
~~~

> Tip: While AR allows us to perform database operations without writing cumbersome SQL statements, we often want to know which SQL statements are executed by AR behind the scenes. This can be achieved by turning on the [logging](/doc/guide/topics.logging) feature of Yii. For example, we can turn on [CWebLogRoute] in the application configuration, and we will see the executed SQL statements displayed at the end of each page. Since version 1.0.5, we can set [CDbConnection::enableParamLogging] to `true` in the application configuration so that the parameter values bound to the SQL statements are also logged.

Reading Records
---------------

To read data from a database table, we call one of the following `find` methods.

~~~
[php]
// find the first row satisfying the specified condition
$post=Post::model()->find($condition,$params);
// find the row with the specified primary key
$post=Post::model()->findByPk($postID,$condition,$params);
// find the row with the specified attribute values
$post=Post::model()->findByAttributes($attributes,$condition,$params);
// find the first row using the specified SQL statement
$post=Post::model()->findBySql($sql,$params);
~~~

In the above examples, we call the `find` methods via `Post::model()`. Remember that the static method `model()` is required for every AR class. The method returns an AR instance that is used to access class-level methods (something similar to static class methods) in an object-oriented fashion.

If a `find` method finds a row satisfying the query conditions, it will return a `Post` instance whose properties contain the column values of that table row.
ืœืื—ืจ ืžื›ืŸ ืื ื• ื™ื›ื•ืœื™ื ืœืงืจื•ื ืืช ื”ืขืจื›ื™ื ืฉื ื˜ืขื ื• ื‘ืฆื•ืจื” ื”ืจื’ื™ืœื” ื‘ื” ืื ื• ื ื™ื’ืฉื™ื ืœืžืืคื™ื™ื ื™ื ืฉืœ ื”ืžื—ืœืงื”, ืœื“ื•ื’ืžื, `;echo $post-ยปtitle`. ืžืชื•ื“ืช ื” `find` ืชื—ื–ื™ืจ null ื‘ืžื™ื“ื” ื•ืœื ื ืžืฆื ืฉื•ื ื“ื‘ืจ ื‘ืžืกื“ ื”ื ืชื•ื ื™ื ื”ืชื•ืื ืœืชื ืื™ื ืฉื”ื•ืขื‘ืจื•. ื‘ืขืช ื”ืงืจื™ืื” ืœ `find`, ืื ื• ืžืฉืชืžืฉื™ื ื‘ `condition$` ื• `params$` ื›ื“ื™ ืœื”ื’ื“ื™ืจ ืืช ื”ืชื ืื™ื ืฉืœ ื”ืฉืื™ืœืชื”. ื›ืืŸ `condition$` ื™ื›ื•ืœ ืœื”ื•ื•ืช ืกื˜ืจื™ื ื’ ื”ืžื™ื™ืฆื’ ืืช ื”ืกืขื™ืฃ `WHERE` ื‘ืฉืื™ืœืชืช SQL, ื• `params$` ื”ื™ื ื• ืžืขืจืš ืฉืœ ืคืจืžื˜ืจื™ื ืฉืขืจื›ื™ื”ื ืฆืจื™ื›ื™ื ืœื”ืชื—ื ื‘ืžืคืชื—ื•ืช ืฉื”ื•ื’ื“ืจื• ืžืจืืฉ ื‘ `condition$`. ืœื“ื•ื’ืžื, ~~~ [php] // ืžืฆื ืืช ื”ืจืฉื•ืžื” ืื™ืคื” ืฉ postID = 10 $post=Post::model()-ยปfind('postID=:postID', array(':postID'=ยป10)); ~~~ ยป Note|ื”ืขืจื”: ื‘ื“ื•ื’ืžื ืœืžืขืœื”, ืื ื• ื ืฆื˜ืจืš ืœื‘ืฆืข ื—ื™ื˜ื•ื™ ืœื™ื™ื—ื•ืก ืฉืœ ื”ืขืžื•ื“ื” `postID` ื‘ืขื‘ื•ืจ DBMS ืžืกื•ื™ื™ืžื™ื. ืœื“ื•ื’ืžื, ื‘ืžื™ื“ื” ื•ืื ื—ื ื• ืžืฉืชืžืฉื™ื ื‘ PostgreSQL, ืื ื• ื ืฆื˜ืจืš ืœื›ืชื•ื‘ ืืช ื”ืชื ืื™ ื‘ืฆื•ืจื” ื”ื‘ืื” `postID"=:postID"`, ืžืื—ืจ ื• PostgreSQL ื›ื‘ืจื™ืจืช ืžื—ื“ืœ ื™ืชื™ื™ื—ืก ืœืฉืžื•ืช ื”ืขืžื•ื“ื•ืช ืœืœื ืจื’ื™ืฉื•ืช ืœืื•ืชื™ื•ืช ื’ื“ื•ืœื•ืช-ืงื˜ื ื•ืช. ื›ืžื• ื›ืŸ ืื ื• ื™ื›ื•ืœื™ื ืœื”ืฉืชืžืฉ ื‘ `condition$` ื›ื“ื™ ืœื”ื’ื“ื™ืจ ืชื ืื™ื ืžื•ืจื›ื‘ื™ื ื™ื•ืชืจ. ื‘ืžืงื•ื ืกื˜ืจื™ื ื’, ืื ื• ื ื•ืชื ื™ื ืœ `condition$` ืœื”ื™ื•ืช ืื•ื‘ื™ื™ืงื˜ ืฉืœ [CDbCriteria], ื”ืžืืคืฉืจ ืœื ื• ืœื”ื’ื“ื™ืจ ืชื ืื™ื ื ื•ืกืคื™ื ืžืœื‘ื“ ืกืขื™ืฃ ื” `WHERE`. ืœื“ื•ื’ืžื, ~~~ [php] $criteria=new CDbCriteria; $criteria-ยปselect='title'; // ื‘ื—ืจ ืจืง ืืช ื”ืขืžื•ื“ื” 'title' $criteria-ยปcondition='postID=:postID'; $criteria-ยปparams=array(':postID'=ยป10); $post=Post::model()-ยปfind($criteria); // $params ืื™ื ื• ื ื—ื•ืฅ ื›ืืŸ ~~~ ื–ื›ื•ืจ, ืฉื‘ืขืช ื”ืฉื™ืžื•ืฉ ื‘ [CDbCriteria] ื‘ืชื•ืจ ื”ืชื ืื™ ืฉืœ ื”ืฉืื™ืœืชื”, ื”ืคืจืžื˜ืจ `params$` ืื™ื ื• ื ื—ื•ืฅ ืžืื—ืจ ื•ื ื™ืชืŸ ืœื”ื’ื“ื™ืจ ืื•ืชื• ื‘ืขื–ืจืช [CDbCriteria], ื›ืคื™ ืฉืžื•ืฆื’ ืœืžืขืœื”. ื›ืžื• ื›ืŸ, ื‘ืžืงื•ื ืฉื™ืžื•ืฉ ื‘ [CDbCriteria] ื ื™ืชืŸ ืœื”ืขื‘ื™ืจ ืžืขืจืš ืœืžืชื•ื“ืช ื”-`find`. ืฉืžื•ืช ื”ืžืคืชื—ื•ืช ื•ื”ืขืจื›ื™ื ืžืชื™ื™ื—ืกื•ืช ืœืžืืคื™ื™ื ื™ื ืฉืœ ื”ืชื ืื™ื ื•ืขืจื›ื™ื”ื, ื‘ื”ืชืื. ื ื™ืชืŸ ืœืฉื›ืชื‘ ืืช ื”ื“ื•ื’ืžื ืœืžืขืœื” ื‘ืงื•ื“ ื”ื‘ื, ~~~ [php] $post=Post::model()-ยปfind(array( 'select'=ยป'title', 'condition'=ยป'postID=:postID', 'params'=ยปarray(':postID'=ยป10), )); ~~~ ยป Info|ืžื™ื“ืข: ื›ืฉื”ืชื ืื™ ืฉืœ ื”ืฉืื™ืœืชื” ืขื•ืกืง ื‘ื”ืชืืžืช ืขืžื•ื“ื•ืช ื•ื”ืขืจื›ื™ื ืฉื”ื•ื’ื“ืจื•, ืื ื• ื™ื›ื•ืœื™ื ืœื”ืฉืชืžืฉ ื‘ [findByAttributes()|CActiveRecord::findByAttributes]. ืื ื• ื ื•ืชื ื™ื ืœืคืจืžื˜ืจ `attributes$` ืœื”ื™ื•ืช ืžืขืจืš ืฉืœ ืฉืžื•ืช ื”ืขืžื•ื“ื•ืช ื‘ืชื•ืจ ื”ืžืคืชื—ื•ืช ื•ืขืจืš ื›ืœ ืžืคืชื— ื”ื™ื ื• ื”ืขืจืš ืฉืื•ืชื• ืื ื• ืจื•ืฆื™ื ืœื”ืชืืžื™ื ื‘ืฉืื™ืœืชื”. ื‘ื›ืžื” ืคืจื™ื™ืžื•ื•ืจืงื™ื (Frameworks), ืคืขื•ืœื” ื–ื• ื ื™ืชื ืช ืœื‘ื™ืฆื•ืข ืขืœ ื™ื“ื™ ืงืจื™ืื” ืœืžืชื•ื“ื•ืช ื›ืžื• `findByNameAndTitle`. ืœืžืจื•ืช ืฉื’ื™ืฉื” ื–ื• ื ืจืื™ืช ืžื•ืฉื›ืช, ื”ื™ื ื’ื•ืจืžืช ื‘ืจื•ื‘ ื”ืžืงืจื™ื ืœื‘ืœื‘ื•ืœ, ืงื•ื ืคืœื™ืงื˜ื™ื ื•ื‘ืขื™ื•ืช ืฉืœ ืจื’ื™ืฉื•ืช ืœืื•ืชื™ื•ืช ื’ื“ื•ืœื•ืช-ืงื˜ื ื•ืช ื‘ืฉืžื•ืช ื”ืขืžื•ื“ื•ืช. 
When multiple rows of data match the specified query conditions, we can retrieve them all together using the following `findAll` methods, each of which has a corresponding `find` method, as we have already described.

~~~
[php]
// find all rows satisfying the specified condition
$posts=Post::model()->findAll($condition,$params);
// find all rows with the specified primary keys
$posts=Post::model()->findAllByPk($postIDs,$condition,$params);
// find all rows with the specified attribute values
$posts=Post::model()->findAllByAttributes($attributes,$condition,$params);
// find all rows using the specified SQL statement
$posts=Post::model()->findAllBySql($sql,$params);
~~~

If nothing matches the query conditions, `findAll` will return an empty array. This is different from the `find` methods, which return null when nothing is found.

Besides the `find` and `findAll` methods described above, the following methods are also provided for convenience:

~~~
[php]
// get the number of rows satisfying the specified condition
$n=Post::model()->count($condition,$params);
// get the number of rows using the specified SQL statement
$n=Post::model()->countBySql($sql,$params);
// check if there is at least one row satisfying the specified condition
$exists=Post::model()->exists($condition,$params);
~~~

Updating a Record
---------------

After an AR instance is populated with column values, we can change them and save them back to the database table.

~~~
[php]
$post=Post::model()->findByPk(10);
$post->title='new post title';
$post->save();  // save the changes to the database
~~~

As we can see, we use the same [save()|CActiveRecord::save] method to perform both insertion and updating operations. If an AR instance is created using the `new` operator, calling [save()|CActiveRecord::save] will insert a new row into the database table; if the AR instance is the result of a method such as `find` or `findAll`, calling [save()|CActiveRecord::save] will update the existing row in the table. In fact, we can use [CActiveRecord::isNewRecord] to tell whether an AR instance is new or not.

It is also possible to update one or more rows in a database table without loading them first.
AR provides the following convenient methods for this purpose:

~~~
[php]
// update the rows matching the specified condition
Post::model()->updateAll($attributes,$condition,$params);
// update the rows matching the specified condition and primary key(s)
Post::model()->updateByPk($pk,$attributes,$condition,$params);
// update counter columns in the rows satisfying the specified condition
Post::model()->updateCounters($counters,$condition,$params);
~~~

In the above, `$attributes` is an array of column names and their values; `$counters` is an array of column names and their incremental values; and `$condition` and `$params` are as described in the previous subsection.

Deleting a Record
---------------

We can delete a row of data if an AR instance has been populated with this row from the database table.

~~~
[php]
$post=Post::model()->findByPk(10);  // assuming there is a post with ID 10
$post->delete();  // delete the row from the database table
~~~

Note that after the deletion, the AR instance remains unchanged, but the corresponding row in the database table has been removed.

The following convenient methods are provided to delete rows without the need of loading them first:

~~~
[php]
// delete the rows matching the specified condition
Post::model()->deleteAll($condition,$params);
// delete the rows matching the specified condition and primary key(s)
Post::model()->deleteByPk($pk,$condition,$params);
~~~

Data Validation
---------------

When inserting or updating a record, we often need to check whether the submitted values comply with certain rules. This is especially necessary if the values are provided by end users. In general, we should never trust anything coming from the client side.

AR performs data validation automatically when [save()|CActiveRecord::save] is being called. The validation is based on the rules specified in the [rules()|CModel::rules] method of the AR class. For more details about specifying validation rules, refer to the section about [Declaring Validation Rules](/doc/guide/form.model#declaring-validation-rules). Below is the typical workflow needed when saving a record:

~~~
[php]
if($post->save())
{
    // data is valid and the record has been inserted/updated
}
else
{
    // data is invalid. call getErrors() to retrieve the returned error messages
}
~~~

When the data for inserting or updating a record is submitted by end users via an HTML form, we need to assign it to the corresponding AR properties. We can do so as follows:

~~~
[php]
$post->title=$_POST['title'];
$post->content=$_POST['content'];
$post->save();
~~~

If there are many columns, we would see a long list of such assignments.
To simplify the assignment process, we can use the [attributes|CActiveRecord::attributes] property, as shown below. More details can be found in the [Securing Attribute Assignments](/doc/guide/form.model#securing-attribute-assignments) section and the [Creating Action](/doc/guide/form.action) section.

~~~
[php]
// assuming $_POST['Post'] is an array whose keys are the table's column names
// and whose values are the corresponding column values
$post->attributes=$_POST['Post'];
$post->save();
~~~

Comparing Records
-----------------

Like table rows, AR instances are uniquely identified by their primary key values. Therefore, to compare two AR instances, we merely need to compare their primary key values, assuming they belong to the same AR class. A simpler way, however, is to call [CActiveRecord::equals()].

> Info: Unlike AR implementations in other frameworks, Yii supports composite primary keys in its AR classes. A composite primary key consists of two or more columns. Correspondingly, a composite primary key value is represented as an array in Yii. The [primaryKey|CActiveRecord::primaryKey] property gives the primary key value of an AR instance.

Customization
-------------

[CActiveRecord] provides a few methods that can be overridden by its child classes to customize their workflow and behavior.

- [beforeValidate|CModel::beforeValidate] and [afterValidate|CModel::afterValidate]: these are invoked before and after data validation is performed.
- [beforeSave|CActiveRecord::beforeSave] and [afterSave|CActiveRecord::afterSave]: these are invoked before and after saving an AR instance.
- [beforeDelete|CActiveRecord::beforeDelete] and [afterDelete|CActiveRecord::afterDelete]: these are invoked before and after an AR instance is deleted.
- [afterConstruct|CActiveRecord::afterConstruct]: this is invoked every time a new AR instance is created using the `new` operator.
- [beforeFind|CActiveRecord::beforeFind]: this is invoked before one of the `find` methods is used (e.g., `find`, `findAll`). Available since version 1.0.9.
- [afterFind|CActiveRecord::afterFind]: this is invoked after an AR instance is created as the result of a query.

Using Transaction with AR
-------------------------

Every AR instance contains a property named [dbConnection|CActiveRecord::dbConnection], which represents a [CDbConnection] instance.
We can therefore use the [transaction](/doc/guide/database.dao#using-transactions) feature provided by Yii DAO when it is needed while working with AR:

~~~
[php]
$model=Post::model();
$transaction=$model->dbConnection->beginTransaction();
try
{
    // find and save are two steps which may be intervened by another request
    // therefore we use a transaction to ensure consistency
    $post=$model->findByPk(10);
    $post->title='new post title';
    $post->save();
    $transaction->commit();
}
catch(Exception $e)
{
    $transaction->rollBack();
}
~~~

Named Scopes
------------

> Note: Support for named scopes has been available since version 1.0.5. The original idea of named scopes came from Ruby on Rails.

A *named scope* represents a *named* query criterion that can be combined with other named scopes and applied to an active record query.

Named scopes are mainly declared in the [CActiveRecord::scopes] method as name-criterion pairs. The following code declares two named scopes, `published` and `recently`, in the `Post` model class:

~~~
[php]
class Post extends CActiveRecord
{
    ......
    public function scopes()
    {
        return array(
            'published'=>array(
                'condition'=>'status=1',
            ),
            'recently'=>array(
                'order'=>'create_time DESC',
                'limit'=>5,
            ),
        );
    }
}
~~~

Each named scope is declared as an array that can be used to initialize a [CDbCriteria] instance. For example, the `recently` scope sets the `order` property to `create_time DESC` and the `limit` property to 5, which translates into a query criterion that returns the five most recent posts.

Named scopes are mostly used as modifiers of calls to the various `find` methods. Several named scopes may be chained together, resulting in a much more restrictive query that includes more conditions. For example, to find the recently published posts, we can use the following code:

~~~
[php]
$posts=Post::model()->published()->recently()->findAll();
~~~

In general, named scopes must appear to the left of (i.e., before) a `find` method call. Each of them provides a query criterion, which is combined with the other criteria, including the one passed to the `find` method call, forming one final criterion. The net effect is like adding a list of filters to a query.

Since version 1.0.6, named scopes can also be used with the `update` and `delete` methods. For example, the following code would delete all recently published posts:

~~~
[php]
Post::model()->published()->recently()->delete();
~~~

> Note: Named scopes can only be used with class-level methods. That is, the method must be called via `ClassName::model()`.

### Parameterized Named Scopes

Named scopes can be parameterized.
ืœื“ื•ื’ืžื, ืื ื• ื ืจืฆื” ืœืฉื ื•ืช ืืช ื”ืขืจืš ืฉืœ ื”ื”ื•ื“ืขื•ืช ื‘ืžืจื—ื‘ ื”ืžื•ื’ื“ืจ ื‘ืฉื `recently`. ื‘ื›ื“ื™ ืœืขืฉื•ืช ื–ืืช, ื‘ืžืงื•ื ืœื”ื’ื“ื™ืจ ืืช ืฉื ื”ืžืจื—ื‘ ื‘ืžืชื•ื“ื” [CActiveRecord::scopes], ืื ื• ืฆืจื™ื›ื™ื ืœื”ื’ื“ื™ืจ ืžืชื•ื“ื” ืฉืฉืžื” ื”ื•ื ื–ื”ื” ืœืฉื ื”ืžืจื—ื‘ ื‘ื• ืื ื• ืžืฉืชืžืฉื™ื: ~~~ [php] public function recently($limit=5) { $this-ยปgetDbCriteria()-ยปmergeWith(array( 'order'=ยป'create_time DESC', 'limit'=ยป$limit, )); return $this; } ~~~ ืœืื—ืจ ืžื›ืŸ, ืื ื• ืžืฉืชืžืฉื™ื ื‘ื‘ื™ื˜ื•ื™ ื”ื‘ื ื‘ื›ื“ื™ ืœืงื‘ืœ ืืช ืฉืœื•ืฉืช ื”ื”ื•ื“ืขื•ืช ืฉืคื•ืจืกืžื• ืœืื—ืจื•ื ื”: ~~~ [php] $posts=Post::model()-ยปpublished()-ยปrecently(3)-ยปfindAll(); ~~~ ื‘ืžื™ื“ื” ื•ืœื ื ืขื‘ื™ืจ ืืช ื”ืกืคืจื” 3 ื›ืคืจืžื˜ืจ ื‘ืžืชื•ื“ื” `recently` ื‘ืงื•ื“ ืœืžืขืœื”, ืื ื• ื ืงื‘ืœ ืืช ื—ืžืฉืช ื”ื”ื•ื“ืขื•ืช ืฉืคื•ืจืกืžื• ืœืื—ืจื•ื ื” ื›ื‘ืจื™ืจืช ืžื—ื“ืœ. ### ืžืจื—ื‘ื™ื ืžื•ื’ื“ืจื™ื ื›ื‘ืจื™ืจืช ืžื—ื“ืœ ื‘ืžื—ืœืงื” ืฉืœ ืžื•ื“ืœ ื ื™ืชืŸ ืœื”ื’ื“ื™ืจ ืžืจื—ื‘ ืžื•ื’ื“ืจ ืืฉืจ ื™ืชื•ื•ืกืฃ ืœื›ืœ ื”ืฉืื™ืœืชื•ืช ืฉืื ื• ืžืจื™ืฆื™ื ื‘ืขื–ืจืช ื”ืžื—ืœืงื” (ื›ื•ืœืœ ืงื™ืฉื•ืจื™ื ืœื˜ื‘ืœืื•ืช ืื—ืจื•ืช). ืœื“ื•ื’ืžื, ืืชืจ ื”ืชื•ืžืš ื‘ื›ืžื” ืฉืคื•ืช ื™ืจืฆื” ืœื”ืฆื™ื’ ืชื•ื›ืŸ ื‘ืื•ืชื” ืฉืคื” ืฉื”ืžืฉืชืžืฉ ื›ืจื’ืข ืฆื•ืคื” ื‘ื”. ืžืื—ืจ ื•ื™ืฉื ื ืฉืื™ืœืชื•ืช ืจื‘ื•ืช ื‘ื ื•ื’ืข ืœืชื•ื›ืŸ ื”ืืชืจ, ืื ื• ื™ื›ื•ืœื™ื ืœื”ื’ื“ื™ืจ ืžืจื—ื‘ ืžื•ื’ื“ืจ ื‘ืจื™ืจืช ืžื—ื“ืœ ื‘ื›ื“ื™ ืœืคืชื•ืจ ื‘ืขื™ื” ื–ื•. ื‘ื›ื“ื™ ืœื‘ืฆืข ื–ืืช, ืื ื• ื ืฆื˜ืจืš ืœื“ืจื•ืก ืืช ื”ืžืชื•ื“ื” [CActiveRecord::defaultScope] ื‘ืฆื•ืจื” ื”ื‘ืื”, ~~~ [php] class Content extends CActiveRecord { public function defaultScope() { return array( 'condition'=ยป"language='".Yii::app()-ยปlanguage."'", ); } } ~~~ ื›ืขืช, ื”ื‘ื™ื˜ื•ื™ ื”ื‘ื ืื•ื˜ื•ืžื˜ื™ืช ืžืฉืชืžืฉ ื‘ืชื ืื™ ื›ืคื™ ืฉื”ื•ื’ื“ืจ ืœืžืขืœื”: ~~~ [php] $contents=Content::model()-ยปfindAll(); ~~~ ื™ืฉ ืœื–ื›ื•ืจ ืฉืฉื™ืžื•ืฉ ื‘ืžืจื—ื‘ื™ื ืžื•ื’ื“ืจื™ื ื›ื‘ืจื™ืจืช ืžื—ื“ืœ ืชืงืฃ ืจืง ืœืฉืื™ืœืชื•ืช ืžืกื•ื’ `SELECT`. ื”ื•ื ืื™ื ื• ืชืงืฃ ืœื’ื‘ื™ `INSERT`, `UPDATE` ื• `DELETE` ื•ืคืฉื•ื˜ ื™ืชืขืœื ืžื”ื. ยซdiv class="revision"ยป$Id: database.ar.txt 1681 2010-01-08 03:04:35Z qiang.xue $ยซ/divยป
Chris Fox, CP24.com Peel Regional Police say that two young girls who had been missing since Friday night have been found safe and sound. The girls were reported missing after being last seen at an address in the Key Court area of Mississauga at around 8:30 p.m. on Friday. Prior to being located on Saturday morning, police said that both girls were believed to be somewhere in downtown Toronto. In a message posted to Twitter at around noon, Peel police said that the girls are safe and thanked the community for its help in the investigation.
Q: scipy.minimize - "TypeError: 'numpy.float64' object is not callable"

Running the scipy.minimize function I get "TypeError: 'numpy.float64' object is not callable". Specifically during the execution of:

.../scipy/optimize/optimize.py", line 292, in function_wrapper
return function(*(wrapper_args + args))

I already looked at previous similar topics here, and usually this problem occurs because the first input parameter of .minimize is not a function. I have difficulties figuring it out, because "a" is a function. What do you think?

### "data" is a pandas data frame of float values
### "w" is a numpy float array i.e. [0.11365704 0.00886848 0.65302202 0.05680696 0.1676455 ]

def a(data, w):
    ### Return a negative float value from position [2] of a numpy array of float values calculated via the "b" function i.e. -0.3632965490830499
    return -b(data, w)[2]

constraint = ({'type': 'eq', 'fun': lambda x: np.sum(x) - 1})
### i.e ((0, 1), (0, 1), (0, 1), (0, 1), (0, 1))
bound = tuple((0, 1) for x in range(len(symbols)))

opts = scipy.minimize(a(data, w), len(symbols) * [1. / len(symbols),], method = 'SLSQP', bounds = bound, constraints = constraint)

A: Short answer

It should instead be:

opts = scipy.minimize(a, len(symbols) * [1. / len(symbols),], args=(w,),
                      method='SLSQP', bounds=bound, constraints=constraint)

Details

a(data, w) is not a function, it's a function call. In other words a(data, w) effectively has the value and type of the return value of the function a. minimize needs the actual function without the call (ie without the parentheses (...) and everything in-between), as its first parameter.

From the scipy.optimize.minimize docs:

scipy.optimize.minimize(fun, x0, args=(), method=None, jac=None, hess=None, hessp=None, bounds=None, constraints=(), tol=None, callback=None, options=None)
...
fun : callable
The objective function to be minimized. Must be in the form f(x, *args). The optimizing argument, x, is a 1-D array of points, and args is a tuple of any additional fixed parameters needed to completely specify the function.
...
args : tuple, optional
Extra arguments passed to the objective function...

So, assuming w is fixed (at least with respect to your desired minimization), you would pass it to minimize via the args parameter, as I've done above.
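For completeness, here is a minimal self-contained sketch of the corrected pattern. The b function and the data below are toy stand-ins for the real ones (which are not shown in the question), and the argument order of a is flipped so that the optimized variable comes first, as minimize expects:

import numpy as np
from scipy.optimize import minimize

data = np.arange(25, dtype=float).reshape(5, 5)  # toy stand-in for the real DataFrame

def b(data, w):
    # pretend statistics; index [2] is the quantity we want to maximize
    r = data @ w
    return np.array([r.mean(), r.std(), r.mean() / (r.std() + 1e-9)])

def a(w, data):              # optimized variable first, fixed arguments after
    return -b(data, w)[2]

n = 5
w0 = np.full(n, 1.0 / n)     # same starting point as len(symbols) * [1. / len(symbols)]
constraint = {'type': 'eq', 'fun': lambda x: np.sum(x) - 1}
bound = tuple((0, 1) for _ in range(n))

opts = minimize(a, w0, args=(data,), method='SLSQP',
                bounds=bound, constraints=constraint)
print(opts.x, opts.fun)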
The Silver Lining of the Retail Counterfeiting Culture in China

By The Grin Labs

At face value the retail counterfeiting culture in China would certainly seem to be a bad thing. No brand wants cheap knock-offs of their products undercutting their profits – especially those in the luxury market, where prestige and quality goods are brand pillars. But for global brands breaking into the Chinese market, the counterfeiting culture can be a secret weapon if used properly. Before I tell you why, let's take a look at the culture itself.

Counterfeiters don't discriminate

Counterfeiting is an endemic part of the societal fabric in China – not just with luxury products like handbags, but food, car parts, hospital equipment, etc. According to Homeland Security Newswire, "Some economists say 8 percent of China's GDP comes from the sales of counterfeit goods, from software to designer clothing." That was in 2011. Some sources claim that percentage is now higher. Either way, if that number is close to the mark there's not much incentive for even the government to actively work to curb the practice.

So while brands can try to control the counterfeiting problem – Kering SA, owner of Gucci, Yves St. Laurent, and other luxury brands recently filed suit with Alibaba for allegedly allowing counterfeiters to sell goods on their website – they may never get entirely away from it. It will be interesting to see what impact the outcome of this suit may have. In the meantime, without the resources to put an end to counterfeiting, brands are better off using it to their advantage.

What Chinese consumers want most in a brand

Chinese consumers shop brands they recognize and trust – which means getting established in China can be a bit of a catch-22. How do you get established without trust, and how do you gain trust if you're not established? This is one way the counterfeiting culture can benefit brands – by creating awareness and recognition – because counterfeiters don't make products unless there's a demand for them. From that perspective, the very nature of a counterfeiting presence is a plus.

The issue then becomes about control, and protecting your brand's IP in the supply chain, which means strategically getting your brand name out there so people get familiar with its value and quality, making authentic products more appealing than the lower-priced knock-offs.

How do you create this value for your brand? By:

Creating a website on Tmall to build recognition via a consumer-trusted platform you can use as your online store

Creating a presence on Taobao, as this is often a first stop for consumers, used more like a search engine for comparison shopping and finding the best prices

Focusing promotional efforts on social media – like WeChat – since most shopping in China happens via mobile (which is predicted to "surpass sales on personal computers in 2016 and reach 61.7% of online sales in 2018" according to Beijing-based iResearch), and traditional media isn't an option for B2C connections

Following these tips will help you build your brand's presence with Chinese consumers and show them the real deal is a better deal from a quality standpoint. Considering "91% of Chinese consumers say authenticity is the top factor in choosing a product" [Source: Lessons From China's Counterfeit Crackdown, Bruce Einhorn, BloombergBusinessweek, May 7, 2015] that's not an impossible task.
/*============================================================================== Program: 3D Slicer Copyright (c) Laboratory for Percutaneous Surgery (PerkLab) Queen's University, Kingston, ON, Canada. All Rights Reserved. See COPYRIGHT.txt or http://www.slicer.org/copyright/copyright.txt for details. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. This file was originally developed by Csaba Pinter, PerkLab, Queen's University and was supported through the Applied Cancer Research Unit program of Cancer Care Ontario with funds provided by the Ontario Ministry of Health and Long-Term Care ==============================================================================*/ #ifndef __qSlicerSegmentationsSettingsPanel_h #define __qSlicerSegmentationsSettingsPanel_h // Qt includes #include <QWidget> // CTK includes #include <ctkSettingsPanel.h> #include "qSlicerSegmentationsModuleExport.h" class QSettings; class qSlicerSegmentationsSettingsPanelPrivate; class vtkSlicerSegmentationsModuleLogic; class Q_SLICER_QTMODULES_SEGMENTATIONS_EXPORT qSlicerSegmentationsSettingsPanel : public ctkSettingsPanel { Q_OBJECT Q_PROPERTY(QString defaultTerminologyEntry READ defaultTerminologyEntry WRITE setDefaultTerminologyEntry) public: typedef ctkSettingsPanel Superclass; explicit qSlicerSegmentationsSettingsPanel(QWidget* parent = nullptr); ~qSlicerSegmentationsSettingsPanel() override; /// Segmentations logic is used for configuring default settings void setSegmentationsLogic(vtkSlicerSegmentationsModuleLogic* logic); vtkSlicerSegmentationsModuleLogic* segmentationsLogic()const; QString defaultTerminologyEntry(); public slots: protected slots: void setAutoOpacities(bool on); void setDefaultSurfaceSmoothing(bool on); void onEditDefaultTerminologyEntry(); void setDefaultTerminologyEntry(QString); void updateDefaultSegmentationNodeFromWidget(); signals: void defaultTerminologyEntryChanged(QString terminologyStr); protected: QScopedPointer<qSlicerSegmentationsSettingsPanelPrivate> d_ptr; private: Q_DECLARE_PRIVATE(qSlicerSegmentationsSettingsPanel); Q_DISABLE_COPY(qSlicerSegmentationsSettingsPanel); }; #endif
Structural rearrangements and chemical modifications in known cell penetrating peptide strongly enhance DNA delivery efficiency. Amphipathic peptides with unusual cellular translocation properties have been used as carriers of different biomolecules. However, the parameters which control the delivery efficiency of a particular cargo by a peptide and the selectivity of cargo delivery are not very well understood. In this work, we have used the known cell penetrating peptide pVEC (derived from VE-cadherin) and systematically changed its amphipathicity (from primary to secondary) as well as the total charge and studied whether these changes influence the plasmid DNA condensation ability, cellular uptake of the peptide-DNA complexes and in turn the efficiency of DNA delivery of the peptide. Our results show that although the efficiency of DNA delivery of pVEC is poor, modification of the same peptide to create a combination of nine arginines along with secondary amphipathicity improves its plasmid DNA delivery efficiency, particularly in presence of an endosomotropic agent like chloroquine. In addition, presence of histidines along with 9 arginines and secondary amphipathicity shows efficient DNA delivery with low toxicity even in absence of chloroquine in multiple cell lines. We attribute these enhancements in transfection efficiency to the differences in the mechanism of complex formation by the different variants of the parent peptide which in turn are related to the chemical nature of the peptide itself. These results exhibit the importance of understanding the physicochemical parameters of the carrier and complex in modulating gene delivery efficiency. Such studies can be helpful in improving peptide design for delivery of different cargo molecules.
Q: Seeing if a request succeeds from within a service worker I have the following code in my service worker: self.addEventListener('fetch', function (event) { var fetchPromise = fetch(event.request); fetchPromise.then(function () { // do something here }); event.respondWith(fetchPromise); }); However, it's doing some weird stuff in the dev console and seems to be making the script load asynchronously instead of synchronously (which in this context is bad). Is there any way to listen for when a request is completed without calling fetch(event.request) manually? For example: // This doesn't work self.addEventListener('fetch', function (event) { event.request.then(function () { // do something here }); }); A: If you want to ensure that your entire series of actions are performed before the response is returned to the page, you should respond with the entire promise chain, not just the initial promise returned by fetch. self.addEventListener('fetch', function(event) { event.respondWith(fetch(event.request).then(function(response) { // The fetch() is complete and response is available now. // response.ok will be true if the HTTP response code is 2xx // Make sure you return response at the end! return response; }).catch(function(error) { // This will be triggered if the initial fetch() fails, // e.g. due to network connectivity. Or if you throw an exception // elsewhere in your promise chain. return error; })); });
The present invention relates to a communication element to which a boundary scan element, used for wiring checks of electronic circuit substrates, is applied, and to a communication apparatus using the same. A boundary scan test method has been proposed as a method of checking whether or not ICs packaged on an electronic circuit substrate are properly interconnected, or whether or not internal processing is properly executed in the ICs themselves. This boundary scan test method is applied to an electronic circuit substrate comprising ICs in which a boundary scan element has been previously incorporated. The method has the feature that a connection check or an IC operation test can be performed even for a circuit substrate whose packaging density is so high that a so-called in-circuit test method cannot be employed. An example of the conventional boundary scan element is now outlined. FIG. 3 is a block diagram of a logic IC 100 to be tested comprising the boundary scan element. As a basic constitution, the IC 100 comprises input terminals 101, output terminals 102 and an internal logic 111. The IC 100 further comprises the boundary scan element. The boundary scan element comprises input-side boundary cells 103, output-side boundary cells 104, a TDI terminal 105 to which data is inputted, a TDO terminal 106 from which the data is outputted, a TMS terminal 107 to which a signal for switching operation modes is inputted, a TCK terminal 108 to which a clock signal is inputted, a TRS terminal 109 to which a reset signal is inputted, and a TAP circuit 110. The input-side and output-side boundary cells 103 and 104 are separately provided for the respective input and output terminals 101 and 102. All the boundary cells 103 and 104 are connected in series in a chain. The TDI terminal 105 and the TDO terminal 106 are connected, respectively, to the input-side boundary cell 103 and the output-side boundary cell 104 located at the two ends of the chain. The TAP circuit 110, synchronized to the clock signal from the TCK terminal 108, executes processing in accordance with the mode-switching signal from the TMS terminal 107. That is, the data is shifted through the boundary cells 103 and 104, or the data is inputted and outputted between the boundary cells 103 and 104 and the internal logic 111 or the input or output terminals 101 or 102. The TAP circuit 110 enters a reset state in accordance with the reset signal from the TRS terminal 109. This TRS terminal 109 is not always needed, because the reset state can be included in one of the commands for switching the operation modes from the TMS terminal 107. In the method of testing the IC 100 having such a constitution, the operation test for the IC is performed, e.g., as follows: test data is inputted from a host computer to the TDI terminal 105 in serial form, and the test data is shifted and set into each of the input-side boundary cells 103. Then, the set test data is outputted to and processed by the internal logic 111. Subsequently, the data from the internal logic 111 is set into the output-side boundary cells 104, and this data is then returned from the TDO terminal 106 to the host computer in serial form. The host computer compares the returned data with the test data which it previously sent out, whereby the host computer can distinguish whether or not the internal logic 111 operates normally.
The test for the connections between ICs is carried out, e.g., as follows: the test data is sent from the host computer and set into the output-side boundary cells 104 through the TDI terminal 105 and the input-side boundary cells 103. This data is sent out from the output terminals 102 to another IC connected to the output terminals 102 of the IC 100. Then, the host computer compares the test data received by the other IC with the test data which the host computer had previously sent out, whereby the host computer can distinguish whether the wiring between the ICs is connected or disconnected, or the like. On the other hand, the inventor has focused on the usefulness of the boundary scan element not merely as an element for checking wiring connections or the like, but as a communication element for controlling various terminal equipment such as a CCD camera. The inventor has therefore proposed a communication apparatus in which this boundary scan element is applied to the communication element (International Publication No. WO98/55925). However, the conventional boundary scan element is not satisfactory in terms of data transfer rate when used as a communication element. That is, the conventional boundary scan element has the following problem: in order to set the data inputted from the TDI terminal 105 into each of the boundary cells 103 or 104, the data must be shifted sequentially through each of the individual boundary cells 103 or 104. The same problem arises when the data set in the boundary cells 103 or 104 is outputted from the TDO terminal 106. The data transfer rate is not sufficient, particularly in the case of a large number of boundary cells 103 and 104. It is therefore an object of the present invention to provide a communication element to which the boundary scan element is applied and which can increase the data transfer rate, and a communication apparatus using the same.
According to the present invention, there is provided a communication element which comprises a plurality of input-side boundary cells; a plurality of output-side boundary cells corresponding to the input-side boundary cells; and a TAP circuit for controlling the input and output of data to/from the input-side and output-side boundary cells, the TAP circuit being connected to a TCK line to which a clock signal is inputted, a TMS line to which a mode signal for switching operation modes is inputted, and data input and output lines for inputting and outputting the data to/from terminal equipment which is an object of communication, wherein the input-side boundary cells are connected in parallel to the corresponding output-side boundary cells through the TAP circuit. In this element, the boundary cells are not connected in series in a chain as in the prior art; instead, the input-side boundary cells are connected in parallel to the corresponding output-side boundary cells through the TAP circuit. Thus, the data stored in the input-side boundary cells can be transferred to the corresponding output-side boundary cells in a single operation. Consequently, the data transfer rate can be increased. In the communication element of the present invention, the boundary cells are not connected in series. To input or output data between the communication element and the host computer or the like, the data is thus inputted or outputted directly to/from the boundary cells in parallel form, not in serial form through a TDI or TDO terminal as in the prior art. According to the present invention, there is also provided a communication apparatus which comprises a plurality of communication elements of the present invention; terminal equipment separately connected to each of the communication elements, for inputting and outputting data to/from the communication elements through the data input and output lines; and a host computer, wherein the communication elements are connected in series to the host computer. With this arrangement, the use of the communication element of the present invention allows the data transfer rate between the communication elements, and between the communication elements and the host computer, to be increased. Thus, a large amount of data can be processed. In the present invention, the terminal equipment means the object which the communication apparatus of the present invention communicates with. For example, a monitoring apparatus installed on every floor or in every room of a building, a security apparatus, or various robots in a production line correspond to this terminal equipment. Since the communication apparatus of the present invention can transfer data at a high rate, terminal equipment requiring large-capacity data, in particular, can also be the object of communication.
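The scaling difference between the conventional serial shifting and the parallel transfer described above can be sketched with a small software model. The following Python sketch is a toy illustration only; the boundary cells are modeled as plain lists rather than hardware registers, and the returned counts merely indicate how the cost grows with the number of cells.

def serial_shift_in(chain, word):
    # conventional element: every bit must ripple through the whole chain,
    # so loading an N-bit word costs N clock (TCK) cycles
    cycles = 0
    for bit in word:
        chain.insert(0, bit)
        chain.pop()
        cycles += 1
    return cycles

def parallel_transfer(input_cells, output_cells):
    # element of the present invention: each input-side cell is wired to its
    # output-side counterpart, so the whole transfer completes in one operation
    for i, value in enumerate(input_cells):
        output_cells[i] = value
    return 1

n = 8
input_cells = [1, 0, 1, 1, 0, 0, 1, 0]
output_cells = [0] * n
chain = [0] * n

print(serial_shift_in(chain, input_cells))           # 8 cycles for 8 cells
print(parallel_transfer(input_cells, output_cells))  # 1 operation regardless of n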
/* * Copyright 2000-2013 JetBrains s.r.o. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ /* * @author max */ package com.intellij.openapi.util; import org.jetbrains.annotations.Nullable; @FunctionalInterface public interface NullableComputable<T> extends Computable<T> { @Override @Nullable T compute(); }
Q: How to pass a member function as an argument?

I have the following classes:

typedef void (*ScriptFunction)(void);
typedef std::unordered_map<std::string, std::vector<ScriptFunction>> Script_map;

class EventManager
{
public:
    Script_map subscriptions;
    void subscribe(std::string event_type, ScriptFunction handler);
    void publish(std::string event);
};

class DataStorage
{
    std::vector<std::string> data;
public:
    EventManager &em;
    DataStorage(EventManager& em);
    void load(std::string);
    void produce_words();
};

DataStorage::DataStorage(EventManager& em) : em(em)
{
    this->em.subscribe("load", this->load);
};

I want to be able to pass DataStorage::load to EventManager::subscribe so I can call it later on. How can I achieve this in C++?

A: The best way to do this would be with an std::function:

#include <functional>

typedef std::function<void(std::string)> myFunction;
// Actually, you could and technically probably should use "using" here, but just to follow
// your formatting here

Then, to accept a function, you simply need to do the same thing as before:

void subscribe(std::string event_type, myFunction handler); // btw: could just as easily be called ScriptFunction I suppose

Now the tricky part; to pass a member function, you actually need to bind an instance of DataStorage to the member function (with a placeholder for the std::string argument). That would look something like this:

DataStorage myDataStorage;
EventManager manager;
manager.subscribe("some event type",
                  std::bind(&DataStorage::load, &myDataStorage, std::placeholders::_1));

Or, if you're inside a member function of DataStorage:

manager.subscribe("some event type",
                  std::bind(&DataStorage::load, this, std::placeholders::_1));
1. Introduction {#sec1-sensors-19-04362} =============== The internet of things (IoT) is the network of physical objects embedded with sensors that are enabling real-time observations about the world as it happens. With estimates of there being 50 billion connected objects by 2020 \[[@B1-sensors-19-04362]\], there will be an enormous amount of sensor observation data being continuously generated per second. These sensor observation data sources, in combination with existing data and services on the internet, are enabling a wide range of innovative and valuable applications and services in smart cities, smart grids, industry 4.0, intelligent transportation systems, etc. To be able to extract, meaningful information from heterogeneous sensor data sources in a variety of formats and protocols, the semantic web community has extended the Resource Description Framework (RDF) data model that has been widely used for representing web data, to connect dynamic data streams generated from IoT devices, e.g., sensor readings, with any relevant knowledge base, in order to create a single graph as an integrated database serving any analytical queries on a set of nodes/edges of the graph \[[@B2-sensors-19-04362],[@B3-sensors-19-04362],[@B4-sensors-19-04362],[@B5-sensors-19-04362]\]. However, most current approaches using the RDF data model for managing sensor data, called linked sensor data, assume that RDF stores are able to handle queries on rapidly updating data streams in conjunction with massive volumes of data. Data generated by sensors is also providing a meaningful spatio--temporal context, i.e., they are produced in specific locations at a specific time. Therefore, all sensor data items can be represented in three dimensions: the semantic, spatial and temporal dimensions. Consider the following example: "What was the average temperature during the past 30 min for Dublin city?". This simple example poses an aggregate query across weather temperature readings from all weather stations in Dublin city. In this example, the semantic dimension describes the average temperature for Dublin city. The spatial dimension describes the place (Dublin city). The temporal dimension describes the time when the temperature values were generated (within the past 30 min). Unfortunately, supporting such multidimensional analytical queries on sensor data is still challenging in terms of complexity, performance, and scalability. In particular, these queries imply heavy aggregation on a large number of data points along with computation-intensive spatial and temporal filtering conditions. Moreover, the high update frequency and large volume natures of our targeted systems (around ten thousand updates per second on billions of records already in the store) will increase the burden of answering the query within some seconds or milliseconds. On top of that, by their nature, such systems need to scale to millions of sensor sources and years of data. Motivated by such challenges, in this article, we present EAGLE, a scalable spatio--temporal query engine, which is able to index, filter, and aggregate a high throughput of sensor data together with a large volume of historical data stored in the engine. The engine is backed by distributed database management systems, i.e., OpenTSDB for temporal data and ElasticSearch for spatial data, and allows us to store a billion data points and ingest a large number of records per second while still being able to execute a spatio--temporal query in a timely manner. 
In summary, our contributions are as follows:

-   A distributed spatio--temporal sub-graph partitioning solution which significantly improves spatio--temporal aggregate query performance.
-   An implementation of a comprehensive set of spatial, temporal and semantic query operators supporting computation of implicit spatial and temporal properties in RDF-based sensor data.
-   An extensive performance study of the implementation using large real-world sensor datasets along with a set of spatio--temporal benchmark queries.

The remainder of the article is organized as follows. In [Section 2](#sec2-sensors-19-04362){ref-type="sec"}, we review related work and existing solutions. [Section 3](#sec3-sensors-19-04362){ref-type="sec"} describes the EAGLE engine architecture. The spatio--temporal storage model is given in [Section 4](#sec4-sensors-19-04362){ref-type="sec"}. In [Section 5](#sec5-sensors-19-04362){ref-type="sec"}, we present our spatio--temporal query language support through a series of examples. [Section 6](#sec6-sensors-19-04362){ref-type="sec"} elaborates on the implementation of our engine and its infrastructure to store and query sensor data. An experimental evaluation of this implementation follows in [Section 7](#sec7-sensors-19-04362){ref-type="sec"}. Finally, we conclude and discuss future work in the last section.

2. Background and Related Work {#sec2-sensors-19-04362}
==============================

2.1. Sensor Ontologies {#sec2dot1-sensors-19-04362}
----------------------

During the last decade, a large number of ontologies have been proposed which aim to address the challenge of modeling a sensor network and its data, and also to tackle the heterogeneity problems associated with the hardware, software, and data management aspects of sensors. More precisely, they provide a means to semantically describe the sensor networks, the sensing devices, and the sensor data, and to enable sensor data fusion. The state-of-the-art approach in this area is the work from the Open Geospatial Consortium Sensor Web Enablement (OGC SWE) working group \[[@B6-sensors-19-04362]\]. They have specified a number of standards that define formats for sensor data and metadata as well as sensor service interfaces. These standards allow the integration of sensors and sensor networks into the web, in what is called the sensor web. In particular, they provide a set of standard models and XML schemas for metadata descriptions of sensors and sensor systems, namely the SensorML \[[@B7-sensors-19-04362]\] and observations and measurements (O&M) models for data observed or measured by sensors \[[@B8-sensors-19-04362],[@B9-sensors-19-04362]\]. A lack of semantic compatibility, however, is the primary barrier to realizing a progressive sensor web. In \[[@B10-sensors-19-04362]\], Sheth et al. propose the semantic sensor web (SSW), which leverages current standardization efforts of the OGC SWE in conjunction with the semantic web activity of the World Wide Web Consortium W3C ([www.w3.org/2001/sw/](www.w3.org/2001/sw/)) to provide enhanced descriptions and meaning to sensor data. In comparison with the sensor web, the SSW addresses the lack of semantic compatibility by adding semantic annotations to the existing SWE standard sensor languages. In fact, these improvements aim to provide more meaningful descriptions of sensor data than SWE alone.
Moreover, the SSW acts as a linking mechanism to bridge the gap between the primarily syntactic XML-based metadata standards of the SWE and the RDF/OWL-based metadata standards of the semantic web. The work in \[[@B11-sensors-19-04362]\] describes a practical approach for building a sensor ontology, namely OntoSensor, that uses the SensorML specification and extends the suggested upper merged ontology (SUMO) \[[@B12-sensors-19-04362]\]. The objective of OntoSensor is to build a prototype sensor knowledge repository with advanced semantic inference capabilities to enable fusion processes using heterogeneous data. For that reason, in addition to reusing all SensorML's concepts \[[@B7-sensors-19-04362]\], OntoSensor provides additional concepts to describe the observation data, i.e., the geolocation of the observations, the accuracy of the observed data or the process to obtain the data. Similar to OntoSensor, the W3C Semantic Sensor Network Incubator group (SSN-XG) has defined the SSN ontology \[[@B3-sensors-19-04362]\] in order to overcome the missing semantic compatibility in OGC SWE standards, as well as the fragmentation of sensor ontologies into specific domains of application. The SSN ontology can be considered as a sort of standard for describing sensors and their resources with respect to the capabilities and properties of the sensors, measurement processes, observations, and deployment processes. It is worth mentioning that, although the SSN ontology provides most of the necessary details about different aspects of sensors and measurements, it does not describe domain concepts, time, location, etc. Instead, it can be easily associated with other sources of knowledge concerning, e.g., units of measurement, domain ontologies (agriculture, commercial products, environment, etc.). This helps to pave the way for the construction of any domain-specific sensors ontology. Because of its flexibility and adaptivity, the ontology has become more general and has been used in many research projects and applied to several different domains in recent years. Some of the most recently published works that utilize the SSN ontology are the OpenIoT Project \[[@B13-sensors-19-04362]\], the FIESTA-IoT (<http://fiesta-iot.eu/>) \[[@B14-sensors-19-04362]\], VITAL-IoT (<http://www.vital-iot.eu/>) and GeoSMA \[[@B15-sensors-19-04362]\]. The broad success of the initial SSN led to a follow-up standardization process by the first joint working group of the OGC and the W3C. This collaboration aims to revise the SSN ontology based on the lessons learned over the past number of years and more specifically, to address changes in scope and audience, some shortcomings of the initial SSN, as well as technical developments and trends in relevant communities. The resulting ontology, namely the SOSA ontology \[[@B16-sensors-19-04362]\], provides a more flexible coherent framework for representing the entities, relations, and activities involved in sensing, sampling, and actuation. The ontology is intended to be used as a lightweight, easy to use, and highly expendable vocabulary that appeals to a broad audience beyond the semantic web community, but that can be combined with other ontologies. The SOSA/SSN ontologies also form the core model that has been used to model our sensor data \[[@B17-sensors-19-04362],[@B18-sensors-19-04362]\]. 2.2. 
Triple Stores and Spatio--Temporal Support {#sec2dot2-sensors-19-04362}
-----------------------------------------------

The current standard query language for RDF, i.e., SPARQL 1.1, does not support spatio--temporal query patterns on sensor data. Recently, there have been several complementary works towards supporting spatio--temporal queries on RDF. For example, to enable spatio--temporal analysis, in \[[@B19-sensors-19-04362]\], Perry et al. propose the SPARQL-ST query language and introduce the formal syntax and semantics of their proposed language. SPARQL-ST extends the SPARQL language to support complex spatial and temporal queries on temporal RDF graphs containing spatial objects. With the same goal as SPARQL-ST, Koubarakis et al. propose stSPARQL \[[@B20-sensors-19-04362]\]. They introduce stRDF as a data model to represent spatial and temporal information, and the stSPARQL language to query against stRDF. Another example is \[[@B21-sensors-19-04362]\], where Gutierrez et al. propose a framework that introduces temporal RDF graphs to support temporal reasoning on RDF data. In this approach, the temporal dimension is added to the RDF model. A temporal query language for temporal RDF graphs is also provided. However, the aforementioned works commonly focus on enabling spatio--temporal query features, and hardly any of them fully address the performance and scalability issues of querying billions of triples \[[@B22-sensors-19-04362]\]. To deal with the performance and scalability of RDF stores, many centralized and distributed RDF repositories have been implemented to support storing, indexing and querying RDF data, such as Clustered TDB \[[@B23-sensors-19-04362]\], Inkling \[[@B24-sensors-19-04362]\], RDFStore (<http://rdfstore.sourceforge.net>), Jena \[[@B25-sensors-19-04362]\], and 4Store (<http://4store.org>). These RDF repositories are fast and able to scale up to many millions, or even a few billion, triples. However, none of these systems take spatio--temporal features into consideration. Toward supporting spatial queries on RDF stores, Brodt et al. \[[@B26-sensors-19-04362]\] and Virtuoso (<https://github.com/openlink/virtuoso-opensource>) utilize RDF query engines and spatial indices to manage spatial RDF data. Reference \[[@B26-sensors-19-04362]\] uses RDF-3x as the base index and adds a spatial index for filtering entities before or after RDF-3x join operations. Another example is OWLIM \[[@B27-sensors-19-04362]\], which supports a geospatial index in its Standard Edition (SE). However, none of them systematically address the issue of elasticity and scalability of spatio--temporal analytic functions to deal with the massive volume of sensor data. The technical details and the index performance are also not mentioned in such system descriptions. Moreover, these approaches only support limited spatial functions, and the spatial entities have to follow the GeoRSS GML \[[@B28-sensors-19-04362]\] model. Such systems are not aware of the temporal nature of linked sensor data that might be distributed over a long time span. For example, in our evaluations, most of the data is continuously archived for 10 months, or even 10 years in the case of weather data. Therefore, such systems can easily run into scalability issues when the data grows.
In one of our experiments \[[@B29-sensors-19-04362]\], a triple store crashed after a few weeks ingesting weather sensor readings from 70,000 sensor stations and the system could not reliably answer any simple queries with a few billion triples in the store. Taking such limitations into consideration, the work presented in this article is a new evolution of our series of efforts \[[@B18-sensors-19-04362],[@B29-sensors-19-04362],[@B30-sensors-19-04362],[@B31-sensors-19-04362],[@B32-sensors-19-04362],[@B33-sensors-19-04362]\] towards managing sensor data, together with other related work in the community. The main focus of this work is designing a query engine that is able to support complex spatio--temporal queries tailored towards managing linked sensor data, while the engine is also capable of dealing with the aforementioned performance and scalability issues. The design of such an engine is presented in the next section. 3. System Architecture {#sec3-sensors-19-04362} ====================== The architecture of EAGLE is illustrated in [Figure 1](#sensors-19-04362-f001){ref-type="fig"}. The engine accepts sensor data in RDF format as input and returns an output in SPARQL Result form (<https://www.w3.org/TR/rdf-sparql-XMLres/>). The general processing works as follows. When the linked sensor data is fed to the system, it is first analyzed by the data analyzer component. The data analyzer is responsible for analyzing and partitioning the input data based on the RDF patterns that imply the spatial and temporal context. The output sub-graphs of the data analyzer will be converted by the data transformer to the compatible formats of the underlying databases. The index router module then receives the transformed data and forwards them to the corresponding sub-database components in the data manager. In the data manager, we choose Apache Jena TDB (<https://jena.apache.org/>), OpenTSDB (<http://opentsdb.net/>) \[[@B34-sensors-19-04362]\], and ElasticSearch (<https://www.elastic.co/>) as underlying stores for such partitioned sub-graphs. To execute the spatio--temporal queries, a query engine module is introduced. The query engine consists of several sub-components that are responsible for parsing the query, generating the query execution plan, rewriting the query into sub-queries and delegating sub-query execution processes to the underlying databases. The data manager executes these sub-queries and returns the query results. After that, the data transformer transforms the query results accordingly to the format that the query delegator requires. Details of EAGLE's components are described in the following subsections. 3.1. Data Analyzer {#sec3dot1-sensors-19-04362} ------------------ As mentioned above, for the input sensor data in RDF format, the data analyzer evaluates and partitions them to the corresponding sub-graphs based on their (spatial, temporal or text). Data characteristics are specified via a set of defined RDF triple patterns. In EAGLE, these RDF triple patterns are categorized into three types: spatial patterns, temporal patterns, and text patterns. The spatial patterns are used to extract the spatial data that need to be indexed. Similarly, temporal patterns extract the sensor observation value along with its timestamp. The text patterns extract the string literals. An example of the partitioning process is illustrated in [Figure 2](#sensors-19-04362-f002){ref-type="fig"}. In this example, we define (?s wgs84:lat ?lat. 
?s wgs84:long ?long) and (?s rdfs:label ?label) as the triple patterns used for extracting spatial and text data, respectively. For instance, assume that the system receives the set of input triples shown in Listing 1. ![](sensors-19-04362-i001.jpg) As demonstrated in [Figure 2](#sensors-19-04362-f002){ref-type="fig"}, the two triples (:dublinAirport wgs84:lat "53.1324"^^xsd:float. :dublinAirport wgs84:long "18.2323"^^xsd:float) are found to match the defined spatial patterns (?s wgs84:lat ?lat. ?s wgs84:long ?long), and thus are extracted as a spatial sub-graph. Similarly, the text sub-graph (:dublinAirport rdfs:label "Dublin Airport") is extracted. These sub-graphs will be transformed into compatible formats to be used by the indexing process in the data manager. The data transformation process is presented in the following section.

3.2. Data Transformer {#sec3dot2-sensors-19-04362}
---------------------

The data transformer is responsible for converting the input sub-graphs received from the data analyzer into index entities. The index entities are the data records (or documents) constructed in a data structure that is compatible with the data manager, so that they can be indexed and stored there. Returning to the example in [Figure 2](#sensors-19-04362-f002){ref-type="fig"}, the data transformer transforms the spatial sub-graph and text sub-graph into ElasticSearch documents. In addition to transforming the sub-graphs into index entities, the data transformer also has to transform the query outputs generated by the data manager into the format that the query delegator requires.

3.3. Index Router {#sec3dot3-sensors-19-04362}
-----------------

The index router receives the index entities generated by the data transformer and forwards them to the corresponding database in the data manager. For example, the spatial and text index entities are routed to ElasticSearch for indexing, while those that have temporal values are transferred to the OpenTSDB cluster. Index entities that do not match any spatial or temporal patterns are stored in the normal triple store. Because access methods vary across different databases, the index router has to support multiple access protocols such as REST APIs, JDBC, MQTT, etc.

3.4. Data Manager {#sec3dot4-sensors-19-04362}
-----------------

Rather than rebuilding the spatio--temporal indices and functions into one specific system, our data manager module adopts a loosely coupled hybrid architecture that consists of different databases for managing the different partitioned sub-graphs. More precisely, we use ElasticSearch to index the spatial objects and text values that occur in sensor metadata. Similarly, we use a time-series database, namely OpenTSDB, for storing temporal observation values. The reasons for choosing ElasticSearch and OpenTSDB can be explained as follows: (1) ElasticSearch and OpenTSDB both provide flexible data structures which enable us to store sub-graphs which share similar characteristics but have different graph shapes. For example, stationA and stationB are both spatial objects but they have different spatial attributes (i.e., point vs. polygon, names vs. label, etc.). Moreover, such structures also allow us to dynamically add a flexible number of attributes in a table without using list, set, or bag attributes or redefining the data schema. (2) ElasticSearch supports spatial and full-text search queries.
Meanwhile, OpenTSDB provides a set of efficient temporal analytical functions on time-series data. All of these features are key requirements for managing sensor data. (3) Finally, these databases offer clustering features so that we are able to address the "big-data" issue, which is problematic for traditional solutions when dealing with sensor data. The non-spatio--temporal information that does not need to be indexed in the above databases is stored in the native triple store. We currently use Apache Jena TDB to store such generic data. In the case of a small dataset, it can be easily loaded into the RAM of a standalone workstation to boost performance.

### 3.4.1. Spatial-Driven Indexing {#sec3dot4dot1-sensors-19-04362}

To enable querying of spatial data, we transform each sub-graph that contains spatial objects into a geo document and store it in ElasticSearch. [Figure 2](#sensors-19-04362-f002){ref-type="fig"} demonstrates a process that transforms a semantic spatial sub-graph into an ElasticSearch geo document. Please be aware that, along with spatial attributes, ElasticSearch also allows the user to add additional attributes such as date-time, text description, etc. This advanced feature allows us to develop more complex filters that combine spatial filters and full-text search in a query. The ElasticSearch geo document structure is shown in Listing 2. In this data structure, location is an ElasticSearch spatial entity used to describe geo--spatial information. It has two properties: type and coordinates. Type can be point, line, polygon, or envelope, while coordinates can be one or more arrays of longitude/latitude pairs. Details of the spatial index implementation will be discussed in [Section 5.1](#sec5dot1-sensors-19-04362){ref-type="sec"}. ![](sensors-19-04362-i002.jpg)

### 3.4.2. Temporal-Driven Indexing {#sec3dot4dot2-sensors-19-04362}

A large amount of sensor observation data is fed as time-series of numeric values such as temperature, humidity and wind speed. For these time-series data, we choose OpenTSDB (Open Time-Series Database) as the underlying scalable temporal database. OpenTSDB is built on top of HBase \[[@B35-sensors-19-04362]\] so that it can ingest millions of time-series data points per second. As shown in [Figure 1](#sensors-19-04362-f001){ref-type="fig"}, input triples which comprise numeric values and time-stamps are analyzed and extracted based on the predefined temporal patterns. Based on this extracted data, an OpenTSDB record is constructed and then stored in OpenTSDB tables. In addition to the numeric values and timestamps, additional information can be added to each OpenTSDB data record. Such information can also be used to filter the temporal data; it is selected based on how regularly it is used for filtering data in SPARQL queries. For example, a user might want to filter data by type of sensor, type of reading, etc. The data organization and schema design in OpenTSDB will be discussed in [Section 4](#sec4-sensors-19-04362){ref-type="sec"}.

3.5. Query Engine {#sec3dot5-sensors-19-04362}
-----------------

As shown in the EAGLE architecture in [Figure 1](#sensors-19-04362-f001){ref-type="fig"}, the query processing of EAGLE is performed by the query engine, which consists of a query parser, a query optimizer, a query rewriter and a query delegator. It is important to mention that our query engine is developed on top of Apache Jena ARQ.
Therefore, the query parser is identical to the one in Jena. The query optimizer, query rewriter and query delegator have been implemented by modifying the corresponding components of Jena. For the query optimizer, in addition to Apache Jena's optimization techniques, we also propose a learning optimization approach that is able to efficiently predict a query execution plan for an unforeseen given spatio--temporal query. Details of our approach can be found in our recent publication \[[@B33-sensors-19-04362]\]. The query engine works as follows. First, for a given query, the query parser translates it and generates an abstract syntax tree. Note that, we have modified the query parser so that it can adapt our spatio--temporal query language. Next, the syntax tree is then mapped to the SPARQL algebra expression, resulting in a query tree. In the query tree, there are two types of nodes, namely non-leaf nodes and leaf nodes. The non-leaf nodes are algebraic operators such as joins, and leaf nodes are the variables present in the triple patterns of the given query. Following the SPARQL Syntax Expressions (<https://jena.apache.org/documentation/notes/sse.html>), Listing 3 presents a textual representation of the query tree corresponding to the spatio--temporal query in Example 5 of [Section 6](#sec6-sensors-19-04362){ref-type="sec"}. ![](sensors-19-04362-i003.jpg) Please be aware that the query tree generated by the query parser is just a plain translation of the initial query to the SPARQL algebra. At this stage, there is no optimization technique being applied yet. After that, the query tree is processed by the query optimizer. This component is responsible for determining the most efficient execution plan with regard to the query execution time and resource consumption. After having a proper execution plan, it is passed to the query rewriter for any further processing needed. Basically, the query rewriter rewrites the query operators to the compatible query language of the underlying database. In the next step, the query delegator delegates these rewritten sub-queries to the corresponding database in the data manager. For example, the sub-query that contains the spatial operator or full-text search will be evaluated by ElasticSearch, while the temporal operator is executed by OpenTSDB. For the non-spatio--temporal queries, they are processed by Jena. After having the sub-queries executed, the query results need to be transformed to the format that the query delegator requires. The query delegator then performs any post-processing actions needed. The final step involves formatting the results to be returned to the user. 4. A Spatio--Temporal Storage Model for Efficiently Querying on Sensor Observation Data {#sec4-sensors-19-04362} ======================================================================================= As mentioned in [Section 3](#sec3-sensors-19-04362){ref-type="sec"}, we chose OpenTSDB as an underlying temporal database for managing sensor observation data. In this section, we present a preliminary design of the OpenTSDB data schema used for storing these data sources. Due to the data-centric nature of wide column key-value stores of OpenTSDB, there are two most important decisions on storage model design that can affect to the system performance, which are: the form of the row keys and the partition of data. 
This section presents, in detail, our decisions for the rowkey design and the data partitioning strategy, which aim to enhance the data loading and query performance on sensor observation data.

4.1. OpenTSDB Storage Model Overview {#sec4dot1-sensors-19-04362}
------------------------------------

OpenTSDB is a distributed, scalable, time-series database built on top of Apache HBase \[[@B35-sensors-19-04362]\], which is modeled after Google's BigTable \[[@B36-sensors-19-04362]\]. It consists of a time-series daemon (TSD) along with a set of command-line utilities. Data reading and writing operations in OpenTSDB are primarily achieved by running one or more of the TSDs. Each TSD is independent: there is no master and no shared state, so many TSDs can be deployed at the same time, depending on the loading throughput requirement. The OpenTSDB architecture is illustrated in [Figure 3](#sensors-19-04362-f003){ref-type="fig"}. Data in OpenTSDB are stored in an HBase table. A table contains rows and columns, much like a traditional database. A cell in a table is the basic storage unit, which is defined as \<RowKey,ColumnFamily:ColumnName,TimeStamp\>. There are two tables in OpenTSDB, namely tsdb and tsdb-uid. The tsdb-uid table is used to maintain an index of globally unique identifiers (UIDs) and values of all metrics and tags for data points collected by OpenTSDB. In this table, two columns exist: one called "name" that maps a UID to a string, and another, called "id", mapping strings to UIDs. Each row in the column family will have at least one of the following three columns with mapping values: metrics for mapping metric names to UIDs, tagk for mapping tag names to UIDs, and tagv for mapping tag values to UIDs. [Figure 4](#sensors-19-04362-f004){ref-type="fig"} illustrates the logical view of the tsdb-uid table. A central component of the OpenTSDB architecture is the tsdb table that stores our time-series observation data. This table is originally designed not only to support time-based queries but also to allow additional filtering on metadata, represented by *tag* and *tag value*. This is accomplished through careful design of the rowkey. As described in [Table 1](#sensors-19-04362-t001){ref-type="table"}, an OpenTSDB rowkey consists of three bytes for the metric id, four bytes for the base timestamp, and three bytes each for the tag name ID and tag value ID, repeated. [Figure 5](#sensors-19-04362-f005){ref-type="fig"} presents an example of a tsdb rowkey. As shown in this figure, the schema contains only a single column family, namely "t". This is due to the requirement of HBase that a table has to contain at least one column family \[[@B37-sensors-19-04362]\]. In OpenTSDB, the column family is not so important as it does not affect the organization of data. The column family "t" might consist of one or many column qualifiers representing the time offset (delta) from the base timestamp. In this example, 16 is the value, *1288946927* is the base timestamp and the column qualifier *+300* is the offset from the base timestamp.

4.2. Designing a Spatio--Temporal Rowkey {#sec4dot2-sensors-19-04362}
----------------------------------------

In order to make well-informed choices for the rowkey design, we first identified common data access patterns required by user applications when querying the sensor observation data.
In the following, we enumerate a few common queries that can be expected from the realistic sensor-based applications presented in \[[@B4-sensors-19-04362],[@B13-sensors-19-04362],[@B38-sensors-19-04362],[@B39-sensors-19-04362]\]:

-   A user may request meteorological information of an area over a specific time interval. The query may include more than one measurement value, e.g., humidity and wind speed along with the temperature.
-   A user may request the average observation value over a specific time interval using a variable temporal granularity, i.e., hourly, daily, monthly, etc.
-   A user may request statistical information about the observation data that are generated by a specific sensor station.
-   A user may ask for statistical information, such as the hottest month over the last year for a specific place of residence. Such queries can become more complex if the residence address is not determined by city name or postal code but by its coordinates.

There can be different rowkey design approaches for answering the aforementioned queries. Nevertheless, to have fast access to the relevant data based on the rowkey, two points need to be taken into consideration when designing a rowkey schema for storing sensor observation data in the OpenTSDB table: (1) data should be evenly distributed across all RegionServers to avoid region hot-spotting \[[@B37-sensors-19-04362]\]. Note that a bad key design will lead to sub-optimal load distribution. The solution to address this issue will be presented in [Section 4.3](#sec4dot3-sensors-19-04362){ref-type="sec"}. (2) The spatio--temporal locality of data should be preserved. In other words, data from all the sensors located within the same area should be stored in the same partitions on disk. The latter is essential in order to accelerate range scans, since users will probably request data of a specific area over a time interval instead of just a single point in time. Starting with the row key schema, we have to decide what information will be stored in the row key, and in which order. Since spatial information is usually the most important aspect of user queries, encoding the sensor location in the rowkey is prioritized. In this regard, a geohash algorithm is selected. Recall that a geohash is a function that turns a latitude and longitude into a hash string. A special feature of geohash is that, for a given geohash prefix, all the points within the corresponding area share that common prefix. To make use of this feature, we encode the first three characters of the geohash as the metric uid of our rowkey schema. The length of the geohash prefix that is used to encode the metric uid can vary, depending on the data density. Data stored in the tsdb table are sorted on the rowkey; thus, encoding the geohash as the metric uid, which is the first element of the rowkey, ensures that the data of sensor stations close to each other in space are close to each other on disk. Next, we append the measurement timestamp as the second element of the rowkey in order to preserve temporal ordering. At this stage, we accomplish goal (2). After defining the first two elements of the row key, the tag names and tag values must be specified. In OpenTSDB, tags are used for filtering data. Based on the summary of common data access patterns above, we recognize that users may filter data by a detailed location, by a specific sensor, or by a single type of sensor reading.
Therefore, the following tags are defined: (1) the geohash tag to store the full geohash string representing the sensor station location; (2) the sensorId tag to store the full IRI (Internationalized Resource Identifier) of the sensor that generates the corresponding observation data; and (3) the readingtype tag to indicate the observed property of the observation data. These tags are then appended to the end of the rowkey. The full form of our proposed row key design is depicted in [Figure 6](#sensors-19-04362-f006){ref-type="fig"}.

4.3. Spatio--Temporal Data Partitioning Strategy {#sec4dot3-sensors-19-04362}
------------------------------------------------

Data partitioning has a significant impact on parallel processing platforms like OpenTSDB. If the sizes of the partitions, i.e., the amount of data per partition, are not balanced, a single worker node has to perform all the work while other nodes idle. To avoid this imbalance, in this section we present our data partitioning strategy, which splits data into multiple partitions and also exploits the spatio--temporal characteristics of sensor data. As mentioned earlier, we store observation data in the OpenTSDB *tsdb* table, which is originally an HBase table. By design, an HBase table can consist of many regions. A region is a table storage unit that contains all the rows between the start key and the end key assigned to that region. Regions are managed by the Region Servers, as illustrated in [Figure 7](#sensors-19-04362-f007){ref-type="fig"}. Note that, in HBase, each region server serves a set of regions, and a region can be served only by a single region server. The HMaster is responsible for assigning regions to region servers in the cluster. Initially, when a table is created, it is allocated a single region. Data are then inserted into this region. If the number of data records stored in this region exceeds the given threshold, HBase will partition it into two roughly equal-sized child regions. As more and more data are inserted, this splitting operation is performed recursively. [Figure 8](#sensors-19-04362-f008){ref-type="fig"} describes the table splitting in HBase. Basically, table splitting can be performed automatically by HBase. The goal of this operation is to avoid hot-spotting. However, table splitting is a costly task and can result in increased latency, especially during heavy write loads. In fact, splitting is typically followed by regions moving around to balance the cluster, which adds overhead and heavily affects cluster performance. Therefore, to avoid this costly operation, we partition the tsdb table at the time of table creation using the pre-splitting method. For different data sources, the data partitioning strategy might vary, as it is highly dependent on the rowkey distribution. Therefore, a good rowkey design is also a key factor in the effectiveness of a partitioning strategy. Although HBase already includes partitioners, they do not make use of the spatio--temporal characteristics. In our approach, we partition the tsdb table into a pre-configured number of regions. Each region is assigned a unique range of geohash prefixes. [Figure 9](#sensors-19-04362-f009){ref-type="fig"} illustrates our spatio--temporal data partitioning strategy. In this figure, region 1 is assigned the range \[0u1--9xz\], indicating that all data records that have rowkey prefixes within the range of \[0u1--9xz\] will be stored in region 1. A simplified sketch of this rowkey construction and prefix-based region routing is given below.
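To make the rowkey layout of [Figure 6](#sensors-19-04362-f006){ref-type="fig"} and the prefix-based region assignment more concrete, the following Python sketch builds a human-readable analogue of the rowkey and determines which pre-split region would store it. The helper names, the plain-string key representation (OpenTSDB actually stores fixed-width UIDs rather than strings), and the split boundaries are illustrative assumptions, not EAGLE's actual implementation.

```python
import bisect
from datetime import datetime, timezone

_BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash_encode(lat, lon, precision=12):
    """Standard geohash encoding: interleave longitude/latitude bits, base32-encode."""
    lat_range, lon_range = [-90.0, 90.0], [-180.0, 180.0]
    chars, bits, ch, even = [], 0, 0, True
    while len(chars) < precision:
        rng, val = (lon_range, lon) if even else (lat_range, lat)
        mid = (rng[0] + rng[1]) / 2
        ch <<= 1
        if val >= mid:
            ch |= 1
            rng[0] = mid
        else:
            rng[1] = mid
        even = not even
        bits += 1
        if bits == 5:                      # every 5 bits -> one base32 character
            chars.append(_BASE32[ch])
            bits, ch = 0, 0
    return "".join(chars)

def build_rowkey(lat, lon, ts, sensor_iri, reading_type):
    """Readable analogue of the rowkey in Figure 6:
    <3-char geohash prefix (metric)> <base timestamp> <tag pairs>."""
    full_geohash = geohash_encode(lat, lon)
    metric = full_geohash[:3]                       # spatial locality comes first
    base_ts = int(ts.timestamp()) // 3600 * 3600    # hourly base timestamp, as in OpenTSDB
    tags = {"geohash": full_geohash, "sensorId": sensor_iri, "readingtype": reading_type}
    tag_part = "".join(f"|{k}={v}" for k, v in sorted(tags.items()))
    return f"{metric}:{base_ts}{tag_part}"

# Hypothetical start keys of the pre-split regions (region 0 holds keys below "9y0").
REGION_START_KEYS = ["9y0", "gc0", "u00"]

def target_region(rowkey):
    """Index of the pre-split region whose geohash-prefix range covers this rowkey."""
    return bisect.bisect_right(REGION_START_KEYS, rowkey[:3])

if __name__ == "__main__":
    key = build_rowkey(53.4264, -6.2499, datetime(2018, 3, 10, 12, tzinfo=timezone.utc),
                       "got-res:Sensor/example", "air_temperature")
    print(key, "-> region", target_region(key))
```

Because keys are compared lexicographically, all stations whose geohashes start with the same three characters map to the same region, which is exactly the spatial locality the partitioning strategy relies on.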
By applying the spatio--temporal partitioning strategy, we ensure that all sensor data that are near to each other in time and space will be stored in the same partition. As demonstrated later in our experiments in [Section 7](#sec7-sensors-19-04362){ref-type="sec"}, with the help of this strategy, the EAGLE engine is able to quickly locate what partitions actually have to be processed for a query. For example, a spatial intersect query only has to check the items of partitions where the partition bounds themselves intersect with the query object. Such a check can decrease the number of data items to process significantly and thus, also reduce the processing time drastically. 5. System Implementation {#sec5-sensors-19-04362} ======================== In this section, we will present in details the EAGLE's implementation based on the architecture presented in [Section 3](#sec3-sensors-19-04362){ref-type="sec"}. 5.1. Indexing Approach {#sec5dot1-sensors-19-04362} ---------------------- In order to ensure efficient execution of spatio--temporal queries in EAGLE, we must provide a means to extract and index portions of the sensor data based on spatial, temporal and text values. In this section, we firstly present how to define triple patterns for extracting spatial, temporal and text data. After that, we describe in detail the indexing schemes for each aspect of sensor data. ### 5.1.1. Defining Triple Patterns for Extracting Spatio--Temporal Data {#sec5dot1dot1-sensors-19-04362} As mentioned earlier, spatio--temporal and text data included in input RDF sensor data are extracted if their data graph matches the pre-defined triple patterns. In EAGLE, we support several common triple patterns already defined in a set of widely-used ontologies for annotating sensor data, such as GeoSPARQL, OWL-Time, WGS84, SOSA/SSN. For example, we support the GeoSPARQL pattern (?s geo:asWKT ?o) for extracting spatial data. In addition to the commonly used patterns, the ones with user-customized vocabularies are also allowed in our engine. All triple patterns for extracting spatio--temporal data are stored in the data analyzer component. In EAGLE's implementation, these patterns can be defined by either in the configuration files or via provided procedures. Listing 4 illustrates an example of using configuration file to define triple patterns for extracting spatial data. In this example, the two defined predicates, wgs84:lat/wgs84:long and geo:asWKT, are used to extract the spatial information from input RDF graphs. To reduce the learning efforts, our configuration file syntax fully complies with the Jena assembler description syntax \[[@B40-sensors-19-04362]\]. The process to define triple patterns for extracting temporal data is a bit more complicated. As mentioned in [Section 4.3](#sec4dot3-sensors-19-04362){ref-type="sec"}, in addition to the observation value and its timestamp, our OpenTSDB rowkey scheme also stores other attributes such as the full geohash prefix, observed property, sensor URI, etc. Therefore, the triple patterns for extracting this additional information should also be defined. An example of defining triple patterns for extracting temporal data is illustrated in Listing 5. ![](sensors-19-04362-i004.jpg) ![](sensors-19-04362-i005.jpg) In the above example, triple patterns for extracting temporal value are defined as an instance of the temporal:EntityDefinition. Its property, temporal:hasTemporalPredicate, indicates the RDF predicates used in the matching process of temporal data and timestamp. 
For example, a pair (temporal:value sosa:hasSimpleResult) denotes that the object of triples that match pattern (?s sosa:hasSimpleResult ?o) will be extracted as a temporal value. Similarly, a pair (temporal:time sosa:resultTime) specifies the predicate sosa:resultTime used for extracting the timestamp. Finally, triple patterns that describe additional information are defined under the temporal:hasMetadataPredicate property, i.e., the sensor URI (extracted by sosa:madeBySensor) and the reading type (extracted by sosa:observedProperty). It is worth mentioning that the current generation of EAGLE supports only the time instant. Time interval support will be added in the next version. ### 5.1.2. Spatial and Text Index {#sec5dot1dot2-sensors-19-04362} We store spatial and text data in the ElasticSearch cluster. Therefore, we first need to define the ElasticSearch mappings for storing these data. In ElasticSearch, mapping is the process of defining how a document, and the fields it contains, are stored and indexed. The ElasticSearch geo mapping for storing spatial objects is shown in Listing 6. In this mapping, the uri field stores the geometry IRI, and the full_geohash field stores the 12-bit geohash string of the sensor location. Similarly, the ElasticSearch mapping for storing text value is shown in Listing 7. In EAGLE, we support both bulk and near real-time data indexing. Bulk index is used to import data that are stored in files or in the triple store. For this, we provide a procedure build_geo_text_index(). After having the ElasticSearch mappings defined, the build_geo_text_index() is called to construct a spatial index for a given dataset. The pseudo code of this procedure is given in Algorithm 1. In contrast to the bulk index, the near real-time index is used to index the data that are currently streaming to the engine. In this regard, a procedure dynamic_geo_text_index() is introduced to extract spatial and text data from a streaming triple and index them in ElasticSearch. Algorithm 2 describes the pseudo code of the dynamic_geo_text_index() procedure. ![](sensors-19-04362-i006.jpg) ![](sensors-19-04362-i007.jpg) Algorithm 1: A procedure that will read a given dataset and index its spatial and text data in ElasticSearch. Algorithm 2: A procedure that will read a streaming triple and index its spatial and text data in ElasticSearch. ### 5.1.3. Temporal Index {#sec5dot1dot3-sensors-19-04362} We provide the procedure, namely build_temporal_index, to construct a temporal index for given sensor observation data. The build_temporal_index procedure is split into three steps, as illustrated in Algorithm 3. The procedure is explained as follows. Firstly, the sensor metadata is loaded into the system memory. This metadata is used later for quickly retrieving information needed for constructing OpenTSDB data row, such as sensor location, observed properties, etc. The loaded metadata can be stored in a key-value data structure such as hashmap, array, etc. In the second step, we extract the observation value, its timestamp, and the IRI of source sensor based on the defined triple patterns. After having this information extracted, corresponding observed property and sensor location are then retrieved by querying the loaded metadata in step 1. Thereafter, from the retrieved sensor location, the corresponding geohash prefix is generated via a build_geohash_prefix procedure. The second step is demonstrated in [Figure 10](#sensors-19-04362-f010){ref-type="fig"}. 
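For illustration, the following Python sketch ties the three steps together for a single observation: the metadata lookup, the assembly of a data point following the schema of [Section 4.2](#sec4dot2-sensors-19-04362){ref-type="sec"}, and the final storage step described next, here performed through OpenTSDB's HTTP /api/put endpoint. The endpoint URL, the in-memory metadata dictionary and the example geohash are assumptions made for this sketch; the formal procedure is the one outlined in Algorithm 3.

```python
import requests

OPENTSDB_PUT_URL = "http://localhost:4242/api/put"   # assumed TSD endpoint

# Step 1 (sketched): sensor metadata preloaded into memory, keyed by sensor IRI.
SENSOR_METADATA = {
    "got-res:Sensor/example": {
        "geohash": "gc7x3r04z754",          # precomputed full geohash of the station location
        "observedProperty": "air_temperature",
    },
}

def index_observation(sensor_iri, value, epoch_seconds):
    """Steps 2 and 3: look up the sensor metadata, assemble an OpenTSDB data point,
    and store it in the tsdb table through the HTTP API."""
    meta = SENSOR_METADATA[sensor_iri]
    full_geohash = meta["geohash"]
    data_point = {
        "metric": full_geohash[:3],          # 3-character geohash prefix as the metric
        "timestamp": epoch_seconds,          # Unix epoch seconds of the observation
        "value": value,
        "tags": {                            # tags used later for filtering
            "geohash": full_geohash,
            # OpenTSDB restricts tag-value characters, so the IRI is escaped here.
            "sensorId": sensor_iri.replace(":", "_"),
            "readingtype": meta["observedProperty"],
        },
    }
    response = requests.post(OPENTSDB_PUT_URL, json=data_point, timeout=10)
    response.raise_for_status()

# Example: one temperature reading extracted by the temporal triple patterns.
index_observation("got-res:Sensor/example", value=7.3, epoch_seconds=1520683200)
```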
Algorithm 3: A procedure that will read observation data and index its temporal information in OpenTSDB.

The final step is to generate an OpenTSDB data record from the data extracted in the previous steps and store it in the OpenTSDB tsdb table. Data are indexed by calling OpenTSDB APIs such as the *put* command, REST APIs, etc. [Figure 11](#sensors-19-04362-f011){ref-type="fig"} illustrates a simple data insert operation in OpenTSDB using the *put* command.

5.2. Query Delegation Model {#sec5dot2-sensors-19-04362}
---------------------------

The query execution process of EAGLE is implemented by a query delegation model which breaks the input query into sub-queries that can be delegated to the underlying sub-components such as ElasticSearch, OpenTSDB, and Apache Jena. In this model, a spatio--temporal query can be represented by the SPARQL query graph model (SQGM) \[[@B41-sensors-19-04362]\]. A query translated into SQGM can be interpreted as a planar rooted directed labeled graph with vertices and edges representing operators and data flows, respectively. In SQGM, an operator processes and generates either an RDF graph (a set of RDF triples), a set of variable bindings or a boolean value. Every operator has the properties input and output. The property input specifies the data flow(s) providing the input data for an operator, and output specifies the data flow(s) pointing to another operator consuming the output data. The evaluation of the graph is implemented as a post-order traversal, during which the data are passed from the previous node to the next. In this tree, each child node can be executed individually as an asynchronous task, which can be carried out in different processes on different computers. Therefore, our system delegates some of those evaluation tasks to different distributed backend repositories, which provide certain function sets, e.g., geospatial functions (by ElasticSearch), temporal analytical functions (by OpenTSDB) and BGP matching (by Jena), and achieves the best performance in parallel. [Figure 12](#sensors-19-04362-f012){ref-type="fig"} shows an example of an SQGM tree in which a spatial filter node is rewritten into a geospatial query and then delegated to ElasticSearch, while the BGP matching query is executed by Jena.

6. Query Language Support {#sec6-sensors-19-04362}
=========================

In this section, we present our SPARQL query language extensions for querying linked sensor data. We adopt the GeoSPARQL syntax \[[@B42-sensors-19-04362]\] in our proposed extensions for querying topological relations of spatial objects. Furthermore, we also introduce a set of novel temporal analytical property functions for querying temporal data.

6.1. Spatial Built-in Condition {#sec6dot1-sensors-19-04362}
-------------------------------

Theoretically, a spatial built-in condition is used to express the spatial constraints on spatial variables. As previously mentioned, our proposed SPARQL extensions adopt the GeoSPARQL syntax for querying the spatial aspect of sensor data. Therefore, in this section, we paraphrase the notion of the spatial built-in conditions that are related to EAGLE's implementation. More details of the GeoSPARQL built-in conditions can be found in \[[@B42-sensors-19-04362]\]. It is important to mention that, within the scope of this paper, we only focus on qualitative spatial functions. A qualitative spatial function is a Boolean function $\mathit{f}_{\mathit{s}}$, defined as follows:
$$\mathit{f}_{\mathit{s}}:\mathit{G} \times \mathit{G}\rightarrow\mathbb{B}$$ where *G* is a set of geometries. In the current version of EAGLE, several topological relations are supported: disjoint, intersect, contains, within. Following the qualitative spatial function definition, we then define a qualitative spatial expression, denoted by "se":$$< \mathit{se} > :: = \mathit{f}_{\mathit{s}}\left( \mathit{g}_{1},\mathit{g}_{2} \right)$$ where $g_{1},g_{2} \in G \cup V$ and *V* is a set of variables. A spatial built-in condition is then defined using qualitative spatial expressions and the logical connectives $\neg, \land , \vee$:

-   If $< \mathit{se} >$ is a qualitative spatial expression, then $< \mathit{se} >$ is a spatial built-in condition.
-   If $R_{1}$, $R_{2}$ are spatial built-in conditions, then ($\neg R_{1}$), ($R_{1} \vee R_{2}$), and ($R_{1} \land R_{2}$) are spatial built-in conditions.

6.2. Property Functions {#sec6dot2-sensors-19-04362}
-----------------------

In addition to spatial built-in conditions, we also define a set of spatio--temporal and full-text search property functions. By definition, a property function is an RDF predicate in a SPARQL query that causes triple matching to happen by executing some specific data processing other than the usual graph matching. Property functions must have a fixed URI for the predicate and cannot be query variables. The subject or object of these functions can be a list. In our query language support, the property functions are categorized into three types, depending on their purpose: spatial property functions, temporal property functions, and full-text search property functions. These three types of property functions are assigned different namespaces (\<geo:\>, \<temporal:\>, \<text:\>). Spatial and temporal property functions are defined as follows. Drawing upon the theoretical treatments of RDF in \[[@B43-sensors-19-04362]\], we assume the existence of pairwise-disjoint countably infinite sets *I*, *B* and *L* that contain IRIs, blank nodes and literals, respectively. *V* is a set of query variables. We denote the set of spatial property functions by $I_{SPro}$. Similarly, let $I_{TPro}$ be the set of temporal property functions. $I_{SPro}$, $I_{TPro}$ and *I* are also pairwise-disjoint. A triple that contains a spatial property function is defined in the following form:$$\left( I \cup B \right) \times I_{SPro} \times \left( I \cup L \cup V \right).$$ *The following is an example of a spatial property function, namely geo:sfWithin, which finds all the ?geo*~1~ *objects that are within ?geo*~2~*:* $${?\mathit{geo}_{1}}\mathbf{geo:sfWithin}{?\mathit{geo}_{2}}$$ Similarly, a triple containing a temporal property function is defined as follows:$$\left( I \cup B \right) \times I_{TPro} \times \left( I \cup L \cup V \right).$$ An example of a temporal property function is temporal:avg. For full-text search, we support only one property function, namely text:match. The usages of the property functions will be demonstrated via examples in [Section 6.3](#sec6dot3-sensors-19-04362){ref-type="sec"}.

6.3. Querying Linked Sensor Data by Examples {#sec6dot3-sensors-19-04362}
--------------------------------------------

This section presents the syntax of our proposed spatio--temporal SPARQL extensions through a series of examples involving linked sensor data. The dataset used throughout the examples is the linked meteorological data described later in [Section 7.1.2](#sec7dot1dot2-sensors-19-04362){ref-type="sec"}.
The namespaces used in the examples are listed in [Appendix A](#app1-sensors-19-04362){ref-type="app"}. *(Spatial built-in condition query). Return the IRIs and coordinates of all weather stations that locate in Dublin city. The query is shown in Listing 8.* ![](sensors-19-04362-i011.jpg) Let us now explain the query syntax by referring to the above example. Recall that our spatial query language adopts GeoSPARQL syntax, hence, all the GeoSPARQL prefixes, as well as its spatial datatypes, remain unchanged. As illustrated in the query, the spatial variables, ?cityWkt and ?stationWkt, can be used in basic graph patterns and refer to spatial literals. Note that, a spatial variable is an object of a spatial predicate in the triple pattern. In this example, the spatial predicate is geo:asWKT defined in \[[@B42-sensors-19-04362]\]. In addition to the basic graph patterns, the spatial variables are also used in the FILTER expression. Similarly to the spatial predicate, the spatial built-in condition in FILTER expression is also assigned with a unique namespace. The current version of EAGLE supports several topological spatial relations such as geo:sfWithin, geo:sfDisjoint, geo:sfIntersects, geo:sfContains. *(Spatial property function query). Given the latitude and longitude position, it retrieves the number of nearest weather stations that are located within 20 miles. The query is shown in Listing 9.* ![](sensors-19-04362-i012.jpg) The above query demonstrates the usage of geo:sfWithin property function. When this property function is called, a dedicated piece of code will be executed to find all the geometries locate within an area. The area is specified by these arguments (59.783 5.35 20 'miles'). For each spatial object that satisfies the spatial condition, its IRI is bound to the ?stationGeo variable that occurs in the triple representing the property function call. In addition to the default GeoSPARQL syntax of this function, we additionally extend its usage as follows:$$\begin{matrix} {{\mathbf{GeoSPARQL}~\mathbf{syntax}:}{< \mathit{feature}_{1} >}\mathit{geo:sfWithin}{< \mathit{feature}_{2} >}} \\ {{\mathbf{Our}~\mathbf{extension}:}{< \mathit{feature}_{1} >}\mathit{geo:sfWithin}~\left( {< \mathit{lat} >}{< \mathit{lon} >}{< \mathit{radius} >}\left\lbrack {< \mathit{units} >}\left\lbrack {< \mathit{limit} >} \right\rbrack \right\rbrack \right).} \\ \end{matrix}$$ [Table 2](#sensors-19-04362-t002){ref-type="table"} describes the list of spatial property functions that are currently supported in EAGLE. These functions allow the user to specify the query bounding box area by either using the \<geo\> parameter or using the concrete coordinates via \<lat\>, \<lon\>, \<latMin\>, etc. The \<geo\> parameter can be a spatial variable or a spatial RDF literal. Similarly, the \<units\> can be a unit URI or a string value. The supported distance units are presented in [Table 3](#sensors-19-04362-t003){ref-type="table"}. Finally, the \<limit\> parameter is to limit the number of results returned by the function. *(Temporal property function query). Return the list of air temperature observation values that are generated by the station \<got-res:WeatherStation/gu9gdbbysm_ish_1001099999\> from 10th to 15th March 2018. The query is shown in Listing 10.* ![](sensors-19-04362-i013.jpg) The above query demonstrates the example usage of one of our temporal property functions, called temporal:values. 
In this query, the property function temporal:values is called to retrieve all the temperature observation values that are generated within a specific time interval. Recall that the prefix \<temporal:\> is used to represent the temporal property functions. [Table 4](#sensors-19-04362-t004){ref-type="table"} lists all the supported temporal property functions and their syntax. The usages of these functions are demonstrated in the following examples. *(Analytical spatio--temporal query). Detection of all wind-speed observations in an area within 40 miles from the center of Ohio City during the time from 10 January to 10 February 2017. The Ohio City center coordinate is (40.417287 -82.907123). The query is shown in Listing 11.* The query above demonstrates the mix of spatial and temporal property functions. The query uses the spatial function, namely geo:sfWithin, to filter all weather stations located in the area (40.417287 -82.907123 40 'miles'). Additionally, it also retrieves the list of wind speed observation values generated by these stations within the time constraint. *(Analytical spatio--temporal query). Calculate the daily average wind speed at all weather stations located within 20 miles from London city center during the time from 10 to 15 March 2018. The query is shown in Listing 12.* ![](sensors-19-04362-i015.jpg) The query demonstrates a complex analytical spatio--temporal query. In this query, we first retrieve the London geometry data by querying the DBPedia dataset. After that, we use the spatial function, namely geo:sfWithin, to query all the stations located within 20 miles from London. In the temporal property function used in this query, we demonstrate the usage of the downsampler feature, indicated by the groupin keyword, and the downsampling aggregation function. Briefly, the downsampler is our additional temporal query feature which aims to simplify the data aggregation process and to reduce the resolution of the data. The data aggregation and data resolution are specified by the downsampling aggregation function, which is formed as \<time interval\>\_\<aggregation function\>. The \<time interval\> is specified in the format \<size\>\<units\>, such as 1 h or 30 m. The aggregation function is taken from the list (sum, average, count, min, max). For example, as illustrated in the query, the downsampling aggregation function is 1 d-avg. ![](sensors-19-04362-i014.jpg) An example usage of the downsampler can be described as follows. Let us say that a wind-speed sensor is feeding observation data every second. If a user queries for data over an hour-long time span, she would receive 3600 observation data points, something that could be graphed fairly easily in the result table. However, let us consider the case where the user asks for a full week of data. For that, she would receive 604,800 records, thus leading to a very big result table. Using a downsampler, multiple data points within a time range for a single time series are aggregated together with an aggregation function into a single value at an aligned timestamp. This way, the number of returned values can be reduced significantly. *(Analytical spatio--temporal query). Retrieve the weekly average temperature of area B, which has geohash "u0q", in March 2018. This query illustrates the usage of two optional arguments in the temporal property functions, namely geohash and observableProperty. The query is shown in Listing 13.* ![](sensors-19-04362-i016.jpg) *(Full-text search query).
Retrieve the total number of observations for each observed property of places that match a given keyword 'Cali'. The query is shown in Listing 14.* ![](sensors-19-04362-i017.jpg) The above query demonstrates the usage of the full-text search feature via the text:match property function. The text:match syntax is described as follows:$$< \mathit{subject} >~\mathit{text}{:}\mathit{match}~\left( {< \mathit{property} >}~{'\mathit{query}~\mathit{string}'}~{< \mathit{limit} >} \right)$$ In the text:match function syntax, the \<subject\> denotes the subject of the indexed RDF triple. It can be a variable or an IRI. The \<property\> is an IRI whose literal values are indexed, e.g., rdfs:label and geoname:parentCountry. The 'query string' is the query string fragment following the Lucene syntax (<https://lucene.apache.org/core/2_9_4/queryparsersyntax.html>). For example, the parameter 'Cali\*' selects all the literals that match the prefix "Cali". The optional limit restricts the number of literals returned. Note that this is different from the total number of results the query will return: when a limit is specified in the SPARQL query, it does not affect the full-text search; rather, it only restricts the size of the result set.

7. Experimental Evaluation {#sec7-sensors-19-04362}
==========================

In this section, we present a rigorous quantitative experimental evaluation of our EAGLE implementation. We divide the presentation of our evaluation into different sections. [Section 7.1](#sec7dot1-sensors-19-04362){ref-type="sec"} describes the experimental setup, which includes the platform and software used, the datasets, and the query descriptions. [Section 7.2](#sec7dot2-sensors-19-04362){ref-type="sec"} presents the experimental results. In this section, we compare the data loading throughput and query performance of EAGLE against Virtuoso, Apache Jena and GraphDB. We also discuss the performance differences in EAGLE when applying our data partitioning strategy, described in [Section 4.3](#sec4dot3-sensors-19-04362){ref-type="sec"}. Finally, we evaluate EAGLE's performance on a Google Cloud environment to demonstrate its elasticity and scalability as regards data loading and query performance. The strengths and weaknesses of the EAGLE engine are discussed in [Section 7.3](#sec7dot3-sensors-19-04362){ref-type="sec"}.

7.1. Experimental Settings {#sec7dot1-sensors-19-04362}
--------------------------

### 7.1.1. Platform and Software {#sec7dot1dot1-sensors-19-04362}

To demonstrate EAGLE's performance and scalability, we evaluate it on a physical setup and a cloud setup. It is worth mentioning that our physical setup is dedicated to a live deployment of our GraphOfThings application at <http://graphofthings.org>, which has been ingesting and serving data from more than 400,000 sensor data sources since June 2014. We compare EAGLE's performance against Apache Jena v3.12, Virtuoso v7 and GraphDB v8.9 (formerly the OWLIM store \[[@B27-sensors-19-04362]\]). Among them, Jena represents the state-of-the-art in terms of a native RDF store, Virtuoso is a widely used RDF store backed by an RDBMS, and GraphDB is a clustered RDF store that has recently added support for spatial querying. We deployed Apache Jena and Virtuoso v7 on a single machine with the same configuration as in our physical setup below. For EAGLE, we installed ElasticSearch v7 and OpenTSDB v2.3 for both the physical and cloud setups. Similarly, we also installed GraphDB v8.9 on all setups.
Physical setup: we deployed a physical cluster that consists of four servers running on the shared network backbone with 10 Gbps bandwidth. Each server has the following configuration: 2x E5-2609 V2 Intel Quad-Core Xeon 2.5GHz 10MB Cache, Hard Drive 3x 2TB Enterprise Class SAS2 6Gb/s 7200RPM - 3.5" on RAID 0, Memory 32GB 1600MHz DDR3 ECC Reg w/Parity DIMM Dual Rank. One server is dedicated as a front-end server and to coordinating the cluster, and the other three servers are used to store data and run as processing slaves. Cloud setup: the cloud setup was used to evaluate the elasticity and scalability of the EAGLE engine. We deployed a virtual cluster on Google Cloud. The configuration of the Google Cloud instances we use for all experiments is the "n1-standard-2" instance, i.e., 7.5 GB RAM, one virtual core with two Cloud Compute Units, 100 GB instance storage, and an Intel Ivy Bridge platform. In this evaluation, we focused more on showing how the system performance scales when increasing the number of processing nodes, rather than serving as a comparison of its performance with the physical cluster. ### 7.1.2. Datasets {#sec7dot1dot2-sensors-19-04362} Our experimental evaluations are conducted over the linked meteorological dataset which is described in \[[@B18-sensors-19-04362],[@B44-sensors-19-04362]\]. The dataset consists of more than 26,000 meteorological stations allocated around the world and covers various aspects of data distribution. The window of archived data is spread over 10 years, from 2008 to 2018. It has more than 3.7 billion sensor observation records which are represented in the SSN/SOSA observation triple layout (seven triples/records). Hence, the data contains approximately 26 billion triples if it is stored in a native RDF store. Additionally, in order to give a more practical overview of the engine, we evaluated it on even more realistic datasets, especially ones consisting of both spatial and text data. To meet such requirements, we select several datasets from GoT data sources \[[@B18-sensors-19-04362]\]. In particular, to evaluate the spatial data loading throughput, in addition to the sensor station location, we also import the transportation dataset which contains 360 million spatial records. These records were collected from 317,000 flights and 20,000 ships during the time 2015--2016. Similarly, for the text data loading evaluation, we import a Twitter dataset that consists of five million tweets. The detailed statistics of all the datasets used for our evaluations are listed in [Table 5](#sensors-19-04362-t005){ref-type="table"}. ### 7.1.3. Queries {#sec7dot1dot3-sensors-19-04362} We have selected a set of 11 queries that were performed over our evaluation datasets. In general, our queries aim to check the engine processing capability with respect to their provided features for querying linked sensor data. Because the standard SPARQL 1.1 language does not support spatio--temporal queries nor full-text search queries, some RDF stores have to extend the SPARQL language with their own specific syntax. Therefore, some of these queries need to be rewritten so they can be compatible with the engine under test. 
We summarize some highlighted features of the queries as follows: (i) if the query has an input parameter; (ii) if it requires geospatial search; (iii) if it uses a temporal filter; (iv) if it uses full-text search on string literals; (v) if it has a group-by feature; (vi) if the results need to be ordered via an order-by operator; (vii) if the results are using the limit operator; (viii) the number of variables in the query; and (ix) the number of triple patterns in the query. The group-by, order-by, and limit operators impact on the effectiveness of the query optimization techniques used by the engine (e.g., parallel unions, ordering or grouping using indexes, etc.), and the number of variables and triple patterns give a measure of query complexity. This summary of highlighted features and their SPARQL representations are described in [Appendix A](#app1-sensors-19-04362){ref-type="app"} and [Appendix B](#app2-sensors-19-04362){ref-type="app"}, respectively. 7.2. Experimental Results {#sec7dot2-sensors-19-04362} ------------------------- ### 7.2.1. Data Loading Performance {#sec7dot2dot1-sensors-19-04362} We evaluated EAGLE's performance with respect to data loading throughput on our physical setup and compared it to the state-of-the-art systems. Benchmark data were stored in files and imported via bulk loading. Unlike the general performance comparisons that only focus on triple data loading performance, we measure separately the loading performance of spatial, text and temporal data. The loading speed was calculated via the number of objects that can be indexed per second, instead of the number of triples. This evaluation helped us to have a better understanding of the indexing behavior of the test engines for specific types of data such as geospatial and text. #### Spatial Data Loading Performance With Respect to Dataset Size [Figure 13](#sensors-19-04362-f013){ref-type="fig"} depicts the average spatial data loading speed of the four evaluated RDF systems, with respect to various dataset sizes. The data loading time is shown in [Figure 14](#sensors-19-04362-f014){ref-type="fig"}. Overall, the results reveal that the increase in the data size can significantly affect the loading performance of all systems. Among them, Apache Jena has the worst performance. The average data loading speed is below 10,000 obj/s for all dataset sizes, slower than the other systems. Moreover, it takes almost two days (46.23 h) for loading 658 million spatial data objects. The data loading performance of EAGLE and GraphDB are very close, followed by Virtuoso. For example, EAGLE loads 658 million spatial objects in 7.74 h. Its average throughput is 23,620 obj/s. In the meantime, GraphDB is one hour behind, resulting in 8.72 h and the average speed is 20,960 obj/s. Virtuoso achieves a speed of 17,500 obj/s. The slower insert speed of Virtuoso and Jena can be explained by the limit of single data loading processes in these systems, which are deployed on a single machine. This sharply contrasts with the parallel data loading processes supported by the distributed back-end DBMS in EAGLE (ElasticSearch and OpenTSDB) and GraphDB. We also learned that in the beginning, EAGLE performs slightly behind GraphDB in the case of loading a small dataset (\<350 million). We hypothesize that this is due to several reasons such as load imbalance, increased I/O traffic and platform overheads in EAGLE. 
However, for loading larger datasets, this comparison result is reversed and the spatial data loading performance of GraphDB is slower than ours. This highlights the capabilities of our system for dealing with the "big data" nature of sensor data. #### Text Data Loading Performance With Respect to Dataset Size To evaluate the text data loading performance, we load the Twitter dataset that consists of five million tweets in the RDF format. The loading speed and loading time are reported in [Figure 15](#sensors-19-04362-f015){ref-type="fig"} and [Figure 16](#sensors-19-04362-f016){ref-type="fig"}, respectively. According to the results, EAGLE outperforms the other systems. We can see in [Figure 15](#sensors-19-04362-f015){ref-type="fig"} that its loading speed is just lightly affected by the data size increase. The highest speed EAGLE can reach is 11,800 obj/s for loading 0.64 million tweets. We attribute this to the outstanding performance of EAGLE's databases, namely ElasticSearch, which is originally a document-oriented database. In the case of loading the same data size, GraphDB is slower than EAGLE. Its average speed is 9700 obj/s. Virtuoso and Jena follow at two and five times slower than EAGLE, respectively. In comparison with the spatial data loading performance, the text data loading speed of EAGLE is much slower. This is reasonable because in order to index the text data, the system needs to analyze the text and break it into a set of sub-strings. Consequently, this requires more computation and resource consumption, and hence, increases the overall loading time. #### Temporal Data Loading Performance With Respect to Dataset Size We evaluated the temporal data loading performance by importing our 10 years of historical linked meteorological data. In this evaluation, we also measured the performance of EAGLE when disabling the spatio--temporal partitioning feature, denoted by EAGLE-NP. Instead of loading the entire temporal dataset, we terminated the loading process at 7.78 billion triples due to the long data loading time, and some of the evaluated systems stop responding. The results in [Figure 17](#sensors-19-04362-f017){ref-type="fig"} and [Figure 18](#sensors-19-04362-f018){ref-type="fig"} draw our attention to the performance of all systems when loading the small dataset. Regardless of the poor performance of Apache Jena, it is apparent that Virtuoso had better loading performance than EAGLE and GraphDB in the case of loaded data sizes under 100 million data points. A possible explanation for this phenomena is the communication latency of the distributed components in GraphDB and EAGLE. More precisely, in these distributed systems, the required time for loading data, plus the time for coordinating the cluster and the network latency are more than the data loading time in Virtuoso. Nevertheless, the difference is acceptable and we believe EAGLE is still applicable for interactive applications that only import a limited amount of data. Another interesting finding is that our system performs differently if the spatio--temporal partitioning strategy is disabled. In this case, the highest insert speed that EAGLE-NP can achieve is 30k obj/s. However, this speed drops dramatically with the growth of the imported data. Moreover, we also observe that EAGLE-NP stops responding when the data size reaches 3.72 billion records, as depicted in [Figure 17](#sensors-19-04362-f017){ref-type="fig"} and [Figure 18](#sensors-19-04362-f018){ref-type="fig"}. 
Looking at the system log files, we attribute this to a bottleneck in performance that happens with the OpenTSDB tsdb table. As previously explained in [Section 4.3](#sec4dot3-sensors-19-04362){ref-type="sec"}, if the spatio--temporal data partitioning is disabled, the tsdb table is not pre-split, thus, there is only one region of this table that is initialized. In this case, data are only inserted into this region. As a result, when the I/O disk writing speed cannot adapt to a large amount of fed data, the bottleneck phenomena happens. The efficiency of EAGLE is demonstrated when applying our proposed spatio--temporal data partitioning strategy. It is even more explicit in the case of loading a large dataset. As evidenced in [Figure 17](#sensors-19-04362-f017){ref-type="fig"}, unlike the others, the average insert speed of EAGLE almost remains horizontal when the number of data instances increases. In particular, the highest speed that EAGLE can reach is 55,000 obj/s, and there is no significant difference when the number of data points rises from 0.02 to 7.78 billion. Moreover, for loading 7.78 billion temporal triples, EAGLE took only 48.51 h. However, in the same case, GraphDB and Virtuoso need 106.97 h and 113.71 h, respectively. The better rank of EAGLE is attributed to our data partitioning strategy in which we pre-split the tsdb table into multiple data regions in advance of the data loading operation. Because each region is assigned with a range of geohash prefixes, data that has different geohash prefixes managed by different regional servers can be inserted in parallel, resulting in a significant increase in terms of data loading performance. However, when the amount of data stored in a pre-split region reaches the given threshold capacity, the region will be re-split automatically. Together with the splitting process, all related data has to be transferred and distributed again. This step will cause additional cost and will affect the system performance. This explains the slight fluctuation of our system insert speed in [Figure 17](#sensors-19-04362-f017){ref-type="fig"} during the data loading process. ### 7.2.2. Query Performance with Respect to Dataset Size {#sec7dot2dot2-sensors-19-04362} This experiment is designed to demonstrate the query performance of all evaluated systems with respect to different data aspects and dataset size. In this experiment, for each query, we measure the average query execution time by varying the dataset imported. In order to give a more detailed view on query performance, based on the query complexity, we group the test queries into several main categories: spatial query, temporal query, full-text search query, non-spatio--temporal query, and mixed query. These query categories are described in [Table 6](#sensors-19-04362-t006){ref-type="table"}. To conduct a precise performance comparison, we load different datasets that correspond to the query categories. For example, our spatial datasets are used to evaluate the spatial query performance while the sensor observation dataset is for queries that require a temporal filter. For the non-spatio--temporal queries, we use the static dataset that describes the sensor metadata. It is important to mention that our data partitioning approach is only applied for temporal data stored in OpenTSDB and does not explicitly affect the spatial and full-text search query performance in ElasticSearch. 
Therefore, in the experiments for spatial and full-text search query performance, the performances of EAGLE and EAGLE-NP are not differentiated.

#### Non-Spatio--Temporal Query Performance

We first evaluated the performance of the non-spatio--temporal queries, namely Q2 and Q11. These are standard SPARQL queries that only query the semantic aspect of the sensor data and involve neither spatio--temporal computation nor full-text search. The average query execution times are plotted in [Figure 19](#sensors-19-04362-f019){ref-type="fig"}. The results demonstrate the close performance of EAGLE and Apache Jena with regard to non-spatio--temporal queries. This is explained by the use of similar processing engines; in fact, the SPARQL query processing components in EAGLE are extended from Apache Jena ARQ with some modifications. Meanwhile, Virtuoso and GraphDB live up to their reputations for SPARQL query performance by being faster than EAGLE. However, the difference is still acceptable, on the order of milliseconds.

#### Spatial Query Performance

[Figure 20](#sensors-19-04362-f020){ref-type="fig"} depicts the query execution time of the spatial query, represented by Q1, with respect to varying spatial dataset sizes. According to the evaluation results, we find that Apache Jena performs poorly: its spatial query execution time increases linearly with the amount of loaded data. In contrast, Virtuoso, GraphDB, and EAGLE perform similarly and are only weakly influenced by the data size. GraphDB shows the best performance, followed by Virtuoso. Compared to these systems, EAGLE is slightly slower, only on the order of milliseconds. A possible reason could be the overhead of the join operation between the BGP matching and the spatial filter results. Note that parallel join operations are not yet supported in EAGLE and have to be performed locally in a single thread.

#### Full-Text Search Query Performance

In the following, we discuss the performance of the full-text search queries (Q8, Q9) for the tested systems. The evaluation results are reported in [Figure 21](#sensors-19-04362-f021){ref-type="fig"}. Despite the impressive query execution times of GraphDB and Virtuoso, which are generally less than 500 ms for both Q8 and Q9, EAGLE is still slightly faster. This is again thanks to the outstanding performance of ElasticSearch on full-text search queries. Note that although Apache Jena, GraphDB, and ElasticSearch all support full-text search through the use of Lucene, ElasticSearch is notable for its better optimization.

#### Temporal Query Performance

The temporal query performance is evaluated over our historical meteorological observation data. [Figure 22](#sensors-19-04362-f022){ref-type="fig"} presents the query execution time of all systems with respect to the observation dataset size. It is apparent that query performance is affected by an increase in the amount of data. For Apache Jena, along with a linear increase in the query execution time, we also notice that it is only able to run up to a certain amount of data. As we can see in the figure, it could reliably execute queries on datasets under 0.5 billion data points; however, when executing these queries on a dataset of over 1.31 billion data points, Apache Jena stops responding. The performances of EAGLE and EAGLE-NP are significantly different. For example, when executing over a dataset of 0.53 billion observations, if the spatio--temporal data partitioning strategy is not applied, the average execution time of Q6 in EAGLE-NP is 2147 ms.
Meanwhile, if the data partitioning strategy is enabled, EAGLE takes only 589 ms to execute the same query, making it almost four times faster. Furthermore, we also see better performance from EAGLE in comparison with Virtuoso and GraphDB. The explanations for this performance could be: (1) the effectiveness of our data partitioning strategy, which lets the engine quickly locate the required data partition and then organize the scans over a large number of data rows; and (2) the power of the OpenTSDB query functions that we rely on, especially for data aggregation.

#### Mixed Query Performance

Another aspect to be considered is the performance of the analytics-based queries that mix spatial and temporal computations or full-text search (Q4, Q7, Q10). For these queries, we increase the query timeout to 120 s due to their high complexity. The evaluation results are shown in [Figure 23](#sensors-19-04362-f023){ref-type="fig"}. Apache Jena times out for all queries when the loaded data size is over 0.5 billion. Another fact that can be clearly observed is that EAGLE is orders of magnitude faster than the others. This is demonstrated by the case of Q7. Note that this query involves heavy computation on both spatial and temporal data; additionally, it requires the results to be ordered by time. As shown in [Figure 23](#sensors-19-04362-f023){ref-type="fig"}b, for executing a query over the dataset with 3.08 billion records, EAGLE performs Q7 much faster (1420 ms), followed by GraphDB (7289 ms) and Virtuoso (10,455 ms). There can be several reasons for our impressive performance: (1) the effectiveness of the OpenTSDB time-series data structure, in which data are already sorted by time during the loading process; consequently, for a query with an order-by operator on date-time, the ordering cost is eliminated in EAGLE; and (2) the success of our data partitioning strategy and row-key design, which significantly reduce the time needed to locate the required data partition and scan the data.

### 7.2.3. Query Performance with Respect to Number of Clients {#sec7dot2dot3-sensors-19-04362}

This experiment is designed to test the concurrent processing capability of EAGLE in a scenario where the system has to deal with a high volume of queries sent from multiple users. Rather than serving as a comparison with other stores, this experiment only focuses on analyzing EAGLE's query processing behavior when receiving concurrent queries from multiple clients. The experiment is performed as follows. In the first step, a dedicated script is built to randomly select and send queries to the system; the query parameters are also randomly generated. In the second step, we perform measurement runs with 10, 100, 250, 500 and 1000 concurrent clients. Finally, for each query, the query execution times are summarized to compute the average value. [Figure 24](#sensors-19-04362-f024){ref-type="fig"} reports the evaluation results. In general, the execution time for all queries rises linearly as more clients are added. It can be clearly observed that, when the number of clients increases from 250 to 1000, the query execution time increases dramatically. Firstly, this is due to the growing workload applied to the system. Secondly, after a deeper analysis of the query cost breakdown, another possible reason is the inefficiency of our query plan cache mechanism.
According to our observations, the query cache only works for the non-spatio--temporal queries. For duplicated queries that share the same spatial, temporal, and full-text filters, the query optimizer has to generate a new query execution plan instead of reusing the cached one. For example, if 100 query instances of Q4 are sent from 100 clients, the query optimizer has to re-generate the query execution plan for Q4 100 times. Obviously, this leads to a dramatic increase in the total query execution time of Q4. As previously mentioned, EAGLE's query processing engine is implemented by extending the well-known Jena ARQ query engine, so its query cache is identical to the one in Jena ARQ. Unfortunately, the original cache was only developed for standard SPARQL queries and does not work for spatio--temporal queries. Moreover, we also learned that Jena ARQ's query cache does not work correctly with queries that share similar query patterns but different literals. This is also the case for our tested query patterns, in which the literals are randomly generated. We address this issue by proposing a novel learning approach for spatio--temporal query planning, which is described in \[[@B33-sensors-19-04362]\].

### 7.2.4. System Scalability {#sec7dot2dot4-sensors-19-04362}

In this experiment, we measure how EAGLE's performance scales when adding more nodes to the cluster. We vary the number of nodes in the Google Cloud cluster between 2, 4, 8, and 12 nodes. [Figure 25](#sensors-19-04362-f025){ref-type="fig"} presents the average loading throughput of spatial, temporal and text data as the number of nodes increases. The results reveal that indexing performance increases linearly with the size of the cluster. This is because scaling out the cluster makes the working data that needs to be indexed on each machine small enough to fit into main memory, which dramatically reduces the required disk I/O operations. In the following, we look at the query performance evaluation results shown in [Figure 26](#sensors-19-04362-f026){ref-type="fig"}. According to the results, the query execution times of Q2 and Q11 remain steady and are not affected by the cluster size. This is due to the fact that these are non-spatio--temporal queries and only query the static dataset. Recall that we store the static dataset on centralized storage (Apache Jena TDB), which is not scalable and is hosted on a single machine; queries on static data are only executed on this machine. Therefore, it is understandable that, for the non-spatio--temporal queries executed over the same dataset, scaling out the cluster has no effect on their performance. Unlike Q2 and Q11, the performance of the other queries scales well with the cluster size. The results indicate a considerable decrease in query execution time for the mixed queries (Q4, Q7, Q10), while the other queries show a slight decrease. A representative example of the mixed queries that demonstrates the scalability of EAGLE is Q4. This query requires heavy spatio--temporal computation over a large number of historical observation data items for a given year. However, as the cluster scales out, the amount of data processed for this query on each node is also reduced significantly.
This explains the rapid drop in the query execution time for Q4, from 1120 ms to 514 ms, when the cluster scales out from 2 to 12 nodes.

7.3. Discussion {#sec7dot3-sensors-19-04362}
---------------

We have presented an extensive quantitative evaluation of EAGLE's implementation and conducted a comparison with top-performing RDF stores running on a single node as well as a clustered RDF store. To conduct a precise performance comparison, we measured the loading performance of spatial, text, and temporal data separately. The experimental results show that EAGLE performs better than the other tested systems in terms of spatio--temporal and text data loading performance. For query performance, we have learned that EAGLE is highly efficient for queries that require heavy spatio--temporal computations on a large amount of historical data. However, it is slightly behind Virtuoso and GraphDB for non-spatio--temporal queries. This is understandable, as improving query performance on semantic data is not our main target. Another fact that should be highlighted is the effectiveness of our spatio--temporal partitioning strategy and OpenTSDB row-key scheme. The evaluation results show that EAGLE has outstanding performance when the partitioning strategy is applied, whereas without it the system performs poorly and stops responding at a certain dataset size. For the scalability test, EAGLE scales well with the cluster size. However, we also learned of some query planning issues that still exist in our system with respect to multiple concurrent queries. This challenge is separately addressed in our recent publication \[[@B33-sensors-19-04362]\].

8. Conclusions and Future Work {#sec8-sensors-19-04362}
==============================

This paper presented our solution, EAGLE, for scaling the processing pipelines of linked sensor data. The solution includes a system design, a spatio--temporal storage model, a query language proposal, and an extensive set of experiments. The architecture of our design is based on NoSQL technologies, such as OpenTSDB and ElasticSearch, so that we can leverage their scalable indexing and querying components tailored for document, time-series, and spatial data. Based on this architecture, we were able to isolate the I/O and processing bottlenecks with the storage model derived from spatio--temporal data patterns. Such patterns are the inputs that drive our data partitioning mechanism for enabling parallel writing and reading behaviors. This mechanism makes EAGLE scale better than other state-of-the-art systems, as shown in our various experiments. The experiments provide insightful quantitative figures on what the scalability issues of other systems are and how our solution overcomes them. Furthermore, the paper also proposed a query language dedicated to linked sensor data by consolidating recent proposals for enabling spatio--temporal query patterns in SPARQL. For future work, we intend to integrate a distributed triple store within EAGLE to handle larger non-spatio--temporal data partitions. We are looking into both commercial and open-source clustered RDF stores such as CumulusRDF \[[@B45-sensors-19-04362]\], AllegroGraph \[[@B46-sensors-19-04362]\], Blazegraph (<http://www.blazegraph.com/>), etc. Furthermore, we are implementing some query optimization algorithms to speed up query performance based on machine learning \[[@B33-sensors-19-04362]\].
Another feature that we want to add in the next version of EAGLE is enabling Allen's temporal relations by developing additional temporal index algorithms. Finally, to highlight the advantages of EAGLE, further evaluations, such as concurrent read/write loads and detailed system scalibility, will be performed. Furthermore, the comparison of EAGLE's performance with other well-known distributed triple stores such as GraphDB, Neo4j (<https://neo4j.com/>), etc., is also needed. Funding acquisition, M.S.; supervision, J.G.B. and D.L.-P.; writing---original draft, H.N.M.Q.; writing---review and editing, H.N.M.Q., M.S., H.M.N., J.G.B. and D.L.-P. This paper has been funded in part by the Science Foundation Ireland under grant numbers SFI/12/RC/2289_P2 and SFI/16/RC/3918 (co-funded by the European Regional Development Fund), the ACTIVAGE project under grant number 732679, and the Marie Skล‚odowska-Curie Programme SMARTER project under grant number 661180. The authors declare no conflict of interest. sensors-19-04362-t0A1_Table A1 ###### Benchmark query characteristics on linked meteorological data. Query Parametric Spatial Filter Temporal Filter Text Search Group By Order By LIMIT Num. Variables Num. Triple Patterns ------- ------------ ---------------- ----------------- ------------- ---------- ---------- ------- ---------------- ---------------------- 1 โœ“ โœ“ 3 3 2 โœ“ 3 4 3 โœ“ โœ“ 7 8 4 โœ“ โœ“ โœ“ โœ“ โœ“ โœ“ 7 8 5 โœ“ โœ“ โœ“ โœ“ 4 5 6 โœ“ โœ“ 5 8 7 โœ“ โœ“ โœ“ โœ“ โœ“ 7 8 8 โœ“ โœ“ 3 4 9 โœ“ โœ“ 6 7 10 โœ“ โœ“ โœ“ โœ“ 7 9 11 โœ“ 2 1 12 โœ“ โœ“ 13 โœ“ โœ“ ![](sensors-19-04362-i018.jpg) ![](sensors-19-04362-i019.jpg) ![](sensors-19-04362-i020.jpg) ![](sensors-19-04362-i021.jpg) ![](sensors-19-04362-i022.jpg) ![](sensors-19-04362-i023.jpg) ![](sensors-19-04362-i024.jpg) ![](sensors-19-04362-i025.jpg) ![](sensors-19-04362-i026.jpg) ![](sensors-19-04362-i027.jpg) ![](sensors-19-04362-i028.jpg) ![](sensors-19-04362-i029.jpg) ![EAGLE's architecture.](sensors-19-04362-g001){#sensors-19-04362-f001} ![Transform spatial and text sub-graphs to ElasticSearch documents.](sensors-19-04362-g002){#sensors-19-04362-f002} ![OpenTSDB architecture.](sensors-19-04362-g003){#sensors-19-04362-f003} ![OpenTSDB *tsdb-uid* table.](sensors-19-04362-g004){#sensors-19-04362-f004} ![OpenTSDB tsdb table.](sensors-19-04362-g005){#sensors-19-04362-f005} ![OpenTSDB rowkey design for storing observation data.](sensors-19-04362-g006){#sensors-19-04362-f006} ![HBase tables.](sensors-19-04362-g007){#sensors-19-04362-f007} ![HBase table splitting.](sensors-19-04362-g008){#sensors-19-04362-f008} ![Spatio--temporal data partitioning strategy.](sensors-19-04362-g009){#sensors-19-04362-f009} ![An example of temporal information extraction process.](sensors-19-04362-g010){#sensors-19-04362-f010} ![TSDB put example.](sensors-19-04362-g011){#sensors-19-04362-f011} ![Delegating the evaluation nodes to different backend repositories.](sensors-19-04362-g012){#sensors-19-04362-f012} ![Average spatial data loading throughput.](sensors-19-04362-g013){#sensors-19-04362-f013} ![Spatial data loading time.](sensors-19-04362-g014){#sensors-19-04362-f014} ![Average full-text indexing throughput.](sensors-19-04362-g015){#sensors-19-04362-f015} ![Text data loading time.](sensors-19-04362-g016){#sensors-19-04362-f016} ![Average temporal indexing throughput.](sensors-19-04362-g017){#sensors-19-04362-f017} ![Temporal data loading time.](sensors-19-04362-g018){#sensors-19-04362-f018} ![Non spatio--temporal query execution time with respect to dataset 
size.](sensors-19-04362-g019){#sensors-19-04362-f019} ![Q1 execution time with respect to spatial dataset size (in logscale).](sensors-19-04362-g020){#sensors-19-04362-f020} ![Text-search query execution times with respect to dataset size.](sensors-19-04362-g021){#sensors-19-04362-f021} ![Temporal queries execution time with respect to dataset size (in logscale).](sensors-19-04362-g022){#sensors-19-04362-f022} ![Mixed queries execution time with respect to dataset size (in logscale).](sensors-19-04362-g023){#sensors-19-04362-f023} ![Average query execution time with respect to number of clients.](sensors-19-04362-g024){#sensors-19-04362-f024} ![Average index throughput by varying number of cluster nodes.](sensors-19-04362-g025){#sensors-19-04362-f025} ![Average query execution time by varying number of cluster nodes.](sensors-19-04362-g026){#sensors-19-04362-f026} sensors-19-04362-t001_Table 1 ###### OpenTSDB row key format. Element Name Size ---------------- --------- Metric UID 3 bytes Base-timestamp 4 bytes Tag names 3 bytes Tag values 3 bytes \... \... sensors-19-04362-t002_Table 2 ###### Spatial property functions. ------------------------------------------------------------------------------------------------------- Spatial Function Description ------------------------------------------------------------- ----------------------------------------- \<feature\> **geo:sfIntersects** (\<geo\> \|\ Find features that intersect the\ \<latMin\> \<lonMin\> \<latMax\> \<lonMax\> \[ \<limit\>\]) provided box, up to the limit. \<feature\> **geo:sfDisjoint** (\<geo\> \|\ Find features that intersect the\ \<latMin\> \<lonMin\> \<latMax\> \<lonMax\> \[ \<limit\>\]) provided box, up to the limit. \<feature\> **geo:sfWithin** (\<geo\> \|\ Find features that are within radius\ \<lat\> \<lon\> \<radius\> \[ \<units\> \[ \<limit\>\]\]) of the distance units, up to the limit. \<feature\> **geo:sfContains** \<geo\> \|\ Find features that contains the\ \<latMin\> \<lonMin\> \<latMax\> \<lonMax\> \[ \<limit\>\]) provided box, up to the limit. ------------------------------------------------------------------------------------------------------- sensors-19-04362-t003_Table 3 ###### Supported units. URI Description ------------------------------------ ------------- units:kilometre or units:kilometer Kilometres units:metre or units:meter Metres units:mile or units:statuteMile Miles units:degree Degrees units:radian Radians sensors-19-04362-t004_Table 4 ###### Temporal property functions. -------------------------------------------------------------------------------------------------------------- Temporal Function Description -------------------------------------------------------- ----------------------------------------------------- ?value **temporal:sum** (\<startTime\> \<endTime\>\ Calculates the sum of all reading data points\ \[\<'groupin' down sampling function\>\ from all of the time series or within the time\ \<geohash prefix\> \<observableProperty\>\]) span if down sampling. 
?value **temporal:avg** (\<startTime\> \<endTime\>\ Calculates the average of all observation values\ \[\<'groupin' down sampling function\>\ across the time span or across multiple time series \<geohash prefix\> \<observableProperty\>\]) ?value **temporal:min** (\<startTime\> \<endTime\>\ Returns the smallest observation value from\ \[\<'groupin' down sampling function\>\ all of the time series or within the time span \<geohash prefix\> \<observableProperty\>\]) ?value **temporal:max** (\<startTime\> \<endTime\>\ Returns the largest observation value from\ \[\<'groupin' down sampling function\>\ all of the time series or within a time span \<geohash prefix\> \<observableProperty\>\]) ?value **temporal:values** (\<startTime\> \<endTime\>\ List all observation values from all of the\ \[\<'groupin' down sampling function\>\ time series or within the time span \<geohash prefix\> \<observableProperty\>\]) -------------------------------------------------------------------------------------------------------------- sensors-19-04362-t005_Table 5 ###### Dataset. Sources Sensing Objects Historical Data Archived Window ---------------- ----------------- ----------------- ----------------- Meteorological 26,000 3.7 B since 2008 Flight 317,000 317 M 2014--2015 Ship 20,000 51 M 2015--2016 Twitter -- 5 M 2014--2015 sensors-19-04362-t006_Table 6 ###### Categorizing queries based on their complexity. Category Non Spatio--Temporal Query Spatial Query Temporal Query Full-Text Search Query Mixed Query ----------- ---------------------------- --------------- ---------------- ------------------------ ----------------- **Query** Q2, Q11 Q1 Q5, Q6 Q8, Q9 Q3, Q4, Q7, Q10 [^1]: This paper is an extension version of the conference paper: Nguyen Mau Quoc, H; Le Phuoc, D.: "An elastic and scalable spatiotemporal query processing for linked sensor data", in proceedings of the 11th International Conference on Semantic Systems, Vienna, Austria, 16--17 September 2015.
--- -api-id: P:Windows.Storage.ApplicationData.SharedLocalFolder -api-type: winrt property --- <!-- Property syntax public Windows.Storage.StorageFolder SharedLocalFolder { get; } --> # Windows.Storage.ApplicationData.SharedLocalFolder ## -description Gets the root folder in the shared app data store. ## -property-value The file system folder that contains files. ## -remarks ### Accessing SharedLocalFolder SharedLocalFolder is only available if the device has the appropriate group policy. If the group policy is not enabled, the device administrator must enable it. From Local Group Policy Editor, navigate to Computer Configuration\Administrative Templates\Windows Components\App Package Deployment, then change the setting "Allow a Windows app to share application data between users" to "Enabled." After the group policy is enabled, SharedLocalFolder can be accessed. ## -examples ## -see-also
I was stunned to read this from Time Magazine:

The Centers for Disease Control and Prevention reported in May that births to unmarried women have reached an astonishing 39.7%. How much does this matter? More than words can say. There is no other single force causing as much measurable hardship and human misery in this country as the collapse of marriage. It hurts children, it reduces mothers' financial security, and it has landed with particular devastation on those who can bear it least: the nation's underclass.

I checked it out at the CDC; it isn't a typo:

The trend in unmarried childbearing was fairly stable from the mid-1990s to 2002, but has shown a steep increase between 2002 and 2007. Between 1980 and 2007, the proportion of births to unmarried women in the United States has more than doubled, from 18 percent to 40 percent. Iceland (66 percent), Sweden (55 percent), Norway (54 percent), France (50 percent), Denmark (46 percent) and the United Kingdom (44 percent) all have higher proportions of births to unmarried mothers than the United States. Ireland (33 percent), Germany and Canada (30 percent), Spain (28 percent), Italy (21 percent) and Japan (2 percent) have lower percentages than the United States.

Another tidbit: The highest rates of out of wedlock births are in D.C. (59%) and Mississippi (54%) and the lowest rate is in Utah (20%).

The new equilibrium we are moving toward seems a very different world. Women free to pick a dad without expecting him to stay as a long term helper probably pick sexier men. This should create more inequality in male access to women for sex and kids, and give men more free time to compete to be the few super-sexy super-dads. Women would get to have kids fathered by sexier men, but at the expense of raising those kids with less male help. More men would be sex-failures with more free time to pursue long-shot plans to reverse their fortunes, and without wives to moderate them. How many of those plans will be peaceful? I guess this helps somewhat to explain the explicitly sex-aggressive men I see more of these days.

When I wrote:

If you don't signal your continued love she may well conclude that your love has in fact changed.

"Master Dogen" responded:

Hanson ... seems to be thoroughly trained in thinking that the best way to long-term health in a relationship with a woman is to signal "caring more than everyone else" and "giving gifts," etc. This, of course, is the constant position of a supplicant. ... I advocate a very different way of dealing with a woman ... So let's assume you are an alpha, and you've trained your woman to supplicate you rather than the other way around. ... You must continue signaling your dominance: gently pull her hair when you go in for a kiss, raise you voice sternly when she steps out of line, flirt shamelessly with other women in public.

I might not like it, but I can't argue that the future doesn't hold a lot more of this.
Good Boy Gone Bad

Liam was a good boy until he met a girl. She was bad and wanted Liam all for herself. Her name was Rachel. She went to a concert and fell into Liam and they started hanging out. Liam got worse every day, going out to parties and having fun, and getting arrested a lot. His best mate (Harry) and Harry's wife (Sarah) told him that he was changing a lot and needed to stop hanging out with Rachel, but Liam didn't listen because he loved her! Read on to find out what happens...
Vinylglycine (2-aminobut-3-enoic acid) (Berkowitz et al., Tetrahedron: Asym. (2006) 12: 869) is a natural, non-protein α-amino acid and irreversible inhibitor of enzymes that use pyridoxal phosphate (PLP) as a cofactor, such as alanine racemase, aspartate aminotransferase, and α-ketoglutarate dehydrogenase (Lacoste et al., (1988) Biochem. Soc. Trans. 16: 606; Rando R. R. (1974) Biochemistry 13: 3859; Lai & Cooper (1986) J. Neurochem. 47: 1376). Because vinylglycine acts as a suicide substrate, research has centered on identifying additional natural and synthetic β,γ-olefinic amino acids capable of selectively deactivating enzymes. In addition, protected forms of vinylglycine have been useful in the synthesis of metabotropic glutamate receptor agonists (Selvam et al., (2007) J. Med. Chem. 50: 4656), poly-γ-glutamate synthetase inhibitors (Valiaeva et al., (2001) J. Org. Chem. 66: 5146), and the antitumor antibiotic (+)-FR900482 (Paleo et al., (2003) J. Org. Chem. 68: 130). Traditionally, the methods of choice to prepare L-vinylglycine have been the pyrolysis of protected methionine sulfoxide (MetO) (Afzali-Ardakani & Rapoport (1980) J. Org. Chem. 45: 4817) and the thermolysis of aryl selenoxides obtained from either protected L-glutamate (Hanessian & Sahoo (1984) Tetrahedron Lett. 25: 1425), L-homoserine (Pellicciari et al. (1988) Synth. Commun. 69: 7982), or L-homoserine lactone (Berkowitz & Smith (1996) Synthesis 39). For multi-gram syntheses, the MetO pyrolysis approach is most commonly implemented. However, due to the high vacuum (≦3 mm Hg) and temperature (>150° C.) requirements, isomerization is a consistent problem for the reaction. The migratory occurrence to the more thermally stable β-methyldehydroalanine is further enhanced by the acidity of the α-proton in N,O-protected forms of vinylglycine. The isomer forms quantitatively in the presence of triethylamine or N-methylmorpholine (Afzali-Ardakani & Rapoport (1980) J. Org. Chem. 45: 4817), and it is likewise believed that decomposition during silica purification contributes to an optimized yield of 60% (Carrasco et al., (1992) Org. Synth. 70: 29). Because of the difficulty of isolating the α,β-isomer from protected vinylglycines by chromatography, and the desire to find a non-pyrolytic large-scale approach, it is desirable to identify alternative sulfinyl substituents that would syn-eliminate at temperatures below 150° C.
Prince Henri of Orléans

Prince Henri of Orléans (16 October 1867 – 9 August 1901) was the son of Prince Robert, Duke of Chartres, and Princess Françoise of Orléans.

Biography

Henri, the second eldest son and third child of Prince Robert, Duke of Chartres, was born at Ham, London, on 16 October 1867.

In 1889, at the instance of his father, who paid the expenses of the tour, he undertook, in company with Gabriel Bonvalot and Father Constant de Deken (1852-1896), a journey through Siberia to French Indochina. In the course of their travels they crossed the mountain range of Tibet, and the fruits of their observations, submitted to the Geographical Society of Paris (and later incorporated in De Paris au Tonkin à travers le Tibet inconnu, published in 1892), brought them conjointly the gold medal of that society.

In 1892 the prince made a short journey of exploration in East Africa, and shortly afterwards visited Madagascar, proceeding thence to Tonkin, in today's Vietnam. In April 1892 he visited Luang Prabang in Laos, which prompted him to write a letter to "Politique Coloniale" in January 1893. From this point he set out for Assam, and was successful in discovering the source of the Irrawaddy River, a brilliant geographical achievement which secured him the medal of the Geographical Society of Paris and the Cross of the Legion of Honour. In 1897 he revisited Abyssinia, and political differences arising from this trip led to a duel with Vittorio Emanuele, Count of Turin.

While on a trip to Assam in 1901, he died at Saigon on the 9th of August.

Prince Henri was a somewhat violent Anglophobe, and his diatribes against Great Britain contrasted rather curiously with the cordial reception which his position as a traveller obtained for him in London, where he was given the gold medal of the Royal Geographical Society.

Duel

In 1897, in several articles for Le Figaro, Prince Henri described the Italian soldiers being held captive in Ethiopia during the First Italo–Ethiopian War as cowards. Prince Vittorio Emanuele thus challenged him to a duel. The sword was agreed upon as the weapon of choice, as the Italians thought that a duel with pistols, favored by the French, was worthy of betrayed husbands, not of princes of royal blood.

The duel with swords, which lasted 26 minutes, took place at 5:00 am on 15 August 1897, in the Bois de Marechaux at Vaucresson, France. Vittorio Emanuele defeated Prince Henri after 5 reprises. "Monseigneur" Henri received a serious wound to his right abdomen, and the doctors of both parties considered the injury serious enough to put him in a state of obvious inferiority, causing the end of the duel and making the Count of Turin famous in Europe.

In popular culture

Literature
Race to Tibet by Sophie Schiller (2015)

Ancestry

Notes

References

Further reading

Category:House of Orléans
Category:People from Ham, London
Category:Princes of France (Orléans)
Category:French explorers
Category:1867 births
Category:1901 deaths
Category:Duellists
Texas Monthly

You have received the following link from pfreyre@coral-energy.com:

Click the following to access the sent link: Texas Monthly November 2001: How Enron Blew It

Please note, the sender's email address has not been verified.
Does Nutrisystem Diet Work? My Review – My Story

They suggested I change things up a bit.

Nutrisystem after 8 weeks

By week ten, I was still enjoying all the food and health benefits of Nutrisystem. They can be judgmental as well. The day after that Easter, I was nervous about stepping on the scales. I headed to my local Walmart and purchased a Nutrisystem five day weight loss kit which included fifteen entrees and five desserts. I don't find myself eating because I'm bored anymore. I find that I want to eat only very small portions. The only thing that has been difficult is getting used to my new appetite. I'm so excited how well Nutrisystem works that I want to share it with everyone. I lost 50 lb and have gone from a size 12 to a size 6! Read my review.
Despite three Lakers on the roster, the D-Fenders fell on Sunday night in El Segundo to the Austin Spurs, 116-105.

Earlier in the day, the Lakers assigned Ryan Kelly, Tarik Black and Anthony Brown to the team's NBA Development League affiliate. None are regular rotation players for the Lakers.

Kelly led all scorers with 34 points, hitting 10 of 18 shots and 13 of 17 free throws. He also had nine rebounds, four assists and three steals. Black reached a double-double with 19 points and 11 rebounds, but fouled out with five turnovers. Brown scored 10 with five points and four assists, hitting just four of 15 from the field.

Guard Michael Frazier, who went through training camp with the Lakers, scored 13 off the bench. Robert Upshaw, the undrafted center who was cut before the start of the season, did not play because of a coach's decision.

The Spurs, the affiliate of the San Antonio Spurs, were led by Keifer Sykes and reserve Cady Lalanne, who both scored 27 points.

The D-Fenders were only out-scored by a single point with Kelly on the floor. The Spurs generated a 10-point advantage over the 10 minutes Kelly was on the bench.

With the D-Fenders off until Friday, the trio of Lakers are likely to be recalled in time to host the Milwaukee Bucks on Tuesday at Staples Center.

Email Eric Pincus at eric.pincus@gmail.com and follow him on Twitter @EricPincus
Q: Non Linear Diophantine Equation in Three Variables

Find all positive integer solutions to $abc-2=a+b+c$.

A: None of the variables can be greater than $4$, and at least two of them have to be greater than $1$. Also, by looking at the equation modulo 2, you cannot have exactly one of them being odd. Finally, this equation is symmetric in its variables, which means I just have to say which three numbers I pick, not which one I assign to which variable. A brute force tactic doesn't take too long from here.

First we go with all the variables being odd. There are only two cases conforming to the list of demands in the last paragraph, and those are $(3, 3, 1)$ and $(3, 3, 3)$. The first set does solve the equation, the second one doesn't.

Next is exactly two of them being odd. Setting the even number to $4$ makes the left side too large, so we need the even number to be $2$. The only sets left are then $(2, 3, 1)$ and $(2, 3, 3)$, neither of which solves the equation.

Lastly, the case of all the variables being even. There are four of these sets, depending on how many $2$s and $4$s we use. The sets $(4, 4, 4)$, $(4, 4, 2)$ and $(4, 2, 2)$ don't lead to a solution, but the set $(2, 2, 2)$ does.

All in all we have 2 sets of solution values, which can be distributed over the variables in a total of four different ways.

Edit It has been made known to me, thanks to Ross Pure, that I was a bit hasty to exclude the case of one variable being equal to $5$. So there is a set $(5, 2, 1)$, which generates another 6 solutions, depending on which number you give to which variable.
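Combining the answer with its edit, the full solution set can be checked directly:
$$3\cdot 3\cdot 1 - 2 = 7 = 3+3+1,\qquad 2\cdot 2\cdot 2 - 2 = 6 = 2+2+2,\qquad 5\cdot 2\cdot 1 - 2 = 8 = 5+2+1,$$
so the unordered solutions are $\{3,3,1\}$, $\{2,2,2\}$ and $\{5,2,1\}$, giving $3+1+6=10$ ordered triples in total.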
Q: safety not guaranteed in C buffer calloc or malloc within a function? This is a question about how to have "belt and suspenders" safety within a simple bit of C code. The old and somewhat beaten to death issue being how to ensure that one may move data into a buffer within some called function without worry that the heap memory is corrupted after return. There have been a few great things written on this site about the topic and, at least for me, it still isn't clear where we get real total safety. So I wrote the following : /********************************************************************* * The Open Group Base Specifications Issue 6 * IEEE Std 1003.1, 2004 Edition *********************************************************************/ #define _XOPEN_SOURCE 600 #include <ctype.h> #include <errno.h> #include <locale.h> #include <stddef.h> #include <stdint.h> #include <stdio.h> #include <stdlib.h> #include <string.h> int use_buffer( const char *strin, char **strout, size_t bufsize ) { size_t len; int result = -1; printf ( "in use_buffer() we have address of strout = %p\n", &strout ); printf ( " and that contains an address of %p\n", strout ); printf ( " which points to a buffer address %p\n", *strout ); /* check for null data */ if ( ( strin == NULL ) || ( *strout == NULL ) ) return result; /* check for zero length data */ if ( strlen(strin) == 0 ) return result; /* ensure we have a non-zero size buffer to write to ? * belt and suspenders safety here is not assured. We have * no way to know if the calling routine actually did * allocate memory of size bufsize. */ len = strlen(strin); if ( bufsize < len ) return result; strncpy ( *strout, strin, len ); return len; } int main ( int argc, char *argv[] ) { char *some_buffer; int retval; size_t buflen; if ( argc < 2 ) { printf ( "usage: %s somestring\n", argv[0] ); return ( EXIT_FAILURE ); } buflen = (size_t) ( 4 * 4096 ); some_buffer = calloc( buflen, sizeof( unsigned char) ); if ( some_buffer == NULL ) { perror ( "Could not calloc a 16Kb byte buffer." ); return ( EXIT_FAILURE ); } printf ( "main() has a 16Kb buffer ready at address = %p\n", &some_buffer ); retval = use_buffer( argv[1], &some_buffer, buflen ); if ( retval > 0 ) printf ( "Maybe we have %i bytes copied into a buffer.\n", retval ); free ( some_buffer ); some_buffer = NULL; /* belt and suspenders */ return ( EXIT_SUCCESS ); } Compile and run that and I see this : $ ./use_buffer "foo of the bar" main() has a 16Kb buffer ready at address = ffffffff7ffff710 in use_buffer() we have address of strout = ffffffff7ffff630 and that contains an address of ffffffff7ffff710 which points to a buffer address 100101440 Maybe we have 14 bytes copied into a buffer. One of those addresses really does not look right. One of those is just not the same. Really, it is hard to know why the first three addresses are way off wildly in some other memory region whereas the last one looks to be some local heap memory perhaps? Is the above method absolutely belt and suspenders safe? I mean that the function is going to be working with a buffer that we know has been pre-allocated and that there is no way for the function to screw it up. I doubt that the function can call free() on the adress stored within strout. That would be like checking into a hotel with a room key and then setting fire to the room. While standing in it. I guess it could be done .. but would be crazy. So there are two questions here : (1) is there a way for the function to verify the allocated buffer size? 
Even if the method is to trigger a memory violation. And then (2) is there any safety in passing a null pointer to the function and then allowing the function to calloc/malloc the buffer as required and pass back the address ? I suspect that (2) has been beaten to death and the answer is "safety not guaranteed". ( sidenote : damn good movie by the way. ) Consider this code bit : /********************************************************************* * The Open Group Base Specifications Issue 6 * IEEE Std 1003.1, 2004 Edition *********************************************************************/ #define _XOPEN_SOURCE 600 #include <ctype.h> #include <errno.h> #include <locale.h> #include <stddef.h> #include <stdint.h> #include <stdio.h> #include <stdlib.h> #include <string.h> int bad_buffer( const char *strin, char **strout ) { size_t len; int result = -1; char *local_buffer; /* check for null data */ if ( strin == NULL ) return result; /* check for zero length data */ len = strlen(strin); if ( len == 0 ) return result; /* * safety not guaranteed ? */ local_buffer = calloc( len+1, sizeof( unsigned char) ); printf ( " in bad_buffer() we have local_buffer at = %p\n", &local_buffer ); strncpy ( local_buffer, strin, len ); *strout = local_buffer; return len; } int main ( int argc, char *argv[] ) { char *some_buffer; int retval; size_t buflen; if ( argc < 2 ) { printf ( "usage: %s somestring\n", argv[0] ); return ( EXIT_FAILURE ); } retval = bad_buffer( argv[1], &some_buffer ); printf ( "in main() we now have some_buffer at addr %p\n", some_buffer ); if ( retval > 0 ) printf ( "Maybe we have %i bytes copied into a buffer.\n", retval ); printf ( "main() says the buffer contains \"%s\"\n", some_buffer ); free ( some_buffer ); /* really ? main() did not allocate this !? */ some_buffer = NULL; return ( EXIT_SUCCESS ); } when I compile and run that I see : $ ./bad_buffer foobar in bad_buffer() we have local_buffer at = ffffffff7ffff6b0 in main() we now have some_buffer at addr 100101330 Maybe we have 6 bytes copied into a buffer. main() says the buffer contains "foobar" Something seems spooky here. The function did the calloc and then stuffed the address of the buffer into the address within strout. So strout was a pointer to a pointer and so I am fine with that. What scares me is that the memory that was allocated by the function has no right or reason to be considered safe after it is done and we are back in main(). So question number (2) stands as "is there any safety in allowing the function to calloc/malloc the buffer needed?" A: One of those addresses really does not look right. One of those is just not the same. Really, it is hard to know why the first three addresses are way off wildly in some other memory region whereas the last one looks to be some local heap memory perhaps? some_buffer (or its alias strout) is a local variable stored in the stack of main and is pointing to an address in the heap. So they are addresses of different memory areas A: $ ./use_buffer "foo of the bar" main() has a 16Kb buffer ready at address = ffffffff7ffff710 in use_buffer() we have address of strout = ffffffff7ffff630 and that contains an address of ffffffff7ffff710 which points to a buffer address 100101440 Maybe we have 14 bytes copied into a buffer. Here 3 first values are in stack. Looking at your code, the middle one is actually address of the local argument at that function, while 1st and 3rd are addresses of local variables in main(). 
Note how the stack grows down, so it is positioned at a high address, and the called function's arguments sit lower than the variables of the calling function. The 4th value is a different region entirely: it is the address returned by calloc for your 16 KB buffer, i.e., heap memory, which is why it looks nothing like the first three.

$ ./bad_buffer foobar
 in bad_buffer() we have local_buffer at = ffffffff7ffff6b0
in main() we now have some_buffer at addr 100101330
Maybe we have 6 bytes copied into a buffer.
main() says the buffer contains "foobar"

And here again, the first address is on the stack (the address of a local variable in the function), while the 2nd address is memory in the heap, the value of a pointer in main(). And yes, at a glance your code is safe.

You seem to be confusing the pointer value with the address of the pointer variable. Consider this:

char *p1 = malloc(10);
char *p2 = p1;
printf("%p %p %p %p\n", &p1, &p2, p1, p2);

Above will print something like

ffffffff7ffff710 ffffffff7ffff702 100101440 100101440

The first is the address of variable p1, which is a local variable here, on the stack. The 2nd is the address of p2, also a local variable on the stack. The last two are the return value of the malloc call, assigned to p1 and copied to p2. That allocated block will remain until you free it, no matter how many times you pass the address around, or even if you lose the address (in which case you have a memory leak). When you free it, any pointers which still point to that area become dangling pointers, and should not be dereferenced.
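To make that last point concrete, here is a small self-contained sketch (reusing the p1/p2 names from above; it is illustrative only, not part of the original programs):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *p1 = malloc(10);   /* one heap block...            */
    char *p2 = p1;           /* ...but two pointers to it    */

    if (p1 == NULL)
        return EXIT_FAILURE;

    strcpy(p1, "foobar");
    printf("%p %p -> %s\n", (void *)p1, (void *)p2, p2);

    free(p2);                /* releases the block itself; both p1 and p2
                                now dangle, so neither may be dereferenced
                                or passed to free() again                  */
    p1 = NULL;               /* belt and suspenders, as in the question    */
    p2 = NULL;

    return EXIT_SUCCESS;
}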
Education for underprivileged girls

In India, 1 out of every 6 girls will not reach their 15th birthday. 50% of girls in India drop out before reaching grade 9.

That is why we at One Life to Love identify the most needy girls, encourage their parents to continue sending them to school, and sponsor all their education expenses. Today OL2L sponsors the education of 400 underprivileged children.

"Before One Life to Love workers visited our home, we had planned to get our daughter engaged to be married to a man from this slum as we had no other option for her. One Life to Love has given our daughter an opportunity that we could not provide. We have high hopes for our daughter's future now." - Harshish, 48-year-old rickshaw puller
/* * @BEGIN LICENSE * * Psi4: an open-source quantum chemistry software package * * Copyright (c) 2007-2019 The Psi4 Developers. * * The copyrights for code used from other parties are included in * the corresponding files. * * This file is part of Psi4. * * Psi4 is free software; you can redistribute it and/or modify * it under the terms of the GNU Lesser General Public License as published by * the Free Software Foundation, version 3. * * Psi4 is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU Lesser General Public License for more details. * * You should have received a copy of the GNU Lesser General Public License along * with Psi4; if not, write to the Free Software Foundation, Inc., * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. * * @END LICENSE */ /*! \file \ingroup DETCI \brief Enter brief description of file here */ /* ** PRINTING.C ** ** File contains routines associated with printing CI space, vectors, etc. ** ** C. David Sherrill ** Center for Computational Quantum Chemistry ** University of Georgia */ #include <cstdio> #include <cmath> #include <cstring> #include <sstream> #include <cctype> // for toupper() #include "psi4/libciomr/libciomr.h" #include "psi4/libqt/qt.h" #include "psi4/detci/structs.h" #include "psi4/detci/ciwave.h" namespace psi { namespace detci { #define FLAG_NONBLOCKS #define MIN_COEFF 1.0E-13 std::string orb2lbl(int orbnum, struct calcinfo *Cinfo, int *orbs_per_irr); extern int str_rel2abs(int relidx, int listnum, struct olsen_graph *Graph); /* ** PRINT_VEC() ** ** Print the Most Important Determinants in the CI vector ** David Sherrill, February 1995 */ void CIWavefunction::print_vec(size_t nprint, int *Ialist, int *Iblist, int *Iaidx, int *Ibidx, double *coeff) { int Ia_abs, Ib_abs; /* print out the list of most important determinants */ outfile->Printf("\n The %zu most important determinants:\n\n", nprint); for (size_t i = 0; i < nprint; i++) { if (std::fabs(coeff[i]) < MIN_COEFF) continue; Ia_abs = str_rel2abs(Iaidx[i], Ialist[i], AlphaG_); Ib_abs = str_rel2abs(Ibidx[i], Iblist[i], BetaG_); #ifdef FLAG_NONBLOCKS int found_inblock = 0; for (size_t j = 0, found_inblock = 0; j < H0block_->size; j++) { if (Iaidx[i] == H0block_->alpidx[j] && Ibidx[i] == H0block_->betidx[j] && Ialist[i] == H0block_->alplist[j] && Iblist[i] == H0block_->betlist[j]) { found_inblock = 1; break; } } outfile->Printf(" %c", found_inblock ? ' ' : '*'); #endif outfile->Printf("%4zu %10.6lf (%5d,%5d) ", i + 1, coeff[i], Ia_abs, Ib_abs); std::string configstring(print_config(AlphaG_->num_orb, AlphaG_->num_el_expl, BetaG_->num_el_expl, alplist_[Ialist[i]] + Iaidx[i], betlist_[Iblist[i]] + Ibidx[i], AlphaG_->num_drc_orbs)); outfile->Printf("%s\n", configstring.c_str()); } /* end loop over important determinants */ outfile->Printf("\n"); } /* ** PRINT_CONFIG() ** ** Function prints a configuration, given a list of ** alpha and beta string occupancies. 
** ** David Sherrill, February 1995 ** */ std::string CIWavefunction::print_config(int nbf, int num_alp_el, int num_bet_el, struct stringwr *stralp, struct stringwr *strbet, int num_drc_orbs) { int j, k; int afound, bfound; std::ostringstream oss; /* loop over orbitals */ for (j = 0; j < nbf; j++) { std::string olabel(orb2lbl(j + num_drc_orbs, CalcInfo_, nmopi_)); /* get label for orbital j */ for (k = 0, afound = 0; k < num_alp_el; k++) { if ((stralp->occs)[k] > j) break; else if ((stralp->occs)[k] == j) { afound = 1; break; } } for (k = 0, bfound = 0; k < num_bet_el; k++) { if ((strbet->occs)[k] > j) break; else if ((strbet->occs)[k] == j) { bfound = 1; break; } } if (afound || bfound) oss << olabel; if (afound && bfound) oss << "X "; else if (afound) oss << "A "; else if (bfound) oss << "B "; } /* end loop over orbitals */ return oss.str(); } /* ** orb2lbl(): Function converts an absolute orbital number into a ** label such as 4A1, 2B2, etc. ** ** Parameters: ** orbnum = orbital number in CI order (add frozen core!) ** label = place to put constructed label ** ** Needs Global (CalcInfo): ** orbs_per_irrep = number of orbitals per irrep ** order = ordering array which maps a CI orbital to a ** Pitzer orbital (the opposite mapping from the ** "reorder" array) ** irreps = number of irreducible reps ** nmo = num of molecular orbitals ** labels = labels for all the irreps ** ** Notes: ** If there are frozen core (FZC) orbitals, they are not included in the ** CI numbering (unless they're "restricted" or COR orbitals). This ** is bothersome because some of the arrays constructed in the CI program ** do start numbering from FZC orbitals. Thus, pass orbnum as the CI ** orbital PLUS any frozen core orbitals. ** ** Updated 8/16/95 by CDS ** Allow it to handle more complex spaces...don't assume QT orbital order. ** It was getting labels all mixed up for RAS's. */ std::string orb2lbl(int orbnum, struct calcinfo *Cinfo, int *orbs_per_irr) { int ir, j, pitzer_orb, rel_orb; /* get Pitzer ordering */ pitzer_orb = Cinfo->order[orbnum]; if (pitzer_orb > Cinfo->nmo) { outfile->Printf("(orb2lbl): pitzer_orb > nmo!\n"); } for (ir = 0, j = 0; ir < Cinfo->nirreps; ir++) { if (orbs_per_irr[ir] == 0) continue; if (j + orbs_per_irr[ir] > pitzer_orb) break; else j += orbs_per_irr[ir]; } rel_orb = pitzer_orb - j; if (rel_orb < 0) { outfile->Printf("(orb2lbl): rel_orb < 0\n"); } else if (rel_orb > orbs_per_irr[ir]) { outfile->Printf("(orb2lbl): rel_orb > orbs_per_irrep[ir]\n"); } std::ostringstream oss; oss << rel_orb + 1 << Cinfo->labels[ir]; return oss.str(); } /* ** lbl2orb(): Function converts a label such as 4A1, 2B2, etc., to ** an absolute orbital number. The reverse of the above function ** orb2lbl(). ** ** Parameters: ** orbnum = orbital number in CI order (add frozen core!) 
** label = place to put constructed label ** ** Returns: ** absolute orbital number for the correlated calc (less frozen) ** */ // int lbl2orb(char *orbstring) //{ // // int ir, i, j, pitzer_orb, rel_orb, corr_orb; // char *s, *t; // char orblbl[10]; // // sscanf(orbstring, "%d%s", &rel_orb, orblbl); // // /* get the irrep */ // for (i=0,ir=-1; i<CalcInfo.nirreps; i++) { // s = orblbl; // t = CalcInfo.labels[i]; // j = 0; // while ((toupper(*s) == toupper(*t)) && (j < strlen(orblbl))) { // s++; // t++; // j++; // } // if (j == strlen(orblbl)) { // ir = i; // break; // } // } // // if (ir == -1) { // outfile->Printf( "lbl2orb: can't find label %s!\n", orblbl); // return(0); // } // // /* get Pitzer ordering */ // for (i=0,pitzer_orb=0; i<ir; i++) { // pitzer_orb += CalcInfo.orbs_per_irr[i]; // } // pitzer_orb += rel_orb - 1; /* 1A1 is orbital 0 in A1 stack ... */ // // /* get correlated ordering */ // corr_orb = CalcInfo.reorder[pitzer_orb]; // corr_orb -= CalcInfo.num_drc_orbs; // // if (corr_orb < 0 || corr_orb > CalcInfo.num_ci_orbs) { // outfile->Printf( "lbl2orb: error corr_orb out of bounds, %d\n", // corr_orb); // return(0); // } // // return(corr_orb); // //} // void eivout_t(double **a, double *b, int m, int n) // { // int ii,jj,kk,nn,ll; // int i,j,k; // // ii=0;jj=0; // L200: // ii++; // jj++; // kk=10*jj; // nn=n; // if (nn > kk) nn=kk; // ll = 2*(nn-ii+1)+1; // outfile->Printf("\n"); // for (i=ii; i <= nn; i++) outfile->Printf(" %5d",i); // outfile->Printf("\n"); // for (i=0; i < m; i++) { // outfile->Printf("\n%5d",i+1); // for (j=ii-1; j < nn; j++) { // outfile->Printf("%12.7f",a[j][i]); // } // } // outfile->Printf("\n"); // outfile->Printf("\n "); // for (j=ii-1; j < nn; j++) { // outfile->Printf("%12.7f",b[j]); // } // outfile->Printf("\n"); // if (n <= kk) { // return; // } // ii=kk; goto L200; //} /* ** PRINT_CIBLK_SUMMARY() ** ** C. David Sherrill ** April 1996 ** */ // void print_ciblk_summary(std::string out) //{ // int blk; // // outfile->Printf( "\nCI Block Summary:\n"); // for (blk=0; blk<CIblks.num_blocks; blk++) { // outfile->Printf("Block %3d: Alp=%3d, Bet=%3d Size = %4d x %4d = %ld\n", // blk, CIblks.Ia_code[blk], CIblks.Ib_code[blk], // CIblks.Ia_size[blk], CIblks.Ib_size[blk], // (size_t) CIblks.Ia_size[blk] * // (size_t) CIblks.Ib_size[blk]); // } //} } } // namespace psi
Q: Is it possible in Scala to specify a constraint on a generic type τ such that τ <: σ ∧ τ ≠ σ?

I have a type:

class σ

Now I want to define a type:

class υ[τ <: σ]

With the additional requirement that τ ≠ σ. Is this possible at all?

A: Using Miles Sabin's answer here:

trait =!=[A, B]

implicit def neq[A, B] : A =!= B = null

implicit def neqAmbig1[A] : A =!= A = null
implicit def neqAmbig2[A] : A =!= A = null

then:

scala> class A
defined class A

scala> class B[C <: A](implicit ev: C =!= A)
defined class B

scala> class D extends A
defined class D

scala> new B[D]() // OK, D is a subtype of A
res4: B[D] = B@4d8c463c

scala> new B[A]() // Error, A =:= A
<console>:15: error: ambiguous implicit values:
 both method neqAmbig1 of type [A]=> =!=[A,A]
 and method neqAmbig2 of type [A]=> =!=[A,A]
 match expected type =!=[A,A]

scala> class E
defined class E

scala> new B[E]() // Error, E is not a subtype of A
<console>:15: error: type arguments [E] do not conform to class B's type parameter bounds [C <: A]
Q: PhoneGap Forms add 2 numbers from Input Range sliders I'm really new to JS and phonegap and I'm pretty stuck on adding 2 numbers and multiply it by a frequency selector RRfreq. I can't seem to get any of the code to work I was just wondering if it was possible to read input range as an integer or floating point number. Also, am I ready RRfreq correctly or do I need to code in something to read exactly what is selected. Here the JavaScript: <script type="text/javascript" language="JavaScript"> function calcNumbers() { <!-- window.location='#stuff'; --> var hrs = document.getElementById("NumHours").value; var peeps = document.getElementById("NumPeople").value; var freq = document.getElementById("RRfreq").value; var ansD=document.getElmentById("answer"); ansD.value= hrs * peeps; var x = (NumHours + NumPeople) * RRFreq; document.getElementById("demo").innerHTML = x; alert('testing'); } </script> And here's what I have for my PhoneGap HTML: <div id="Small Tank" data-role="page" data-theme="f"> <header data-role="header" data-position="fixed" data-id="appHeader"> <h1>Tanks</h1> <a href="#homeScreen" class="ui-btn ui-icon-carat-l ui-btn-icon-notext ui-btn-left ui-nodisc-icon ui-alt-icon">Back</a> </header> <div data-role="content"> <h1 class="center">Tanks</h1> <!-- <div id="form"> --> <!-- <form id="calcUsage" data-ajax='false'> --> <!-- <form method=post action="" id="calcUsage">--> <div data-role="fieldcontain"> <label for="PeopleSlider">Number of People:</label> <input type="range" name="NumPeople" id="NumPeople" value="100" min="100" max="200" /> <p>&nbsp;</p> </div> <div data-role="fieldcontain"> <fieldset data-role="controlgroup" data-type="horizontal"> <legend>Frequency of RR Usage in Hours</legend> <input name="RRfreq" type="radio" id="RRfreq_1" title="1" value="1" /> <label for="RRfreq_1">1.0</label> <input name="RRfreq" type="radio" id="RRfreq_15" title="15" value="1.5" /> <label for="RRfreq_15">1.5</label> <input name="RRfreq" type="radio" id="RRfreq_2" title="2" value="2" /> <label for="RRfreq_2">2.0</label> </fieldset> <p>&nbsp;</p> </div> <div data-role="fieldcontain"> <label for="TimeSlider">Number of Hours:</label> <input type="range" name="NumHours" id="NumHours" value="1" min="1" max="8" /> </div> <input type="submit" data-theme="f" name="submit" value="Calculate Usage" onClick="javascript:calcNumbers()"> Answer= <input type="text" id="answer" name="answer" value="" /> <!-- </form> --> <!-- </div> --> <!-- form --> </div> <!-- content --> I added in a redirect URL to see if I was even stepping into my function and I am-- I can't seem to get the alert message to pop-up though I think my pop-up browser might be stopping it. Any advice on debugging JavaScript running in PhoneGap would really be appreciated :) A: In general, the logic is fine. There are just a few things you need to correct on your HTML and Javascript to make this work. On the HTML Select a default radio button selected (this will prevent your calculation to crash because it will contain a value by default, is a good practice) Change the type="submit" of your calculate button to type="button" this will prevent the page from reloading (if your intention is to make a post then leave it as type="submit" On the Javascript Here you can improve many things. I creade a JSFiddle for you to see how it works. Please read the comments on the Javascript. https://jsfiddle.net/saq8yyv0/14/ Advice for debugging You can easily debug Javascript running on your browser even if this is a phonegap application. 
I prefer Chrome for this. Just open the developer tools (Ctrl+Shift+I) and select the Sources tab. Here you can find some documentation about it. Here is a more extended list of tools and techniques for debugging PhoneGap applications, but in general you can do almost all your debugging with the Chrome developer tools.
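For completeness, here is one possible corrected version of the question's calcNumbers function along the lines described above; this is only a sketch (the answer's actual fix lives in the linked JSFiddle), the element IDs come from the question's markup, and it assumes the Calculate button has been changed to type="button" so the page no longer reloads.

<script type="text/javascript">
function calcNumbers() {
    // Range sliders return strings, so parse them as numbers before doing math.
    var hrs = parseFloat(document.getElementById("NumHours").value);
    var peeps = parseFloat(document.getElementById("NumPeople").value);

    // Read whichever RRfreq radio button is checked; fall back to 1 if none is.
    var freq = 1;
    var radios = document.getElementsByName("RRfreq");
    for (var i = 0; i < radios.length; i++) {
        if (radios[i].checked) {
            freq = parseFloat(radios[i].value);
            break;
        }
    }

    // Add the two slider values and multiply by the selected frequency.
    var x = (hrs + peeps) * freq;

    // Note: getElementById, not getElmentById as in the original code.
    document.getElementById("answer").value = x;
}
</script>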
Does this woman seem insincere to you? How can a hunger strike be "disingenuous"? That's how a spokeswoman for Station Casinos described a group of Station workers and their supporters' seven-day fast. Station Casino workers have gone without raises since 2007, faced cuts to benefits, and seen more than 2,800 of their coworkers laid off. Since they started attempting to organize, they have faced a campaign of illegal intimidation and firings by the company, with firings specifically targeting Latino workers. The current fast ramps up their activism and organizing efforts. Station's reason for calling the fast "disingenuous" seems to be that the action is designed to draw attention to the workers' efforts to unionize. Well, yeah. Obviously a hunger strike or fast undertaken as part of an activist campaign is done to publicize the campaign. But "disingenuous" is defined as "lacking sincerity." You don't subsist on water for a week in an insincere way. In fact, if you're willing to do that, you're almost by definition extremely sincere. (For the record, I'm not a fan of the hunger strike of indefinite length. Those typically start out as "we're not eating until we get what we want" and end with "or maybe we are ..." But a seven-day, defined fast demonstrates serious commitment without setting up failure.) By this standard, anything other than perfectly targeted direct action, and certainly anything with a public relations intent, is "disingenuous." Tell Station Casinos to stop firing Latino workers and respect the rights of its workers to join a union.
Q: Dynamic generation of arrays

I need to generate my array according to the commands gathered from the user. For example, if the user gives the input "first type of array" my array will be

processors = new Processor[] {new object_a(), new object_b(2, 3), new object_c()};

else if the user gives the input "second type of array" my array will be

processors = new Processor[] {new object_e(), new object_f(3), new object_g("fdf")};

I do not want to write a big if-else structure. How can I dynamically generate my array according to a config file and user input?

A: Assuming you have the config file as some XML (or whatever format) and set up a name for each setup and the array elements with their properties:

<processor-config name="first">
    <object type="a"/>
    <object type="b">
        <argument value="2"/>
        <argument value="3"/>
    </object>
    <object type="c"/>
</processor-config>

Have a ProcessorConfig class in Java that holds all this data and that exposes a method like this:

public Processor[] createProcessors() {
    Processor[] processors = new Processor[objectList.size()];
    for (int i = 0; i < objectList.size(); i++) {
        processors[i] = objectList.get(i).createProcessor();
    }
    return processors;
}

The objectList here is a list of ObjectWrapper beans, each holding the data for one object configuration (corresponding to the XML object element): its type and arguments. Each wrapper also knows how to create a processor based on its state. Once you have this, you can parse the XML file and hold a map of String -> ProcessorConfig, so based on the user input you can simply write:

configMap.get(userInputString).createProcessors()

Of course, you should check for null rather than chaining the calls like in the above, but I wanted to keep it as short as possible.

EDIT: this would be a lot easier if you can make use of the Spring IoC container in your project and simply define these ProcessorConfig instances as beans in a map directly, not having to parse the XML yourself.
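To make the map-of-factories idea above concrete, here is a minimal, self-contained Java sketch; it deliberately skips the XML-parsing step and uses Supplier factories instead of reflective construction, and every name in it (Processor, ProcessorConfig, ObjectA, ObjectB) is an illustrative stand-in rather than part of the original question.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Supplier;

interface Processor { }

// Holds the recipe for one named array of processors.
class ProcessorConfig {
    private final List<Supplier<Processor>> objectList = new ArrayList<>();

    ProcessorConfig add(Supplier<Processor> factory) {
        objectList.add(factory);
        return this;
    }

    Processor[] createProcessors() {
        Processor[] processors = new Processor[objectList.size()];
        for (int i = 0; i < objectList.size(); i++) {
            processors[i] = objectList.get(i).get();
        }
        return processors;
    }
}

class Demo {
    // Example processor types standing in for object_a, object_b, ...
    static class ObjectA implements Processor { }
    static class ObjectB implements Processor { ObjectB(int x, int y) { } }

    public static void main(String[] args) {
        Map<String, ProcessorConfig> configMap = new HashMap<>();
        configMap.put("first type of array",
                new ProcessorConfig().add(ObjectA::new).add(() -> new ObjectB(2, 3)));

        // The user's input selects a recipe directly; no if-else chain needed.
        ProcessorConfig config = configMap.get("first type of array");
        Processor[] processors = (config != null) ? config.createProcessors() : new Processor[0];
        System.out.println(processors.length + " processors created");
    }
}

A real version would fill configMap while parsing the XML, exactly as the answer describes, or let a Spring container build the map of ProcessorConfig beans.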
Q: User persistence and login flow I have a Java project that copies files and folders to a user's space on the cloud service using a RESTful API. The login design is getting very complicated, and I wanted advice on how to simplify/re-write it. I wrote this flow to gain an access token from the API and set the destination workspace (directory) through local files if they exist, or dialog boxes prompting the user for these values. Generally, I set up a User class that dictates login flow and creates dialog boxes, an AuthHandler that requests and holds the access token and a WorkspaceHandler that holds the list of available upload workspaces and current upload workspace. These values either come from a local file if it exists, or else from a dialog request to the user. User.java /** * Handles all credential request logic. * */ public class User { private AuthHandler auth; private WorkspaceHandler work; private static User instance = null; private String developerKey; private String developerSecret; private String ciUsername; private String ciPassword; private boolean saveOption = false; private boolean firstRun = true; private boolean change; private String workspaceId; private String workspaceName; boolean open = false; public String getName() { return ciUsername; } private User() throws ConfigurationException, SecurityException, IOException, InterruptedException { auth = AuthHandler.getHandler(); work = WorkspaceHandler.getHandler(); } public static User getUser() throws ConfigurationException, SecurityException, IOException, InterruptedException { if(instance == null) { instance = new User(); } return instance; } public void createUser() throws ConfigurationException, SecurityException, IOException, InterruptedException { //access token readCredentials(); while(!auth.login(developerKey, developerSecret, ciUsername, ciPassword)) { createCredentials(); work.deleteWorkspace(); } work.resetWorkspaceList(); firstRun = false; if(saveOption) saveToProperties(); else deleteProperties(); //workspace readWorkspace(); } private void readWorkspace() throws ConfigurationException, IOException, InterruptedException { if(!new File("config/workspace.properties").exists()) { selectWorkspace(); } else { PropertiesConfiguration credentialProps = new PropertiesConfiguration("config/workspace.properties"); workspaceId = credentialProps.getString("workspaceId"); workspaceName = credentialProps.getString("workspaceName"); work.setWorkspace(workspaceName, workspaceId); } } public void changeUser() throws SecurityException, ConfigurationException, IOException, InterruptedException { change = true; do { createCredentials(); if(!change) return; } while(!auth.login(developerKey, developerSecret, ciUsername, ciPassword)); if(saveOption) saveToProperties(); else deleteProperties(); work.deleteWorkspace(); selectWorkspace(); } private void selectWorkspace() throws ConfigurationException, IOException, InterruptedException { work.resetWorkspaceList(); work.requestWorkspace(); } private void deleteProperties() throws IOException { Files.deleteIfExists(Paths.get("config/credentials.properties")); } private void readCredentials() throws ConfigurationException, SecurityException, IOException, InterruptedException { if(!new File("config/credentials.properties").exists()) { createCredentials(); } else { PropertiesConfiguration credentialProps = new PropertiesConfiguration("config/credentials.properties"); developerKey = credentialProps.getString("key"); developerSecret = credentialProps.getString("secret"); ciUsername = 
credentialProps.getString("username"); ciPassword = credentialProps.getString("password"); saveOption = true; } } private void createCredentials() throws SecurityException, IOException, ConfigurationException, InterruptedException { if(open) { change = false; return; } open = true; LoginDialog login = new LoginDialog(); if (login.isOk()) { developerKey = login.getKeyField().getText(); developerSecret = login.getSecretField().getText(); ciUsername = login.getUsernameField().getText(); ciPassword = login.getPasswordField().getText(); saveOption = login.getSave().isSelected(); } // if you cancel at start, program closes else if(firstRun) { System.exit(0); } else { change = false; } open = false; } private void saveToProperties() throws ConfigurationException, SecurityException, IOException { // skip if already saved if(new File("config/credentials.properties").exists()) { return; } PropertiesConfiguration config = new PropertiesConfiguration(); config.setProperty("key", developerKey); config.setProperty("secret", developerSecret); config.setProperty("username", ciUsername); config.setProperty("password", ciPassword); config.save("config/credentials.properties"); CustomLogger.writeToLog("Credentials saved for next run."); } } AuthHandler.java /** * Handles access token creation. * */ public class AuthHandler { // authentication details private AccessToken accessToken; private static AuthHandler instance = null; // date for access token renewal private GregorianCalendar renewDate; // JSON parser private Gson gson = new Gson(); private String developerKey; private String developerSecret; private String ciUsername; private String ciPassword; public static AuthHandler getHandler() throws ConfigurationException, SecurityException, IOException, InterruptedException { if(instance == null) { instance = new AuthHandler(); } return instance; } /** * To ensure only one instance of access token. */ private AuthHandler() { } public boolean login(String key, String secret, String username, String password) throws InterruptedException { accessToken = createAccessToken(key, secret, username, password); if(accessToken != null) { setRenewDate(); return true; } return false; } /** * @return Access token for Ci requests. */ public String getAccessToken() { return accessToken.getToken(); } /** * If access token needs renewal, requests new access token. * @throws InterruptedException */ public void renewToken() throws InterruptedException { if(new GregorianCalendar().compareTo(renewDate) > 0) { System.out.print("Renewing access token: "); accessToken = createAccessToken(developerKey, developerSecret, ciUsername, ciPassword); if(accessToken != null) setRenewDate(); } } private void setRenewDate() { renewDate = new GregorianCalendar(); renewDate.add(Calendar.SECOND, accessToken.getExpiresIn()/2); } /** * Request access token from Ci server. 
* @param key * @param secret * @param username * @param password * @return the access token * @throws InterruptedException */ private AccessToken createAccessToken(String key, String secret, String username, String password) throws InterruptedException { this.developerKey = key; this.developerSecret = secret; this.ciUsername = username; this.ciPassword = password; HttpPost request = null; String reasonPhrase; try { String credentials = buildCredentials(username, password); HttpClient client = buildTokenClient(credentials); String postUrl = String.format("https://api.cimediacloud.com/oauth2/" + "token?grant_type=password_credentials&client_id=" + "%s&client_secret=%s", key, secret); request = new HttpPost(postUrl); // execute request System.out.print("Requesting access token: "); HttpResponse response = client.execute(request); reasonPhrase = response.getStatusLine().getReasonPhrase(); System.out.println(reasonPhrase); if(!reasonPhrase.equals("OK")) { JOptionPane.showMessageDialog(new JPanel(), "Invalid credentials, please re-enter."); System.out.println("TAST" + EntityUtils.toString(response.getEntity())); } else { CustomLogger.writeToLog("Access token granted for 24 hours."); String responseString = EntityUtils.toString(response.getEntity()); AccessToken token = gson.fromJson(responseString, AccessToken.class); return token; } } catch (IOException e) { e.printStackTrace(); } catch (SecurityException e) { e.printStackTrace(); } finally { request.releaseConnection(); } return null; } private String buildCredentials(String username, String password) { String userPass = username + ":" + password; String credentials = new String( Base64.encodeBase64(userPass.getBytes())); return credentials; } /** * Returns client with authorization details for access token request. * @param credentials * @return */ private HttpClient buildTokenClient(String credentials) { // add authorization header to request List<Header> headers = new ArrayList<Header>(); Header header = new BasicHeader("Authorization", "Basic " + credentials); headers.add(header); return HttpClients.custom().setDefaultHeaders(headers).build(); } } WorkspaceHandler.java /** * Holds current workspace for uploads and facilitates changing of upload workspace. * */ public class WorkspaceHandler { private AuthHandler auth; private Gson gson = new Gson(); //choices private List<String> workspaceList; private Workspace[] workspaces; private Workspace uploadWorkspace; //settings private String workspaceId; private String workspaceName; private static WorkspaceHandler instance; public static WorkspaceHandler getHandler() throws ConfigurationException, SecurityException, IOException, InterruptedException { if(instance == null) { instance = new WorkspaceHandler(); } return instance; } /** * Set up current workspace value and gather workspaces available for user. 
* @throws ConfigurationException * @throws SecurityException * @throws IOException * @throws InterruptedException */ private WorkspaceHandler() throws ConfigurationException, SecurityException, IOException, InterruptedException { auth = AuthHandler.getHandler(); } public void resetWorkspaceList() throws IOException, InterruptedException { // set up choices uploadWorkspace = null; workspaceId = workspaceName = null; workspaces = getWorkspaces().getItems(); workspaceList = workspaceToString(workspaces); } public void deleteWorkspace() throws IOException { Files.deleteIfExists(Paths.get("config/workspace.properties")); } public void requestWorkspace() throws ConfigurationException, IOException, InterruptedException { do { uploadWorkspace = selectWorkspace(workspaces, workspaceList); if(uploadWorkspace == null) { JOptionPane.showMessageDialog(null, "Please select a workspace."); } } while(uploadWorkspace == null); if(!uploadWorkspace.getId().equals(workspaceId)) { updateWorkspace(uploadWorkspace); } } private void updateWorkspace(Workspace workspace) throws ConfigurationException, SecurityException, IOException { workspaceName = uploadWorkspace.getName(); workspaceId = uploadWorkspace.getId(); saveToProperties(); CustomLogger.writeToLog(String.format("Workspace set: %s", workspaceName)); SysTray.message("Workspace changed", String.format("The next job will upload to %s.", workspaceName)); } private void saveToProperties() throws ConfigurationException { PropertiesConfiguration config = new PropertiesConfiguration(); config.setProperty("workspaceId", workspaceId); config.setProperty("workspaceName", workspaceName); config.save("config/workspace.properties"); } private Workspace selectWorkspace(Workspace[] workspaces, List<String> workspaceList) throws IOException, HeadlessException, ConfigurationException, SecurityException, InterruptedException { String message = "Hello " + User.getUser().getName() + ", \nSelect your desired upload workspace:"; WorkspaceDialog select = new WorkspaceDialog(message); JFrame frame = new JFrame(); frame.setAlwaysOnTop(true); String choice = (String)JOptionPane.showInputDialog( frame, "Hello " + User.getUser().getName() + ", \nSelect your desired upload workspace:", "Upload workspace", JOptionPane.PLAIN_MESSAGE, null, workspaceList.toArray(), workspaceName); if(choice == null) { return uploadWorkspace; } return workspaces[workspaceList.indexOf(choice)]; } private HttpClient buildAuthorizedClient() { //add authorization token to request List<Header> headers = new ArrayList<Header>(); Header header = new BasicHeader("Authorization", "Bearer " + auth.getAccessToken()); headers.add(header); return HttpClients.custom().setDefaultHeaders(headers).build(); } private WorkspaceList getWorkspaces() throws IOException, InterruptedException { HttpClient client = buildAuthorizedClient(); //set up get request String url = "https://api.cimediacloud.com/workspaces"; HttpGet request = new HttpGet(url); //execute request HttpResponse response = null; WorkspaceList workspaceList = null; try { response = client.execute(request); String reasonPhrase = response.getStatusLine().getReasonPhrase(); if(!reasonPhrase.equals("OK")) { SysTray.message("Error", "Server error occurred. 
Please try request again."); Thread.sleep(3000); System.exit(0); } //parse json response for information HttpEntity entity = response.getEntity(); String responseString = EntityUtils.toString(entity, "UTF-8"); workspaceList = gson.fromJson(responseString, WorkspaceList.class); return workspaceList; } finally { request.releaseConnection(); } } private List<String> workspaceToString(Workspace[] workspaces) { List<String> workspaceList = new ArrayList<String>(); for(int i = 0; i < workspaces.length; i++) { String name = workspaces[i].getName(); workspaceList.add(name); } return workspaceList; } public String getId() { return workspaceId; } public String getName() { return workspaceName; } public void changeWorkspace() { // TODO Auto-generated method stub } public void setWorkspace(String workspaceName, String workspaceId) { this.workspaceId = workspaceId; this.workspaceName = workspaceName; uploadWorkspace = new Workspace(workspaceName, workspaceId); } } Couple questions: Is the overall separation of responsibilities sound? The login flow is getting extremely polluted with boolean values that control flow, which indicates to me that I need to redesign the login process. Any pointers on how I can do this? If my code is a mess, simply give me advice on how I should structure this to best manage the values involved. A: Is the overall separation of responsibilities sound? Nope. Take for example User, the biggest offender. When I see User, the first thought that comes to my mind is a dumb object with user id, first name, last name. Your class looks nothing like that. Worse, it's a singleton! (What year is this? Back when Windows 98 was cool? ;-) Calling it UserManager would be closer to the truth, but it would still have too many responsibilities. It has a AuthHandler, and it has a WorkspaceHandler. You need to rethink this. A User should be just a container of simple attributes. A UserManager could have responsibilities like this: register a new user find an existing user check if a user exists Another clear warning sign here is the presence of fields like this: private String workspaceId; private String workspaceName; These look like implementation details of a Workspace class. It might make sense for a User to have an associated Workspace, but a User class should not have to know how a Workspace works. This is violating good encapsulation. If a User doesn't need to do something with a Workspace, then he shouldn't have a Workspace. For example if the purpose of the Workspace is to save files uploaded by this user, that logic should be outside of the User class. You could have an UploadHandler class for that, which receives a User object, it gets the associated Workspace from a WorkspaceManager, and uploads the file. Something like that. I suggest to start fresh with a clear mind. Forget about the existing classes and your existing code. Imagine the most basic objects you would need for your purpose. For each and every one of them, think of the single responsibility it has to do. Write them all down, on a blank sheet of paper. If you want to add an additional feature, behavior, check carefully if it falls within the defined single responsibility. If it doesn't then think of another class that could naturally contain that responsibility. Don't think about implementation. Think at a conceptual level what each class should do, how it should behave, how it can be used together with other classes, without thinking about the implementation. 
You should not think about files, databases, URLs here, you should think at a higher level. The end result will be your interfaces. Not classes, interfaces. You can go ahead and start coding some interfaces. Simple container objects like a User can be a concrete class, but things like WorkspaceManager, UserManager, CredentialManager should all be interfaces, working with other interfaces or simple container objects. Finally you can move on to creating implementations of the interfaces. It may seem like a lot of tedious work up front without coding, but it will be worth it. Once your class design is clear on paper in terms of interfaces, the implementation becomes surprisingly straightforward. Give it a try!

The login flow is getting extremely polluted with boolean values that control flow, which indicates to me that I need to redesign the login process. Any pointers on how I can do this?

If you follow the above points faithfully, this might naturally get cleared up. One good sign of going in the right direction is when you are able to write unit tests for your functionality with dummy authentication classes. Being able to replace the real implementation with a dummy confirms that your authentication is modular enough that you can make changes to it without affecting the rest of the program.

Finally, in the provided code I don't see a good reason to use singletons. After you redesign, I'm sure you will find that you don't actually need singletons; regular classes will be just fine. But if you do have to use singletons for something, then use the cleaner, modern enum pattern, for example:

enum UserManager {
    INSTANCE;

    // ...

    User createUser(String username, String password) {
        return new User(username, password);
    }
}
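To make the suggested redesign a little more tangible, here is a minimal interface-first sketch along the lines described above; the names echo the reviewed classes but the API shown is illustrative only, and error handling and dialogs are omitted.

// User as a dumb container of attributes, as recommended.
final class User {
    private final String username;
    User(String username) { this.username = username; }
    String getUsername() { return username; }
}

final class Credentials {
    final String key, secret, username, password;
    Credentials(String key, String secret, String username, String password) {
        this.key = key; this.secret = secret; this.username = username; this.password = password;
    }
}

// Each interface has a single responsibility and can be replaced by a dummy in tests.
interface CredentialStore {
    Credentials load();            // for example from a properties file; null if absent
    void save(Credentials credentials);
}

interface AuthService {
    boolean login(Credentials credentials);  // true on success, caching a token internally
}

interface WorkspaceService {
    String chooseWorkspace(User user);       // from a saved setting or a dialog
}

// The flow becomes a plain class wired from interfaces: no singletons, no flow-control booleans.
final class LoginFlow {
    private final CredentialStore store;
    private final AuthService auth;
    private final WorkspaceService workspaces;

    LoginFlow(CredentialStore store, AuthService auth, WorkspaceService workspaces) {
        this.store = store;
        this.auth = auth;
        this.workspaces = workspaces;
    }

    // Returns the chosen workspace name, or null if login ultimately fails.
    String run(Credentials enteredByUser) {
        Credentials credentials = store.load();
        if (credentials == null || !auth.login(credentials)) {
            credentials = enteredByUser;       // in the real app, prompt the user here
            if (!auth.login(credentials)) {
                return null;
            }
            store.save(credentials);
        }
        return workspaces.chooseWorkspace(new User(credentials.username));
    }
}

In a unit test, each interface can be replaced with a stub (for example an AuthService that always returns true), which is exactly the modularity check described above.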
Hotel description Boasting a Jacuzzi and outdoor tennis courts, Hotel Casa Arcas Villanova is situated in Villanova and provides modern accommodation. It also offers a playground, luggage storage and a tour desk. There is a range of facilities at the hotel for guests to enjoy, including snow activities, a safe and a golf course. Wireless internet is also provided. Every comfortable room at Hotel Casa Arcas Villanova features a private bathroom, a CD player and a mini bar, plus all the necessities for an enjoyable stay. The rooms also offer a DVD player, a desk and heating.
Obama demands ‘up or down vote’ on health reform In his most unequivocal terms yet, President Barack Obama championed the use of budget reconciliation to approve a final motion on the extensively debated health care reform legislation stalled in Congress. “So, no matter which approach you favor, I believe the United States Congress owes the American people a final vote on health care reform,” Obama said in a televised speech Wednesday. “We have debated this issue thoroughly, not just for a year, but for decades.” Flanked on stage by doctors in white coats, he noted that both chambers of Congress have passed the legislation, before declaring, “And now it deserves the same kind of up-or-down vote that was cast on welfare reform, the Children’s Health Insurance Program, COBRA health coverage for the unemployed, and both Bush tax cuts — all of which had to pass Congress with nothing more than a simple majority.” Obama’s statement comes after months of hesitation over supporting the use of the procedure to bypass a likely GOP filibuster. He touted the inclusion of various Republican ideas in his bill, saying that the time for talk is over. “Many Republicans in Congress just have a fundamental disagreement over whether we should have more or less oversight of insurance companies,” he claimed, saying the GOP should simply “vote against the proposal I’ve put forward.” “And so I ask Congress to finish its work, and I look forward to signing this reform into law. Let’s get it done.” The procedural tool known as budget reconciliation seems to be the Democrats’ only feasible route to enacting the bill. It will allow the Senate to amend the legislation it passed in December with a 51-vote majority rather than a 60-vote supermajority, before the House of Representatives can approve one last motion. If it succeeds, the president can sign it into law. “Every argument has been made,” he said. “So now is the time to make a decision about how to finally reform health care so that it works.” Republicans have strongly urged Democrats to scrap the current plan and start over. Not a single GOP lawmaker in the House or Senate is expected to vote for the legislation. A Zogby International-University of Texas Health Science Center poll released this month found that 57 percent of Americans agree and want Congress to start from scratch on the issue, The Hill reported. The president pushed back against some of the vigorous criticisms from Republicans regarding his plan, particularly the claim that it would afford the federal government greater control over individuals’ medical decisions. “If you like your plan, you can keep your plan,” Obama promised. “If you like your doctor, you can keep your doctor. Because I can tell you that as the father of two young girls, I wouldn’t want any plan that interferes with the relationship between a family and their doctor.” This video is from MSNBC’s Andrea Mitchell Reports, broadcast March 3, 2010.
Transport properties and association behaviour of the zwitterionic drug 5-aminolevulinic acid in water. A precision conductometric study. The behaviour of the hydrochloride salt of 5-aminolevulinic acid (ALA-HCl) with respect to transport properties and dissociation in aqueous solution at 25 °C has been studied using precision conductometry within the concentration range 0.24-5.17 mM. The conductivity data are interpreted according to elaborated conductance theory. The carboxyl group appears to be, in practice, undissociated. The dissociation constant, Kₐ, of the NH₃⁺ form of the amino acid molecules is determined to be 6.78 × 10⁻⁵ (molarity scale); pKₐ = 4.17. The limiting molar conductivity of the ALA-H⁺ ion, λ₀ = 33.5 cm² Ω⁻¹ mol⁻¹ (electric mobility u = 3.47 × 10⁻⁴ cm² V⁻¹ s⁻¹), is close to the electric mobilities of the acetate and benzoate ions.
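As a quick consistency check on the reported numbers, here is a small Python sketch; the Faraday constant and the single-charge relation u = λ₀/F are the only inputs not taken from the abstract itself.

import math

Ka = 6.78e-5            # reported dissociation constant (molarity scale)
print(-math.log10(Ka))  # ~4.17, matching the reported pKa

F = 96485.0             # Faraday constant in C/mol (standard value, not from the abstract)
lam0 = 33.5             # reported limiting molar conductivity, cm^2 Ohm^-1 mol^-1
print(lam0 / F)         # ~3.47e-4 cm^2 V^-1 s^-1, matching the reported mobility of a singly charged ion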
#!/bin/bash
IFS=$'\n'
STRINGS=($(awk -F= 'NF <= 1 {next} {print $1}' Stripe/Resources/Localizations/en.lproj/Localizable.strings))
EXIT_CODE=0
for f in Stripe/Resources/Localizations/*.lproj/*.strings
do
  echo "Checking $f..."
  HAS_MISSING=0
  for VAL in "${STRINGS[@]}"
  do
    ESCAPED_VAL=$(echo "$VAL" | sed 's/'\''/\\'"'"'/g')
    VAL_CHECK_COM='/usr/libexec/PlistBuddy -c "Print :$1" $2 2> /dev/null'
    LOCALIZED_VAL=$(/bin/bash -c "$VAL_CHECK_COM" -- "$ESCAPED_VAL" "$f")
    if [ -z "$LOCALIZED_VAL" ]
    then
      EXIT_CODE=1
      HAS_MISSING=1
      echo -e "\t\033[0;31m$ESCAPED_VAL\033[0m"
    fi
  done
  if [ $HAS_MISSING == 0 ]
  then
    echo -e "\t\033[0;32mAll good!\033[0m"
  fi
done
exit $EXIT_CODE
Once upon a time lamayuru was inundated in a lake, or so the legend goes. Arahat Madhyantika prophesied that one day the lake would be dried and a monastery would be made here. In the 11th century came a Buddhist saint to meditate in a nearby cave, mystical Mahasiddha Naropa, who with his prayers miraculously invoked the water to dwindle away and made the place into a sacred land. Well preserved cave is still stands and is the part of the main shrine of Lamayuru Monastery. In 1038, Rinchen Zangpo, a great translator, built five temples at Lamayuru, only one is in perfect condition today. Monasteryโ€™s impeccable building stands hitherโ€ฆ I love looking at these murals at the entrance of the monasteries. I just sit there and let them take my mind on a journey through the story, they depict. So far, i m just trying to figure out much about them reading here and there. You know the stuff. Tantric Protectorsโ€ฆ Human ant house monastery as I call it, which seems to be on the mounds of earth. To add all up just like ants, Monks live in a community and work for the greater good. They function as parts of a whole. While driving upto Lamayuru you will cross Fotu la 4,108m (13,479 ft) It is the Highest point on Srinagar-Leh Highway of the Himalayan Zanskar Range. There is a rainbow colored mountain, which i found absolutely breathtaking. Then there was a lone cloud on top of the mountain in the clear blue sky, which was adorable The valley itself is so vast that i just kept my face out the window trying to take it all in and clicking away like a crazed dog wanting for air. Mountains had so many beautiful arrays of color as if an artist poured mix of liquid acrylic color over few of them. Pure love from my sideโ€ฆ โ€œI like the mountains because they make me feel small,โ€™ Jeff says. โ€˜They help me sort out whatโ€™s important in life.โ€ โ€” Mark Obmascik Itโ€™s a modest little town in Uttarakhand on the way to Gangotri. We took a taxi from Uttarkashi to reach Harshil. By the time we reached Harshil sun was out of the reach behind the mountains and it was set beforehand for this little townโ€ฆ. The allure of the gushing Bhagirathi river through the town is harmony in the notes of the nature. Just like the keys of piano you will hear the perfect symphony among the environment, water, animals. Just like the keys of piano you will hear the perfect symphony among the environment, water, animals. How the Bhagirathi flows with gustoโ€ฆ While we were roaming and experimenting with the photos, a companion presented himself out of nowhere and followed us all the way to the next village and it was helpful in the dark to have a dog lead on and see to it that you are protected. How I would have liked to bring him back to delhi but I figured with all his fur, he could not adapt to the Delhiโ€™s summers. I named him Hachi for being loyal. We hoped for snow as it was end of December but you know global warming, so it was late as we were told at the night by the villagers. There was chilly wind so we all cramped together around the can of burning hot red coal and talked about for an hour. That clear sky. I even saw a shooting star thereโ€ฆ โ€œThe core of mansโ€™ spirit comes from new experiences.โ€ โ€” Jon Krakauer, Into the Wild.
Somdetch Phra Paramindr Maha Chulalongkorn (Thai: เธžเธฃเธฐเธšเธฒเธ—เธชเธกเน€เธ”เน‡เธˆเธžเธฃเธฐเธ›เธฃเธกเธดเธ™เธ—เธฃเธกเธซเธฒเธˆเธธเธฌเธฒเธฅเธ‡เธเธฃเธ“เนŒ เธžเธฃเธฐเธˆเธธเธฅเธˆเธญเธกเน€เธเธฅเน‰เธฒเน€เธˆเน‰เธฒเธญเธขเธนเนˆเธซเธฑเธง), or Rama V (20 September 1853 โ€“ 23 October 1910), was the fifth monarch of Siam under the House of Chakri. He was known to the Siamese of his time as Phra Phuttha Chao Luang (เธžเธฃเธฐเธžเธธเธ—เธ˜เน€เธˆเน‰เธฒเธซเธฅเธงเธ‡, the Royal Buddha). His reign was characterized by the modernization of Siam, governmental and social reforms, and territorial concessions to the British and French. As Siam was threatened by Western expansionism, Chulalongkorn, through his policies and acts, managed to save Siam from colonisation.[1] All his reforms were dedicated to ensuring Siam's survival in the face of Western colonialism, so that Chulalongkorn earned the epithet Phra Piya Maharat (เธžเธฃเธฐเธ›เธดเธขเธกเธซเธฒเธฃเธฒเธŠ, the Great Beloved King). In 1867, King Mongkut led an expedition to the Malay Peninsula south of the city of Hua Hin,[3] to verify his calculations of the solar eclipse of 18 August 1868. Both father and son fell ill of malaria. Mongkut died on 1 October 1868. Assuming the 15 year-old Chulalongkorn to be dying as well, King Mongkut on his deathbed wrote, "My brother, my son, my grandson, whoever you all the senior officials think will be able to save our country will succeed my throne, choose at your own will." Si Suriyawongse, the most powerful government official of the day, managed the succession of Chulalongkorn to the throne and his own appointment as regent. The first coronation was held on 11 November 1868. Chulalongkorn's health improved, and he was tutored in public affairs. The young Chulalongkorn was an enthusiastic reformer. He visited Singapore and Java in 1870 and British India in 1872 to study the administration of British colonies. He toured the administrative centres of Calcutta, Delhi, Bombay, and back to Calcutta in early 1872. This journey was a source of his later ideas for the modernization of Siam. He was crowned king in his own right as Rama V on 16 November 1873.[1][clarification needed] Si Suriyawongse then arranged for the Front Palace of King Pinklao (who was his uncle) to be bequeathed to King Pinklao's son, Prince Yingyot (who was Chulalongkorn's cousin). As regent, Si Suriyawongse wielded great influence. Si Suriyawongse continued the works of King Mongkut. He supervised the digging of several important khlongs, such as Padung Krungkasem and Damneun Saduak, and the paving of roads such as Chareon Krung and Silom. He was also a patron of Thai literature and performing arts. Chulalongkorn's first reform was to establish the "Auditory Office" (Th: เธซเธญเธฃเธฑเธฉเธŽเธฒเธเธฃเธžเธดเธžเธฑเธ’เธ™เนŒ), solely responsible for tax collection, to replace corrupt tax collectors. As tax collectors had been under the aegis of various nobles and thus a source of their wealth, this reform caused great consternation among the nobility, especially the Front Palace. From the time of King Mongkut, the Front Palace had been the equivalent of a "second king", with one-third of national revenue allocated to it. Prince Yingyot of the Front Palace was known to be on friendly terms with many Britons, at a time when the British Empire was considered the enemy of Siam. In 1874, Chulalongkorn established the Council of State as a legislative body and a privy council as his personal advisory board based on the British privy council. 
Council members were appointed by the monarch. On the night of 28 December 1874, a fire broke out near the gunpowder storehouse and gasworks in the main palace. Front Palace troops quickly arrived, fully armed, "to assist in putting out the fire". They were denied entrance and the fire was extinguished.[5]:193 The incident demonstrated the considerable power wielded by aristocrats and royal relatives, leaving the king little power. Reducing the power held by the nobility became one of his main motives in reforming Siam's feudal politics. When Prince Yingyot died in 1885, Chulalongkorn took the opportunity to abolish the titular Front Palace and created the title of "Crown Prince of Siam" in line with Western custom. Chulalongkorn's son, Prince Vajirunhis, was appointed the first Crown Prince of Siam, though he never reigned. In 1895, when the prince died of typhoid at age 16, he was succeeded by his half-brother Vajiravudh, who was then at boarding school in England. In the northern Laotian lands bordering China, the insurgents of the Taiping Rebellion had taken refuge since the reign of King Mongkut. These Chinese were called Haw and became bandits, pillaging the villages. In 1875, Chulalongkorn sent troops from Bangkok to crush the Haw who had ravaged as far as Vientiane. However, they met strong Chinese resistance and retreated to Isan in 1885. New, modernized forces were sent again and were divided into two groups approaching the Haw from Chiang Kam and Pichai. The Haw scattered and some fled to Vietnam. The Siamese armies proceeded to eliminate the remaining Haw. The city of Nong Khai maintains memorials for the Siamese dead. In Burma, while the British Army fought the Burmese Konbaung Dynasty, Siam remained neutral. Britain had agreements with the Bangkok government, which stated that if the British were in conflict with Burma, Siam would send food supplies to the British Army. Chulalongkorn honored the agreement. The British thought that he would send an army to help defeat the Burmese, but he did not do so. Prince Devawongse Varopakarn (Foreign Minister), King Chulalongkorn and Prince Damrong Rajanubhab (Interior Minister). During his reign the king employed his brothers and sons in the government, ensuring royal monopoly on power and administration. Freed of the Front Palace and Chinese rebellions, Chulalongkorn initiated reforms. He established the Royal Military Academy in 1887 to train officers in Western fashion. His upgraded forces provided the king much more power to centralize the country. The government of Siam had remained largely unchanged since the 15th century. The central government was headed by the Samuha Nayok (i.e., prime minister), who controlled the northern parts of Siam, and the Samuha Kalahom (i.e., grand commander), who controlled southern Siam in both civil and military affairs. The Samuha Nayok presided over the Chatu Sadombh (i.e., Four Pillars). The responsibilities of each pillar overlapped and were ambiguous. In 1888, Chulalongkorn moved to institute a government of ministries. Ministers were, at the outset, members of the royal family. Ministries were established in 1892, with all ministries having equal status. The Council of State proved unable to veto legal drafts or to give Chulalongkorn advice because the members regarded Chulalongkorn as an absolute monarch, far above their station. Chulalongkorn dissolved the council altogether and transferred advisory duties to the cabinet in 1894. 
Chulalongkorn abolished the traditional Nakorn Bala methods of torture in the judiciary process, which were seen as inhumane and barbaric to Western eyes, and introduced a Western judicial code. His Belgian advisor, Rolin-Jaequemyns, played a great role in the development of modern Siamese law and its judicial system. King Chulalongkorn with a few of his sons at Eton College in the United Kingdom in 1907. Chulalongkorn was the first Siamese king to send royal princes to Europe to be educated. In 19th century Europe, nationalism flourished and there were calls for more liberty. The princes were influenced by the liberal notions of democracy and elections they encountered in republics like France and constitutional monarchies like the United Kingdom. In 1884 (year 103 of the Rattakosin Era), Siamese officials in London and Paris warned Chulalongkorn of threats from European colonialism. They advised that Siam should be reformed like Meiji Japan and that Siam should become a constitutional monarchy. Chulalongkorn demurred, stating that the time was not ripe and that he himself was making reforms. Throughout Chulalongkorn's reign, writers with radical ideas had their works published for the first time. The most notable ones included Thianwan Wannapho, who had been imprisoned for 17 years and from prison produced many works criticizing traditional Siamese society. In 1863, King Norodom of Cambodia was forced to put his country under the French protectorate. The cession of Cambodia was officially formulated in 1867. However, Inner Cambodia (as called in Siam) consisting of Battambang, Siem Reap, and Srisopon, remained a Siamese possession. This was the first of many territorial cessions. In 1887, French Indochina was formed from Vietnam and Cambodia. In 1888, French troops invaded northern Laos to subjugate the Heo insurgents. However, the French troops never left, and the French demanded more Laotian lands. In 1893 Auguste Pavie, the French vice-consul of Luang Prabang, requested the cession of all Laotian lands east of the Mekong River. Siam resented the demand, leading to the Francoโ€“Siamese War of 1893. The French gunboat Le Lutin entered the Chao Phraya and anchored near the French consulate ready to attack. Fighting was observed in Laos. Inconstant and Comete were attacked in Chao Phraya, and the French sent an ultimatum: an indemnity of three million francs, as well as the cession of and withdrawal from Laos. Siam did not accept the ultimatum. French troops then blockaded the Gulf of Siam and occupied Chantaburi and Trat. Chulalongkorn sent Rolin-Jacquemyns to negotiate. The issue was eventually settled with the cession of Laos in 1893, but the French troops in Chantaburi and Trat refused to leave. The cession of vast Laotian lands had a major impact on Chulalongkorn's spirit. Prince Vajirunhis died in 1894. Prince Vajiravudh was created crown prince to replace him. Chulalongkorn realised the importance of maintaining the navy and established the Royal Thai Naval Academy in 1898. Despite Siamese concessions, French armies continued the occupation of Chantaburi and Trat for another 10 years. An agreement was reached in 1903 that French troops would leave Chantaburi but hold the coast land from Trat to Koh Kong. In 1906, the final agreement was reached. Trat was returned to Siam but the French kept Koh Kong and received Inner Cambodia. Seeing the seriousness of foreign affairs, Chulalongkorn visited Europe in 1897. 
He was the first Siamese monarch to do so, and he desired European recognition of Siam as a fully independent power. He appointed his queen, Saovabha, as regent in Siam during his travel to Europe. Siam had been composed of a network of cities according to the Mandala system codified by King Trailokanat in 1454, with local rulers owing tribute to Bangkok. Each city retained a substantial degree of autonomy, as Siam was not a "state" but a "network" of city-states. With the rise of European colonialism, the Western concept of state and territorial division was introduced. It had to define explicitly which lands were "Siamese" and which lands were "foreign". The conflict with the French in 1893 was an example. Sukhaphiban (เธชเธธเธ‚เธฒเธ เธดเธšเธฒเธฅ) sanitary districts were the first sub-autonomous entities established in Thailand. The first such was created in Bangkok, by royal decree of King Chulalongkorn in 1897. During his European tour earlier that year, he had learned about the sanitary districts of England, and wanted to try out this local administrative unit in his capital. With his experiences during the travel to British colonies and the suggestion of Prince Damrong, Chulalongkorn established the hierarchical system of monthons in 1897, composed of province, city, amphoe, tambon, and muban (village) in descending order. (Though an entire monthon, the Eastern Province, Inner Cambodia, was ceded to the French in 1906). Each monthon was overseen by an intendant of the Ministry of Interior. This had a major impact, as it ended the power of all local dynasties. Central authority now spread all over the country through the administration of intendants. For example, the Lanna states in the north (including the Kingdom of Chiangmai, Principalities of Lampang, Lamphun, Nan, and Prae, tributaries to Bangkok) were made into two monthons, neglecting the existence of the Lanna kings. Local rulers did not cede power willingly. Three rebellions sprang up in 1901: the Ngeaw rebellion in Phrae, the 1901โ€“1902 Holy Man's Rebellion[6] in Isan, and the Rebellion of Seven Sultans in the south. All these rebellions were crushed in 1902 with the city rulers stripped of their power and imprisoned.[6] Ayutthaya King Ramathibodi II established a system of corvรฉe in 1518 after which the lives of Siamese commoners and slaves were closely regulated by the government. All Siamese common men (phraiเน„เธžเธฃเนˆ) were subject to the Siamese corvรฉe system. Each man at the time of his majority had to register with a government bureau, department, or leading member of the royalty called krom (เธเธฃเธก) as a Phrai Luang (เน„เธžเธฃเนˆเธซเธฅเธงเธ‡) or under a nobleman's master (Moon Nai or Chao Khun Moon Naiเธกเธนเธฅเธ™เธฒเธข เธซเธฃเธทเธญเน€เธˆเน‰เธฒเธ‚เธธเธ™เธกเธนเธฅเธ™เธฒเธข) as a Phrai Som (เน„เธžเธฃเนˆเธชเธก). Phrai owed service to sovereign or master for three months of the year. Phrai Suay (เน„เธžเธฃเนˆเธชเนˆเธงเธข) were those who could make payment in kine (cattle) in lieu of service. Those conscripted into military service were called Phrai Tahan (เน„เธžเธฃเนˆเธ—เธซเธฒเธฃ). Chulalongkorn was best known for his abolition of Siamese slavery (เธ—เธฒเธช.) He associated the abolition of slavery in the United States with the bloodshed of the American Civil War. Chulalongkorn, to prevent such a bloodbath in Siam, provided several steps towards the abolition of slavery, not an extreme turning point from servitude to total freedom. 
Those who found themselves unable to live on their own sold themselves into slavery by rich noblemen. Likewise, when a debt was defaulted, the borrower would become a slave of the lender. If the debt was redeemed, the slave regained freedom. However, those whose parents were household slaves (เธ—เธฒเธชเนƒเธ™เน€เธฃเธทเธญเธ™เน€เธšเธตเน‰เธข) were bound to be slaves forever because their redemption price was extremely high. Because of economic conditions, people sold themselves into slavery in great numbers and in turn they produced a large number of household slaves. In 1867 they accounted for one-third of Siamese population. In 1874, Chulalongkorn enacted a law that lowered the redemption price of household slaves born in 1867 (his ascension year) and freed all of them when they had reached 21. The newly freed slaves would have time to settle themselves as farmers or merchants so they would not become unemployed. In 1905, the Slave Abolition Act ended Siamese slavery in all forms. The reverse of 100 baht banknotes in circulation since the 2005 centennial depict Chulalongkorn in navy uniform abolishing the slave tradition. The traditional corvรฉe system declined after the Bowring Treaty, which gave rise to a new class of employed labourers not regulated by the government, while many noblemen continued to hold sway over large numbers of Phrai Som. Chulalongkorn needed more effective control of manpower to undo the power of nobility. After the establishment of the monthon system, Chulalongkorn instituted a census to count all men available to the government. The Employment Act of 1900 required that all workers be paid, not forced to work. Chulalongkorn had established a defence ministry in 1887. The ending of the corvรฉe system necessitated the beginning of military conscription, thus the Conscription Act of 1905 in Siam. This was followed in 1907 by the first act providing for invoking martial law, which seven years later was changed to its modern form by his son and successor, King Vajiravudh.[7] In 1873, the Royal Siamese Government Gazette published an announcement on the abolition of prostration. In it, King Chulalongkorn declared, "The practice of prostration in Siam is severely oppressive. The subordinates have been forced to prostrate in order to elevate the dignity of the phu yai. I do not see how the practice of prostration will render any benefit to Siam. The subordinates find the performance of prostration a harsh physical practice. They have to go down on their knees for a long time until their business with the phu yai ends. They will then be allowed to stand up and retreat. This kind of practice is the source of oppression. Therefore, I want to abolish it." The Gazette directed that, "From now on, Siamese are permitted to stand up before the dignitaries. To display an act of respect, the Siamese may take a bow instead. Taking a bow will be regarded as a new form of paying respect."[9] Siamese authorities had exercised substantial control over Malay sultanates since Ayutthaya times. The sultans sought British support as a counterweight to Siamese influence. In 1909, the Anglo-Siamese Treaty of 1909 was agreed. Four sultanates (Kedah, Kelantan, Terengganu and Perlis) were brought under British influence in exchange for Siamese legal rights and a loan to construct railways in southern Siam. The royal Equestrian statue of King Chulalongkorn was finished in 1908 to celebrate the 40th anniversary of the king's reign. It was cast in bronze by a Parisian metallurgist. 
Chulalongkorn visited Europe twice, in 1897 and 1907, the latter visit to seek treatment for his kidney disease. His last accomplishment was the establishment of a plumbing system in 1908. He died on 23 October 1910 of his kidney disease at the Amphorn Sathan Residential Hall in the Dusit Palace, and was succeeded by his son Vajiravudh (King Rama VI). Chulalongkorn University, founded in 1917 as the first university in Thailand, was named in his honour. On the campus stand the statues of Rama V and his son, Rama VI. In 1997 a memorial pavilion was raised in honour of King Chulalongkorn in Ragunda, Sweden, to commemorate his visit to Sweden in 1897, during which he also visited the World's Fair in Brussels.[12] While the Swedish-Norwegian king Oscar II travelled to Norway for a council, Chulalongkorn went up north to study forestry. Beginning in Härnösand and travelling via Sollefteå and Ragunda, he boarded a boat in the small village of Utanede to return through Sundsvall to Stockholm.[13] His passage through Utanede left a mark on the village, as one street was named after the king; the pavilion is erected next to that road. The old 100 baht banknote of Series 14, circulated from 1994 to 2004, bears the statues of Rama V and Rama VI on its reverse. In 2005, the 100 baht banknote was revised to depict King Chulalongkorn in naval uniform and, in the background, abolishing slavery.[14] The 1,000 baht banknote of Series 16, issued in 2015, depicts the King Chulalongkorn monument, Ananda Samakhom Throne Hall, and the abolition of slavery.[15]

References:
- YourDictionary, n.d. (23 November 2011). "Chulalongkorn". Biography. YourDictionary. Archived from the original on 1 December 2011; retrieved 1 December 2011.
- Wyatt, David K. (1982). Thailand: A Short History. New Haven and London: Yale University Press. ISBN 0-300-03054-1.
- Murdoch, John B. (1974). "The 1901–1902 Holy Man's Rebellion" (PDF). Journal of the Siam Society, Siam Heritage Trust, JSS Vol. 62.1 (digital). Retrieved 2 April 2013.
- Pakorn Nilprapunt (2006). "Martial Law, B.E. 2457 (1914) – unofficial translation" (PDF). thailawforum.com, Office of the Council of State. Retrieved 21 May 2014.
Wichaichan was defeated and the power of the Front Palace was greatly diminished, after his death in 1885, the last vestiges of the title were abolished in favour of a Crown Prince. Phra Ong Chao Yodying Prayurayot Bovorn Rachorod Rattana Rachakumarn was born on the 6 April 1838 and it was said that his father gave him an English name in honour of his personal hero, the first President of the United States, George Washington. Therefore, he is referred to as Prince George Washington or Prince George. In May 1851 Prince Yodyingyots father was elevated as Second King Pinklao or the Front Palace by his older brother King Mongkut, Pinklao also received from his brother all the styles, titles and honour of a monarch, despite never having been crowned himself. During his childhood the Prince received an education, including the English language. It was said that he became an extremely skillful engineer, after King Pinklaos death in 1866, King Mongkut decided not to appoint another Front Palace due to the fact that his own son Prince Chulalongkorn was only 12 years old. This meant that the position which was also that of the heir presumptive was left unoccupied, fearing instability, Chao Phraya Si Suriyawongse the Kalahom tried to persuade the King to appoint Prince Yodyingyot to succeed King Pinklao. Si Suriyawongse was a member of the powerful Bunnag family, which had dominated the running of the Siamese government since the reign of King Buddha Loetla Nabhalai. The King refused to appoint Yodyingyot, instead he elevated the Prince to Krom Muen Bowon Wichaichan or Prince Bowon Wichaichan in 1867 and this meant Wichaichan was only made a Prince of the Front Palace but not the actual title of Front Palace. Since 1865 the Prince was also the commander of the Front Palaces naval forces, Wichaichan was a great friend of the British Consul-General to Siam, Thomas George Knox, he was originally recruited by Pinklao to modernize the Front Palaces armed forces. Knox greatly preferred the mature and experienced Wichaichan โ€” who was also the son of one of the most westernized member of the elite to ascend the throne โ€” over the young Chulalongkorn. In August 1868 King Mongkut contracted malaria whilst on an expedition to see an eclipse in Prachuap Khiri Khan province. The young Chulalongkorn was unanimously declared King by a council of high-ranking nobility, princes of the Chakri Dynasty, the council was presided by Si Suriyawongse who was also appointed Regent for the young King. During the meeting one of the Princes nominated Wichaichan as the next Front Palace. The most notable objection of this came from Prince Vorachak Tharanubhab 6. Bangkok โ€“ Bangkok is the capital and most populous city of Thailand. It is known in Thai as Krung Thep Maha Nakhon or simply Krung Thep. The city occupies 1,568.7 square kilometres in the Chao Phraya River delta in Central Thailand, over 14 million people live within the surrounding Bangkok Metropolitan Region, making Bangkok an extreme primate city, significantly dwarfing Thailands other urban centres in terms of importance. Bangkok was at the heart of the modernization of Siamโ€”later renamed Thailandโ€”during the late 19th century, the city grew rapidly during the 1960s through the 1980s and now exerts a significant impact on Thailands politics, economy, education, media and modern society. The Asian investment boom in the 1980s and 1990s led many multinational corporations to locate their headquarters in Bangkok. 
The city is now a regional force in finance and business. It is a hub for transport and health care, and has emerged as a regional centre for the arts, fashion. The city is known for its vibrant street life and cultural landmarks. The historic Grand Palace and Buddhist temples including Wat Arun and Wat Pho stand in contrast with other tourist attractions such as the scenes of Khaosan Road. Bangkok is among the top tourist destinations. It is named the most visited city in MasterCards Global Destination Cities Index, Bangkoks rapid growth amidst little urban planning and regulation has resulted in a haphazard cityscape and inadequate infrastructure systems. The city has turned to public transport in an attempt to solve this major problem. Five rapid transit lines are now in operation, with more systems under construction or planned by the national government and the Bangkok Metropolitan Administration. The history of Bangkok dates at least back to the early 15th century, because of its strategic location near the mouth of the river, the town gradually increased in importance. Bangkok initially served as a customs outpost with forts on both sides of the river, and became the site of a siege in 1688 in which the French were expelled from Siam. After the fall of Ayutthaya to the Burmese Empire in 1767, the newly declared King Taksin established his capital at the town, in 1782, King Phutthayotfa Chulalok succeeded Taksin, moved the capital to the eastern banks Rattanakosin Island, thus founding the Rattanakosin Kingdom. The City Pillar was erected on 21 April, which is regarded as the date of foundation of the present city, Bangkoks economy gradually expanded through busy international trade, first with China, then with Western merchants returning in the early-to-mid 19th century. As the capital, Bangkok was the centre of Siams modernization as it faced pressure from Western powers in the late 19th century, Bangkok became the centre stage for power struggles between the military and political elite as the country abolished absolute monarchy in 1932 7. Thailand โ€“ Thailand, officially the Kingdom of Thailand, formerly known as Siam, is a country at the centre of the Indochinese peninsula in Southeast Asia. With a total area of approximately 513,000 km2, Thailand is the worlds 51st-largest country and it is the 20th-most-populous country in the world, with around 66 million people. The capital and largest city is Bangkok, Thailand is a constitutional monarchy and has switched between parliamentary democracy and military junta for decades, the latest coup being in May 2014 by the National Council for Peace and Order. Its capital and most populous city is Bangkok and its maritime boundaries include Vietnam in the Gulf of Thailand to the southeast, and Indonesia and India on the Andaman Sea to the southwest. The Thai economy is the worlds 20th largest by GDP at PPP and it became a newly industrialised country and a major exporter in the 1990s. Manufacturing, agriculture, and tourism are leading sectors of the economy and it is considered a middle power in the region and around the world. The country has always been called Mueang Thai by its citizens, by outsiders prior to 1949, it was usually known by the exonym Siam. The word Siam has been identified with the Sanskrit ลšyฤma, the names Shan and A-hom seem to be variants of the same word. 
The word ลšyรขma is possibly not its origin, but a learned, another theory is the name derives from Chinese, Ayutthaya emerged as a dominant centre in the late fourteenth century. The Chinese called this region Xian, which the Portuguese converted into Siam, the signature of King Mongkut reads SPPM Mongkut King of the Siamese, giving the name Siam official status until 24 June 1939 when it was changed to Thailand. Thailand was renamed Siam from 1945 to 11 May 1949, after which it reverted to Thailand. According to George Cล“dรจs, the word Thai means free man in the Thai language, ratcha Anachak Thai means kingdom of Thailand or kingdom of Thai. Etymologically, its components are, ratcha, -ana- -chak, the Thai National Anthem, written by Luang Saranupraphan during the extremely patriotic 1930s, refers to the Thai nation as, prathet Thai. The first line of the anthem is, prathet thai ruam lueat nuea chat chuea thai, Thailand is the unity of Thai flesh. There is evidence of habitation in Thailand that has been dated at 40,000 years before the present. Similar to other regions in Southeast Asia, Thailand was heavily influenced by the culture and religions of India, Thailand in its earliest days was under the rule of the Khmer Empire, which had strong Hindu roots, and the influence among Thais remains even today. Voretzsch believes that Buddhism must have been flowing into Siam from India in the time of the Indian Emperor Ashoka of the Maurya Empire, later Thailand was influenced by the south Indian Pallava dynasty and north Indian Gupta Empire. The Menam Basin was originally populated by the Mons, and the location of Dvaravati in the 7th century, the History of the Yuan mentions an embassy from the kingdom of Sukhothai in 1282 8. Dusit Palace โ€“ Dusit Palace is a compound of royal residences in Bangkok, Thailand. Constructed over an area north of Rattanakosin Island between 1897 and 1901 by King Chulalongkorn. The palace covers an area of over 64,749 square metres and is dotted between gardens and lawns with 13 different royal residences. Dusit Palace is surrounded by Ratchwithi Road in the north, Sri Ayutthaya Road in the south, Rachasima Road in the west and U-Thong Nai Road on the east. Since 1782 and the foundation of Bangkok as the city of the Kingdom of Siam. The palace became the point of the city as well as a seat of the royal government. These changes were brought about as a means to modernize the palace as well as accommodate its growing population, as a result, the palace, particularly the Inner Court, became extremely overcrowded. The Grand Palace also became hot during the summer months. Epidemics once started, were liable to spread easily within its crowded compound, the king, who enjoyed taking long walks for exercise and pleasure, often felt unwell after prolonged stays inside the Grand Palace. Consequently, he took frequent trips into the country to take relief of this condition, Chulalongkorn got the idea of having a royal residence with spacious gardens on the outskirts of the capital from European monarchs during his trip to Europe in 1897. When he returned to Bangkok he began to build a new compound within walking distance of the Grand Palace. He began by acquiring several connect farmlands and orchards between Padung Krung Kasem and Samsen canals from funds of his Privy Purse, the king decided to name this area Suan Dusit meaning Celestial Garden. The first building within this area was a single story structure, used by the king, his consorts. 
In 1890 plans for a permanent set of residences are drawn up and constructions were begun under the supervision of Prince Narisara Nuvadtivongs, apart from the Prince all other members of the team were Europeans. Apart from taking his long walks, Chulalongkorn also indulge in a new, even before he took permanent residence at Dusit Palace, he would take his entourage cycling from the Grand Palace to the garden and back. With bicycling trips often taking up all day and this pathway connecting the Grand Palace to Dusit Palace eventually became the Rajadamnern Avenue. The palace expanded Bangkok northwards, while the avenue accommodated further growth, the garden became the setting of residential houses belonging to the kings consorts and children. Chulalongkorn lived at the palace until his death at the Amphorn Sathan Residential Hall on 23 October 1910 of kidney disease 9. Buddhism โ€“ Buddhism is a religion and dharma that encompasses a variety of traditions, beliefs and spiritual practices largely based on teachings attributed to the Buddha. Buddhism originated in India sometime between the 6th and 4th centuries BCE, from where it spread through much of Asia, two major extant branches of Buddhism are generally recognized by scholars, Theravada and Mahayana. Buddhism is the worlds fourth-largest religion, with over 500 million followers or 7% of the global population, Buddhist schools vary on the exact nature of the path to liberation, the importance and canonicity of various teachings and scriptures, and especially their respective practices. In Theravada the ultimate goal is the attainment of the state of Nirvana, achieved by practicing the Noble Eightfold Path, thus escaping what is seen as a cycle of suffering. Theravada has a following in Sri Lanka and Southeast Asia. Mahayana, which includes the traditions of Pure Land, Zen, Nichiren Buddhism, Shingon, rather than Nirvana, Mahayana instead aspires to Buddhahood via the bodhisattva path, a state wherein one remains in the cycle of rebirth to help other beings reach awakening. Vajrayana, a body of teachings attributed to Indian siddhas, may be viewed as a branch or merely a part of Mahayana. Tibetan Buddhism, which preserves the Vajrayana teachings of eighth century India, is practiced in regions surrounding the Himalayas, Tibetan Buddhism aspires to Buddhahood or rainbow body. Buddhism is an Indian religion attributed to the teachings of Buddha, the details of Buddhas life are mentioned in many early Buddhist texts but are inconsistent, his social background and life details are difficult to prove, the precise dates uncertain. Some hagiographic legends state that his father was a king named Suddhodana, his mother queen Maya, and he was born in Lumbini gardens. Some of the stories about Buddha, his life, his teachings, Buddha was moved by the innate suffering of humanity. He meditated on this alone for a period of time, in various ways including asceticism, on the nature of suffering. He famously sat in meditation under a Ficus religiosa tree now called the Bodhi Tree in the town of Bodh Gaya in Gangetic plains region of South Asia. He reached enlightenment, discovering what Buddhists call the Middle Way, as an enlightened being, he attracted followers and founded a Sangha. Now, as the Buddha, he spent the rest of his teaching the Dharma he had discovered. Dukkha is a concept of Buddhism and part of its Four Noble Truths doctrine. 
It can be translated as incapable of satisfying, the unsatisfactory nature, the Four Truths express the basic orientation of Buddhism, we crave and cling to impermanent states and things, which is dukkha, incapable of satisfying and painful. This keeps us caught in saแนƒsฤra, the cycle of repeated rebirth, dukkha 10. British Empire โ€“ The British Empire comprised the dominions, colonies, protectorates, mandates and other territories ruled or administered by the United Kingdom and its predecessor states. It originated with the possessions and trading posts established by England between the late 16th and early 18th centuries. At its height, it was the largest empire in history and, for over a century, was the foremost global power. By 1913, the British Empire held sway over 412 million people, 23% of the population at the time. As a result, its political, legal, linguistic and cultural legacy is widespread, during the Age of Discovery in the 15th and 16th centuries, Portugal and Spain pioneered European exploration of the globe, and in the process established large overseas empires. Envious of the great wealth these empires generated, England, France, the independence of the Thirteen Colonies in North America in 1783 after the American War of Independence caused Britain to lose some of its oldest and most populous colonies. British attention soon turned towards Asia, Africa, and the Pacific, after the defeat of France in the Revolutionary and Napoleonic Wars, Britain emerged as the principal naval and imperial power of the 19th century. In the early 19th century, the Industrial Revolution began to transform Britain, the British Empire expanded to include India, large parts of Africa and many other territories throughout the world. In Britain, political attitudes favoured free trade and laissez-faire policies, during the 19th Century, Britains population increased at a dramatic rate, accompanied by rapid urbanisation, which caused significant social and economic stresses. To seek new markets and sources of raw materials, the Conservative Party under Benjamin Disraeli launched a period of imperialist expansion in Egypt, South Africa, Canada, Australia, and New Zealand became self-governing dominions. By the start of the 20th century, Germany and the United States had begun to challenge Britains economic lead, subsequent military and economic tensions between Britain and Germany were major causes of the First World War, during which Britain relied heavily upon its empire. The conflict placed enormous strain on the military, financial and manpower resources of Britain, although the British Empire achieved its largest territorial extent immediately after World War I, Britain was no longer the worlds pre-eminent industrial or military power. In the Second World War, Britains colonies in Southeast Asia were occupied by Imperial Japan, despite the final victory of Britain and its allies, the damage to British prestige helped to accelerate the decline of the empire. India, Britains most valuable and populous possession, achieved independence as part of a larger movement in which Britain granted independence to most territories of the empire. The transfer of Hong Kong to China in 1997 marked for many the end of the British Empire, fourteen overseas territories remain under British sovereignty. 
After independence, many former British colonies joined the Commonwealth of Nations, the United Kingdom is now one of 16 Commonwealth nations, a grouping known informally as the Commonwealth realms, that share a monarch, Queen Elizabeth II. The foundations of the British Empire were laid when England and Scotland were separate kingdoms. In 1496, King Henry VII of England, following the successes of Spain and Portugal in overseas exploration, Cabot led another voyage to the Americas the following year but nothing was ever heard of his ships again 11. French Indochina โ€“ French Indochina, officially known as the Indochinese Union after 1887 and the Indochinese Federation after 1947, was a grouping of French colonial territories in Southeast Asia. A grouping of the three Vietnamese regions of Tonkin, Annam, and Cochinchina with Cambodia was formed in 1887, Laos was added in 1893 and the leased Chinese territory of Guangzhouwan in 1898. The capital was moved from Saigon to Hanoi in 1902 and again to Da Lat in 1939, in 1945 it was moved back to Hanoi. After the Fall of France during World War II, the colony was administered by the Vichy government and was under Japanese occupation until March 1945, beginning in May 1941, the Viet Minh, a communist army led by Hแป“ Chรญ Minh, began a revolt against the Japanese. In August 1945 they declared Vietnamese independence and extended the war, known as the First Indochina War, in Saigon, the anti-Communist State of Vietnam, led by former Emperor Bแบฃo ฤแบกi, was granted independence in 1949. On 9 November 1953, the Kingdom of Laos and the Kingdom of Cambodia became independent, following the Geneva Accord of 1954, the French evacuated Vietnam and French Indochina came to an end. Franceโ€“Vietnam relations started as early as the 17th century with the mission of the Jesuit missionary Alexandre de Rhodes, at this time, Vietnam was only just beginning to occupy the Mekong Delta, former territory of the Indianised kingdom of Champa which they had defeated in 1471. European involvement in Vietnam was confined to trade during the 18th century, pigneau died in Vietnam but his troops fought on until 1802 in the French assistance to Nguyแป…n รnh. France was heavily involved in Vietnam in the 19th century, protecting the work of the Paris Foreign Missions Society in the country was presented as a justification. In 1858, the period of unification under the Nguyแป…n dynasty ended with a successful attack on Da Nang by French Admiral Charles Rigault de Genouilly under the orders of Napoleon III. Diplomat Charles de Montignys mission having failed, Genouillys mission was to stop attempts to expel Catholic missionaries and his orders were to stop the persecution of missionaries and assure the unimpeded propagation of the faith. In September 1858, fourteen French gunships,3,000 men and 300 Filipino troops provided by the Spanish attacked the port of Tourane, causing significant damage, after a few months, Rigault had to leave the city due to supply issues and illnesses. Sailing south, de Genouilly then captured the poorly defended city of Saigon on 18 February 1859, on 13 April 1862, the Vietnamese government was forced to cede the three provinces of Biรชn Hรฒa, Gia ฤแป‹nh and ฤแป‹nh Tฦฐแปng to France. French policy four years saw a reversal, with the French continuing to accumulate territory. 
In 1862, France obtained concessions from Emperor Tแปฑ ฤแปฉc, ceding three treaty ports in Annam and Tonkin, and all of Cochinchina, the latter being formally declared a French territory in 1864. In 1867 the provinces of Chรขu ฤแป‘c, Hร  Tiรชn and Vฤฉnh Long were added to French-controlled territory, in 1863, the Cambodian king Norodom had requested the establishment of a French protectorate over his country. France obtained control over northern Vietnam following its victory over China in the Sino-French War, French Indochina was formed on 17 October 1887 from Annam, Tonkin, Cochinchina and the Kingdom of Cambodia, Laos was added after the Franco-Siamese War in 1893. The federation lasted until 21 July 1954, French troops landed in Vietnam in 1858 and by the mid-1880s they had established a firm grip over the northern region The Bangkok city proper is highlighted in this satellite image of the lower Chao Phraya delta. Notice the built-up urban area along the Chao Phraya River, which extends northward and southward into Nonthaburi and Samut Prakan Provinces. Bangkok's major canals are shown in this map detailing the original course of the river and its shortcut canals.
2016 Nova Scotia Scotties Tournament of Hearts

The 2016 Nova Scotia Scotties Tournament of Hearts, the provincial women's curling championship of Nova Scotia, was held from January 19 to 24 at the Mayflower Curling Club in Halifax. The winning Jill Brothers team represented Nova Scotia at the 2016 Scotties Tournament of Hearts in Grande Prairie, Alberta.

Teams

Teams are as follows:

Round robin standings

Results

January 19, Draw 1: Jones 10-2 McEvoy; Brothers 10-3 Pinkney; Arsenault 9-5 Gamble; Dwyer 7-4 Breen
January 20, Draw 2: Arsenault 5-4 Pinkney; Jones 8-6 Dwyer; Breen 10-7 McEvoy; Brothers 8-4 Gamble
January 20, Draw 3: Breen 9-4 Gamble; Arsenault 11-10 McEvoy; Brothers 6-5 Dwyer; Jones 6-3 Pinkney
January 21, Draw 4: Arsenault 7-5 Brothers; Jones 8-7 Breen; Pinkney 10-8 Gamble; Dwyer 7-5 McEvoy
January 21, Draw 5: Gamble 8-2 Jones; Brothers 7-6 McEvoy; Arsenault 7-5 Dwyer; Breen 9-8 Pinkney
January 22, Draw 6: Breen 8-7 Brothers; Gamble 7-4 Dwyer; McEvoy 9-3 Pinkney; Arsenault 9-3 Jones
January 22, Draw 7: Dwyer 7-3 Pinkney; Breen 10-5 Arsenault; Brothers 8-6 Jones; Gamble 6-5 McEvoy

Playoffs

Semifinal: Saturday, January 22, 2:00pm
Final: Sunday, January 23, 10:00am

References

Category:2016 Scotties Tournament of Hearts
Category:Curling in Nova Scotia
Category:Sports competitions in Halifax, Nova Scotia
Category:2016 in Nova Scotia
Category:January 2016 sports events in Canada
Wednesday, June 12, 2013 He also says he has "faith in Hong Kongโ€™s rule of law". So he is still in Hong Kong. There are many commentators in the US and outside who say Hong Kong is an odd choice for anyone seeking liberty and freedom, and of all things, rule of law. There are some conservative talk show hosts in the US who tell their not-so-knowledgeable listeners that Hong Kong is the most repressive regime in the world. Edward Snowden says he wants to ask the people of Hong Kong to decide his fate after choosing the city because of his faith in its rule of law. The 29-year-old former CIA employee behind what might be the biggest intelligence leak in US history revealed his identity to the world in Hong Kong on Sunday. His decision to use a city under Chinese sovereignty as his haven has been widely questioned โ€“ including by some rights activists in Hong Kong. Snowden said last night that he had no doubts about his choice of Hong Kong. โ€œPeople who think I made a mistake in picking Hong Kong as a location misunderstand my intentions. I am not here to hide from justice; I am here to reveal criminality,โ€ Snowden said in an exclusive interview with the South China Morning Post. โ€œI have had many opportunities to flee HK, but I would rather stay and fight the United States government in the courts, because I have faith in Hong Kongโ€™s rule of law,โ€ he added. Snowden says he has committed no crimes in Hong Kong and has โ€œbeen given no reason to doubt [Hong Kongโ€™s legal] systemโ€. โ€œMy intention is to ask the courts and people of Hong Kong to decide my fate,โ€ he said. Snowden, a former employee of US government contractor Booz Allen Hamilton who worked with the National Security Agency, boarded a flight to Hong Kong on May 20 and has remained in the city ever since. His astonishing confession on Sunday sparked a media frenzy in Hong Kong, with journalists from around the world trying to track him down. It has also caused a flurry of debate in the city over whether he should stay and whether Beijing will seek to interfere in a likely extradition case. The Hong Kong government has so far refused to comment on Snowdenโ€™s case. While many Hong Kong lawmakers, legal experts, activisits and members of the public have called on the cityโ€™s courts to protect Snowdenโ€™s rights, others such as Beijing loyalist lawmaker and former security chief Regina Ip Lau Suk-yee said he should leave. Local activists plan to take to the streets on Saturday in support of Snowden. Groups including the Civil Human Rights Front and international human rights groups will march from Chater Gardens in Central to the US consulate on Garden Road, starting at 3pm. The march is being organised by In-media, a website supporting freelance journalists. โ€œWe call on Hong Kong to respect international legal standards and procedures relating to the protection of Snowden; we condemn the US government for violating our rights and privacy; and we call on the US not to prosecute Snowden,โ€ the group said in a statement. (Full article at the link) As far as my personal experience goes, economic freedom exists in Hong Kong. People in Hong Kong have stood up for what they see as injustice and violation of human rights on numerous occasions. People I've encountered (mostly business people) are urbane, confident, and open. Too bad Snowden is too young to understand that only certain types of criminality are really criminal. Stealing tens, thousands or millions of $$$ is criminal. Stealing Billions is not criminal. 
Killing a few people with a gun or a pressure cooker bomb; very criminal - killing hundreds of thousands by overthrowing a government and invading a country - not criminal.
Polluting your home by cooking up some meth; criminal, criminal, criminal - polluting your country and the Pacific Ocean with radiation; not criminal.
Dumping used motor oil down the drain; criminal - Allowing an open oil well to spew for several years under water; not criminal
Taking video of your local cops beating up some poor schmuck at a protest; criminal - keeping records of all digital communications all the time; not criminal
Publishing whistleblower information about government corruption on the internet: criminal - Completely trashing the intent of the US Constitution and first denying it, then when your lies are exposed saying it's in the name of "security"; not criminal...
Q: How can I turn this DataFrame into a DataFrame with average score by index value?

I have the DataFrame below, with wine variety, reviewer and score. I'd like to make a new DataFrame that uses variety as the column labels and lists the average score by reviewer and variety. Simply stated, I'd like to output a DataFrame with variety across the top and reviewer as the index, holding the average score for each reviewer and variety. I've tried several things, and I can't get it to work. The actual data will have a lot more reviewers and a lot more varieties, but I wanted to provide a simplified version. Any help would be appreciated. Thank you in advance.

import pandas as pd

df = pd.DataFrame({"Variety": ['Cabernet', 'Pinot', 'Cabernet', 'Pinot', 'Pinot', 'Cabernet', 'Pinot', 'Cabernet'],
                   "Reviewer": ['Bill', 'Sally', 'Bill', 'Sally', 'Bill', 'Sally', 'Bill', 'Sally'],
                   "Score": [90, 85, 87, 93, 80, 81, 93, 88]})

A: This is more of a pivot problem:

pd.pivot_table(df, index='Reviewer', columns='Variety', values='Score', aggfunc='mean')

Out[29]:
Variety   Cabernet      Pinot
Reviewer
Bill     87.000000  87.666667
Sally    84.666667  93.000000
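If you prefer groupby semantics, an equivalent approach is to group on both columns, take the mean, and unstack the variety level into columns. This is only a minimal sketch that assumes nothing beyond the sample frame in the question; it should yield the same Reviewer-by-Variety table of average scores as pivot_table:

import pandas as pd

df = pd.DataFrame({"Variety": ['Cabernet', 'Pinot', 'Cabernet', 'Pinot', 'Pinot', 'Cabernet', 'Pinot', 'Cabernet'],
                   "Reviewer": ['Bill', 'Sally', 'Bill', 'Sally', 'Bill', 'Sally', 'Bill', 'Sally'],
                   "Score": [90, 85, 87, 93, 80, 81, 93, 88]})

# Group on reviewer and variety, average the scores, then pivot the
# variety level of the resulting MultiIndex into columns.
avg_scores = (df.groupby(['Reviewer', 'Variety'])['Score']
                .mean()
                .unstack('Variety'))

print(avg_scores)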
THE MARKETING 100: SNICKERS: SANTA CRUZ HUGHES

Santa Cruz Hughes knew she was onto something when she got a call from her daughter at school. The incident she related: Kids in her class were picking up the phrase, "Not going anywhere for a while? Have a Snickers." That was music to the ears of Ms. Hughes, 37, the senior marketing manager at M&M/Mars behind that successful campaign. Working on the brand for four years, Ms. Hughes has helped shape the "Hungry? Why wait" strategy that has received creative accolades while substantially increasing consumption among its target group of young males. Ms. Hughes says the goal of the campaign was to move forward Snickers' longtime theme of hunger satisfaction, initially executed by Bates Worldwide, New York. When the agency was replaced in 1995 by BBDO Worldwide, New York, the agency and client decided to keep the product's positioning, she says, but give the "Snickers satisfies you" approach a less serious tact. The goal was also to target the brand more closely to males age 18 to 22. "We moved from a preoccupying hunger to satisfying hunger in a completely enjoyable way. From that the campaign was born," says Ms. Hughes, who adds that the result has been "a big selling idea"-an idea, according to Information Resources Inc., that's sold $277 million for the 52 weeks ending March 2, up 2.3% from the previous year. The first spot began in September '95 and featured Buffalo Bills coach Marv Levy-who had taken the team to the Super Bowl four times without a win-threatening the team couldn't leave the room without a commitment to win. From there followed other executions including the "Team prayer" spot that good-naturedly took political correctness to new highs and the commercial that first introduced the bumbling, taciturn Clarence character, who accidentally freezes himself to a hockey rink and misspells the name Chiefs on a football field. Media also shifted to reach the target. The mix was changed to 60% sports programming, and Ms. Hughes says :15s were created out of the :60s to spread the budget-measured at $42.2 million by Competitive Media Reporting in '96-over expensive sports buys. The plan this year is to take the campaign to print and radio. Although in her nine years at Mars she's worked on several brands, including M&Ms, Skittles and Starburst, Ms. Hughes says Snickers might be the most enjoyable. "It's everyone's favorite. And as a Latino female, I'm drawn to it because it's a brand that embraces all people," she says.
15 graduate in first class of advanced manufacturing training program TURNERS FALLS โ€” In his seven years working as a machinist for the Athol-based L.S. Starrett Co., 48-year-old Orange resident Robert Donnelly has witnessed a manufacturing industry thatโ€™s abandoning manual machines and relying more on computer programming to power production. Now, after completing the inaugural 11-week, 220-hour Middle Skills Manufacturing Initiative training program โ€” a partnership between local manufacturing businesses, the Franklin County Technical School, Greenfield Community College and the Franklin Hampshire Regional Employment Board โ€” Donnelly feels confident he can take on new duties at work and is even thinking of starting his own contracting business on the side. Fifteen men from across western Massachusetts graduated from the program Tuesday and will enter a local workforce that desperately needs trained labor. Manufacturers have said they have had to turn away 90 percent of potential work because they lacked qualified employees. โ€œI canโ€™t tell you what I feel like knowing that I have a group of people here that are skilled, theyโ€™ve been primed and theyโ€™re ready to hire,โ€ said Cody Sisson, CEO of Sisson Engineering Corp. in Northfield. โ€œThat hasnโ€™t happened to me in decades.โ€ The programโ€™s first graduation was held at the Tech School โ€” nearly one year after Greenfield business owner Steven Capshaw announced plans to raise $500,000 to buy new advanced manufacturing machines for that school. His efforts โ€œtriggered a chain reaction,โ€ said Sisson, that ultimately brought over $800,000 to Franklin County from public and private sources. The Tech Schoolโ€™s machine shop was completely refurbished and Greenfield Community College began to develop an advanced manufacturing curriculum targeted at unemployed and underemployed workers in the local community. A total of 93 applicants applied to be in the programโ€™s first cohort, and the regional employment board whittled that group down to 15. A state grant is paying for three more classes of that size to go through the training for free, with the next session starting in February. As pleased as organizers were on Tuesday, they also acknowledged their work has just begun. Five of the graduates have already secured jobs but the regional employment board will be looking to help the other 10 find positions. Organizers want to see women well represented in future classes. And the project-based curriculum, which was taught by 11 individuals, could be smoothed out in future runs, said educators, who said they were constantly adjusting pieces as the weeks progressed. Local state representatives Denise Andrews, D-Orange, and Paul Mark, D-Peru, were among the attendees at Tuesdayโ€™s graduation. โ€œYou make the difference in peopleโ€™s lives. Never forget that,โ€ said Andrews. GREENFIELD โ€” Organizers of an advanced machining program designed to jump-start the areaโ€™s manufacturing industry are searching for their next group of students. The 12-week, 288-hour training program โ€ฆ 0
1. Field of the Invention The present invention relates to decrease in the resistance of an upper transparent electrode of a self-emitting organic electroluminescent device. 2. Description of the Related Art A self-emitting organic electroluminescent element (hereinafter referred to as an โ€œorganic light emitting elementโ€) is expected as an illumination device for a thin-screen display device and a liquid crystal display device. An organic light emitting display device includes a plurality of organic light emitting elements constituting pixels on a substrate and a driving layer for driving the organic light emitting elements. The organic light emitting element has a structure in which a plurality of organic layers are put between a lower electrode and an upper transparent electrode. The plurality of organic layers include at least a transport layer for transporting electron holes, a transport layer for transporting electrons and a light emitting layer for re-combining the electron holes and the electrons. The holes and electrons injected from electrodes are re-combined in the light emitting layer to emit light by application of a voltage between both of the electrodes. In the general structure, the transport layer is formed over the entire display panel region and used in common as a transport layer for a plurality of organic light emitting elements. With the constitution described above, only the light emitting layer requires patterning comparable with that of a pixel size. For the patterning comparable with that of the pixel size, a precision mask is generally used. Since the precision mask involves a problem of lowering the mass productivity due to mask exchange, etc., it is preferred to decrease the number of its use. In a usual organic light emitting display device, the upper transparent electrode of the organic light emitting element is used as a common electrode. Accordingly, when a plurality of organic light emitting elements emit light, the entire current for the display panel flows in the upper transparent electrode. In the case of using a highly-resistive transparent conductive film for the upper transparent electrode, unevenness in applied voltages occurs between a pixel comprising an organic light emitting element at the periphery of the display panel near the power source and a pixel comprising an organic light emitting element at the central portion of the display panel due to the wiring resistance caused of the upper transparent electrode, which, as a result, generates unevenness of luminance. JP-A-2004-207217 discloses the constitution of an organic light emitting display device using an upper transparent electrode. Auxiliary wiring is formed in a layer level with that of a lower electrode of an organic light emitting element. The auxiliary wiring and the upper transparent electrode are connected in a pixel region to decrease unevenness of the wiring resistance in every pixel. For electrically connecting the auxiliary wiring and the upper transparent electrode, it is necessary to remove all the organic layers constituting the organic light emitting element at contact hole portions. Accordingly, for the entire organic layer, a transport layer as well as the light emitting layer has also to be formed by using a precision mask, which results in a problem of lowering the mass productivity.
Q: Initially hide elements when using angular.js In the example below I do not want the values "A" og "B" to be visible until JavaScript has been loaded and $scope.displayA has been set by the return of some ajax call. <span ng-show="displayA">A</span> <span ng-hide="displayA">B</span> What is the best way to achieve this? A: Just use ng-cloak on them. Link to docs: http://docs.angularjs.org/api/ng.directive:ngCloak
โ€˜Saksiscopyeโ€™ originated from a hybridization of proprietary hybrid Osteospermum breeding line โ€˜203005โ€™ (unpatented) and commercial hybrid Osteospermum line โ€˜Sunny Amandaโ€™ (U.S. Plant Pat. No. 16,522) in Aabyhoej, Denmark. The male parent, โ€˜Sunny Amandaโ€™, has a pale-yellow flower color with terracotta-brown at the flower petal apices, medium flower size and brown disc florets. The female parent, โ€˜203005โ€™ has a bright-yellow flower color, medium flower size and a compact and less branching plant growth habit. In spring 2003, the two Osteospermum lines were crossed and 552 seeds were obtained. The seeds were sown and 442 plants were grown in pots for evaluation. Out of 442 F1 lines, plant number 206 was selected for its pale-yellow flowers with copper tips, medium flower size and a compact plant growth habit. In spring 2004, plant number 206 was vegetatively propagated with cuttings and re-evaluated in an open field and a greenhouse. Plant number 206 was given the code number โ€˜204066โ€™. In spring 2005, plants were evaluated again in pots and in an open field. The selection โ€˜204066โ€™ was named โ€˜Saksiscopyeโ€™ and found to retain its distinctive characteristics through successive asexual propagations.
<?xml version="1.0" encoding="UTF-8" standalone="yes"?> <wodefinitions> <wo class="ERD2WConfirmPageTemplate" wocomponentcontent="false"> </wo> </wodefinitions>
[Postoperative astigmatism. 3.5 mm scleral tunnel incision and implantation of a HEMA posterior chamber lens vs 7 mm scleral step incision and implantation of a PMMA posterior chamber lens]. In a prospective study, two groups of 35 patients each were compared following phacoemulsification and posterior lens implantation. Both groups were followed up to evaluate the evolution of the postoperative astigmatism during a minimum of 6 months. In group A, a HEMA posterior chamber lens was implanted through a 3.5 mm scleral tunnel incision. In group B, a PMMA posterior chamber lens was implanted through a 7 mm scleral step incision. The data were analyzed for the whole observation time with reference to preoperative, early postoperative, absolute and induced astigmatism. Different subgroups were formed. Vector analysis was performed in both groups in order to determine surgically induced axial changes, e.g., the intensity and direction of the power working on the cornea. The results were compared. Group A showed lesser early postoperative astigmatism than group B; however, group A also returned to the preoperative values more quickly. Both groups exhibited a shift towards against-the-rule astigmatism.
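The vector analysis referred to above is typically done in double-angle space, where each corneal cylinder is treated as a vector whose angle is twice the cylinder axis, and the surgically induced astigmatism is the vector difference between the postoperative and preoperative cylinders. The abstract does not state which formulation the authors used, so the following Python sketch shows only one common double-angle approach; the function name, variable names and the example values are illustrative and are not taken from the study.

import math

def surgically_induced_astigmatism(pre_mag, pre_axis_deg, post_mag, post_axis_deg):
    # Represent each cylinder as a vector on the doubled axis angle, so that
    # axes 180 degrees apart map to the same point.
    pre_x = pre_mag * math.cos(math.radians(2 * pre_axis_deg))
    pre_y = pre_mag * math.sin(math.radians(2 * pre_axis_deg))
    post_x = post_mag * math.cos(math.radians(2 * post_axis_deg))
    post_y = post_mag * math.sin(math.radians(2 * post_axis_deg))

    # The induced change is the vector difference post - pre.
    dx, dy = post_x - pre_x, post_y - pre_y
    magnitude = math.hypot(dx, dy)

    # Halve the double angle to recover a cylinder axis in [0, 180) degrees.
    axis = (math.degrees(math.atan2(dy, dx)) / 2.0) % 180.0
    return magnitude, axis

# Hypothetical example: 1.5 D of corneal cylinder at 90 degrees preoperatively,
# 1.0 D at 100 degrees postoperatively.
print(surgically_induced_astigmatism(1.5, 90, 1.0, 100))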
#!/usr/bin/env node

// This script runs some simple Web3 calls.
// Useful for validating the published version in different OS environments.

const Web3 = require('web3');
const util = require('util');

const log = console.log;

async function delay(secs=0){
  return new Promise(resolve => setTimeout(() => resolve(), secs * 1000))
}

// A workaround for how flaky the infura connection can be...
// Tries to fetch data 10x w/ 1 sec delays. Exits on first success.
async function getBlockWithRetry(web3){
  let i = 0;
  let block;

  while(true){
    await delay(1);

    try {
      block = await web3.eth.getBlock('latest');
      break;
    } catch(err){
      i++;
      if (i === 10){
        throw new Error('Failed to connect to Infura over websockets after 10 tries');
      }
    }
  }
  return block;
}

async function main(){
  let web3;
  let block;

  // Providers
  log();
  log('>>>>>>');
  log('HTTP:MAINNET getBlock');
  log('>>>>>>');

  // Http
  web3 = new Web3('https://mainnet.infura.io/v3/1d13168ffb894ad2827f2152620bd27c');
  block = await getBlockWithRetry(web3);

  log(util.inspect(block));

  log();
  log('>>>>>>');
  log('WS:MAINNET getBlock');
  log('>>>>>>');

  // WebSockets
  web3 = new Web3('wss://mainnet.infura.io/ws/v3/1d13168ffb894ad2827f2152620bd27c');
  block = await getBlockWithRetry(web3);
  web3.currentProvider.disconnect();

  log(util.inspect(block));

  // Accounts
  web3 = new Web3();

  log();
  log('>>>>>>');
  log('eth.accounts.createAccount');
  log('>>>>>>');

  const account = web3.eth.accounts.create();
  log(util.inspect(account));

  log();
  log('>>>>>>');
  log('eth.accounts.hashMessage');
  log('>>>>>>');

  const hash = web3.eth.accounts.hashMessage('Hello World');
  log(util.inspect(hash));
}

main()
  .then(() => process.exit(0))
  .catch(err => {
    log(err);
    process.exit(1)
  });
The workersโ€™ movement as a whole is dissatisfied with this situation and is attempting to improve it by exerting influence on determining working conditions and by demanding government intervention. According to its perception, the current authorities of government and society are not only not providing the movement with the assistance it has called for but are also erecting barriers to its demands for equal rights in its economic struggles. For this reason, many wage laborers are hostile towards these authorities and their responsible institutions. These workers have distanced themselves as a class from all the other social groups in the state, are now engaged in class struggle, and are convinced that an improvement in their situation can be brought about only through the actions of the workersโ€™ movement itself. A deep rift has opened between this segment of wage laborers and the rest of the population of our Fatherland, making mutual understanding almost impossible; and only recently over the past decade have a few bridges been built, now and then, upon which a reconciliation may be possible. There can be no doubt that the inner peace of our Fatherland has been shattered and endangered in the most serious manner. In light of this very challenging situation, we, who have come together in the Society for Social Reform have taken on the following dual task: First, to work in a careful yet consistent and energetic fashion to improve the inadequate situation of the wage laborers, to eliminate misery from the lives of the working classes, to progressively increase the number of workers whose lives are not fully consumed by the struggle for existence, and thus: Second, to eliminate the discontent among wage laborers by striving to eliminate the causes of this discontent, and, consequently, to give the laboring workforce the conviction that it does not stand completely alone, opposed to all other social classes, in its struggle for a better existence, and, in short, to restore the inner peace of our Fatherland. We oppose all use of force and coercion against the workersโ€™ movement as long as this movement does not violate current criminal law. Furthermore, we want to see this movement protected by the rule of law, and we strive for this in the firm conviction, bolstered by the experience we have gained here in Germany, even in light of the so-called Anti-Socialist Law, that while it may be possible to achieve temporary successes and to address superficial symptoms of our social problems with the use of force and coercion, through these means we can never change attitudes. [ . . . ] Original German text reprinted in Ernst Schraepler, ed., Quellen zur Geschichte der sozialen Frage in Deutschland. 1871 bis zur Gegenwart [Sources on the History of the Social Question in Germany. 1871 to the Present]. 3rd edition. Gรถttingen, 1996, pp. 54-57.
1. Field of the Invention

The present invention relates to an improvement for bottom-up window shades or blinds for use in residential or commercial applications as described herein. The shade mechanisms disclosed herein are ideally suited to applications involving nonrectangular window shapes such as triangular frames, arches, arcuate sections, and other partial or full elliptical forms, and they include an improved method of eliminating small gaps between the shade and the lintel or between the shade and the side frame. Additionally, the present invention improves the appearance of pull-up shades by including a valance that hides the shade material and mechanism and also holds the shade material when the shade is down.

2. Discussion of the Background

The improvements set out herein address and help remove gaps that can occur at the top or sides of pull-up shades as described herein and in U.S. Pat. No. 6,478,071. Because of gravity, the weight of the shade material can cause the top of the shade, which meets the lintel, or the sides of the shade to be pulled down, leaving a gap between the top and the lintel surface or between the side of the shade and the side frame. This can occur in arched, square, triangular or trapezoidal openings. The unique design of the headrail and support rods is used to avoid these problems. On bottom-up shades, a further improvement is the use of a valance as part of the shade, which hides and protects the bottom draw-cord mechanism as well as the shade material when the shade is down. Further, the valance acts as a holder for the shade and headrail mechanism when the shade is down, and it simplifies installation and the working mechanism.
Spatial distribution and temporal variability of arsenic in irrigated rice fields in Bangladesh. 2. Paddy soil. Arsenic-rich groundwater from shallow tube wells is widely used for the irrigation of boro rice in Bangladesh and West Bengal. In the long term this may lead to the accumulation of As in paddy soils and potentially have adverse effects on rice yield and quality. In the companion article in this issue, we have shown that As input into paddy fields with irrigation water is laterally heterogeneous. To assess the potential for As accumulation in soil, we investigated the lateral and vertical distribution of As in rice field soils near Sreenagar (Munshiganj, Bangladesh) and its changes over a 1 year cycle of irrigation and monsoon flooding. At the study site, 18 paddy fields are irrigated with water from a shallow tube well containing 397 +/- 7 microg L(-1) As. The analysis of soil samples collected before irrigation in December 2004 showed that soil As concentrations in paddy fields did not depend on the length of the irrigation channel between well and field inlet. Within individual fields, however, soil As contents decreased with increasing distance to the water inlet, leading to highly variable topsoil As contents (11-35 mg kg(-1), 0-10 cm). Soil As contents after irrigation (May 2005) showed that most As input occurred close to the water inlet and that most As was retained in the top few centimeters of soil. After monsoon flooding (December 2005), topsoil As contents were again close to levels measured before irrigation. Thus, As input during irrigation was at least partly counteracted by As mobilization during monsoon flooding. However, the persisting lateral As distribution suggests net arsenic accumulation over the past 15 years. More pronounced As accumulation may occur in regions with several rice crops per year, less intense monsoon flooding, or different irrigation schemes. The high lateral and vertical heterogeneity of soil As contents must be taken into account in future studies related to As accumulation in paddy soils and potential As transfer into rice.
// Made with milk and cookies by Nicholas Ruggeri and Gianmarco Simone
// https://github.com/nicholasruggeri/cookies-enabler
// https://github.com/nicholasruggeri
// https://github.com/gsimone
window.COOKIES_ENABLER = window.COOKIES_ENABLER || (function () {
  'use strict';

  var defaults = {
      scriptClass: 'ce-script',
      iframeClass: 'ce-iframe',
      acceptClass: 'ce-accept',
      disableClass: 'ce-disable',
      dismissClass: 'ce-dismiss',
      bannerClass: 'ce-banner',
      bannerHTML: document.getElementById('ce-banner-html') !== null ?
        document.getElementById('ce-banner-html').innerHTML :
        '<p>This website uses cookies. ' +
          '<a href="#" class="ce-accept">' +
            'Enable Cookies' +
          '</a>' +
        '</p>',
      eventScroll: false,
      scrollOffset: 200,
      clickOutside: false,
      cookieName: 'ce-cookie',
      cookieDuration: '365',
      wildcardDomain: false,
      iframesPlaceholder: true,
      iframesPlaceholderHTML: document.getElementById('ce-iframePlaceholder-html') !== null ?
        document.getElementById('ce-iframePlaceholder-html').innerHTML :
        '<p>To view this content you need to ' +
          '<a href="#" class="ce-accept">Enable Cookies</a>' +
        '</p>',
      iframesPlaceholderClass: 'ce-iframe-placeholder',
      onEnable: '',
      onDismiss: '',
      onDisable: ''
    },
    opts, domElmts, start_Y;

  // shallow-merge any number of source objects into the first argument
  function _extend() {
    var i, key;
    for (i = 1; i < arguments.length; i++) {
      for (key in arguments[i]) {
        if (arguments[i].hasOwnProperty(key)) {
          arguments[0][key] = arguments[i][key];
        }
      }
    }
    return arguments[0];
  }

  function _debounce(func, wait, immediate) {
    var timeout;
    return function() {
      var context = this, args = arguments;
      var later = function() {
        timeout = null;
        if (!immediate) func.apply(context, args);
      };
      var callNow = immediate && !timeout;
      clearTimeout(timeout);
      timeout = setTimeout(later, wait);
      if (callNow) func.apply(context, args);
    };
  }

  function _getClosestParentWithClass(el, parentClass) {
    do {
      if (_hasClass(el, parentClass)) {
        // the class was found on this element, return it
        return el;
      }
    } while (el = el.parentNode);
    return null;
  }

  function _hasClass(el, cls) {
    return (' ' + el.className + ' ').indexOf(' ' + cls + ' ') > -1;
  }

  var handleScroll = function() {
    if (Math.abs(window.pageYOffset - start_Y) > opts.scrollOffset) enableCookies();
  };

  var bindUI = function() {
    domElmts = {
      accept: document.getElementsByClassName(opts.acceptClass),
      disable: document.getElementsByClassName(opts.disableClass),
      banner: document.getElementsByClassName(opts.bannerClass),
      dismiss: document.getElementsByClassName(opts.dismissClass)
    };

    var i,
      accept = domElmts.accept, accept_l = accept.length,
      disable = domElmts.disable, disable_l = disable.length,
      dismiss = domElmts.dismiss, dismiss_l = dismiss.length;

    if (opts.eventScroll) {
      window.addEventListener('load', function() {
        start_Y = window.pageYOffset;
        window.addEventListener('scroll', handleScroll);
      });
    }

    if (opts.clickOutside) {
      document.addEventListener("click", function(e) {
        var element = e.target;
        // ignore the click if it is inside of any of the elements created by this plugin
        if (_getClosestParentWithClass(element, opts.iframesPlaceholderClass) ||
            _getClosestParentWithClass(element, opts.disableClass) ||
            _getClosestParentWithClass(element, opts.bannerClass) ||
            _getClosestParentWithClass(element, opts.dismissClass)) {
          return false;
        }
        enableCookies();
      });
    }

    for (i = 0; i < accept_l; i++) {
      accept[i].addEventListener("click", function(ev) {
        ev.preventDefault();
        enableCookies(ev);
      });
    }
    for (i = 0; i < disable_l; i++) {
      disable[i].addEventListener("click", function(ev) {
        ev.preventDefault();
        disableCookies(ev);
      });
    }
    for (i = 0; i < dismiss_l; i++) {
      dismiss[i].addEventListener("click", function(ev) {
        ev.preventDefault();
        banner.dismiss();
      });
    }
  };

  var init = function(options) {
    opts = _extend({}, defaults, options);

    if (cookie.get() == 'Y') {
      if (typeof opts.onEnable === "function") opts.onEnable();
      scripts.get();
      iframes.get();
    } else if (cookie.get() == 'N') {
      if (typeof opts.onDisable === "function") opts.onDisable();
      iframes.hide();
      bindUI();
    } else {
      banner.create();
      iframes.hide();
      bindUI();
    }
  };

  var enableCookies = _debounce(function(event) {
    if (typeof event != "undefined" && event.type === 'click') {
      event.preventDefault();
    }
    if (cookie.get() != 'Y') {
      cookie.set();
      scripts.get();
      iframes.get();
      iframes.removePlaceholders();
      banner.dismiss();
      window.removeEventListener('scroll', handleScroll);
      if (typeof opts.onEnable === "function") opts.onEnable();
    }
  }, 250, false);

  var disableCookies = function(event) {
    if (typeof event != "undefined" && event.type === 'click') {
      event.preventDefault();
    }
    if (cookie.get() != 'N') {
      cookie.set('N');
      banner.dismiss();
      window.removeEventListener('scroll', handleScroll);
      if (typeof opts.onDisable === "function") opts.onDisable();
    }
  };

  var banner = (function() {
    function create() {
      var el = '<div class="' + opts.bannerClass + '">' + opts.bannerHTML + '</div>';
      document.body.insertAdjacentHTML('beforeend', el);
    }

    function dismiss() {
      domElmts.banner[0].style.display = 'none';
      if (typeof opts.onDismiss === "function") opts.onDismiss();
    }

    return {
      create: create,
      dismiss: dismiss
    };
  })();

  var cookie = (function() {
    function set(val) {
      var value = typeof val !== "undefined" ? val : "Y",
        date, expires, host, domainParts, domain;

      if (opts.cookieDuration) {
        date = new Date();
        date.setTime(date.getTime() + (opts.cookieDuration * 24 * 60 * 60 * 1000));
        expires = "; expires=" + date.toGMTString();
      } else {
        expires = "";
      }

      host = location.hostname;
      // A single-part hostname means localhost, or the user does not want to enable cookies for all subdomains
      if (host.split('.').length === 1 || !opts.wildcardDomain) {
        document.cookie = opts.cookieName + "=" + value + expires + "; path=/";
      } else {
        // We start by trying to set a cookie from a subdomain, e.g. foo.bar.com -> .bar.com
        // If that does not work we try to set it for the top domain instead
        domainParts = host.split('.');
        domainParts.shift();
        domain = '.' + domainParts.join('.');
        document.cookie = opts.cookieName + "=" + value + expires + "; path=/; domain=" + domain;

        // Check if we managed to set the cookie; if not, we were on a top-level domain
        if (cookie.get() == null) {
          domain = '.' + host;
          document.cookie = opts.cookieName + "=" + value + expires + "; path=/; domain=" + domain;
        }
      }
    }

    function get() {
      var cookies = document.cookie.split(";"),
        l = cookies.length,
        i, x, y;
      for (i = 0; i < l; i++) {
        x = cookies[i].substr(0, cookies[i].indexOf("="));
        y = cookies[i].substr(cookies[i].indexOf("=") + 1);
        x = x.replace(/^\s+|\s+$/g, "");
        if (x == opts.cookieName) {
          return unescape(y);
        }
      }
    }

    return {
      set: set,
      get: get
    };
  })();

  var iframes = (function() {
    function makePlaceholder(iframe) {
      var placeholderElement = document.createElement('div');
      placeholderElement.className = opts.iframesPlaceholderClass;
      placeholderElement.innerHTML = opts.iframesPlaceholderHTML;
      iframe.parentNode.insertBefore(placeholderElement, iframe);
    }

    function removePlaceholders() {
      var iframePlaceholders = document.getElementsByClassName(opts.iframesPlaceholderClass),
        n = iframePlaceholders.length,
        i;
      for (i = n - 1; i >= 0; i--) {
        iframePlaceholders[i].parentNode.removeChild(iframePlaceholders[i]);
      }
    }

    function hide() {
      var iframes = document.getElementsByClassName(opts.iframeClass),
        n = iframes.length,
        iframe, i;
      for (i = 0; i < n; i++) {
        iframe = iframes[i];
        iframe.style.display = 'none';
        if (opts.iframesPlaceholder) makePlaceholder(iframe);
      }
    }

    function get() {
      var iframes = document.getElementsByClassName(opts.iframeClass),
        n = iframes.length,
        src, iframe, i;
      for (i = 0; i < n; i++) {
        iframe = iframes[i];
        src = iframe.attributes['data-ce-src'].value;
        iframe.src = src;
        iframe.style.display = 'block';
      }
    }

    return {
      hide: hide,
      get: get,
      removePlaceholders: removePlaceholders
    };
  })();

  var scripts = (function() {
    function get() {
      var scripts = document.getElementsByClassName(opts.scriptClass),
        n = scripts.length,
        documentFragment = document.createDocumentFragment(),
        i, y, s, attrib;

      for (i = 0; i < n; i++) {
        if (scripts[i].hasAttribute('data-ce-src')) {
          if (typeof postscribe !== "undefined") {
            postscribe(scripts[i].parentNode, '<script src="' + scripts[i].getAttribute("data-ce-src") + '"></script>');
          }
        } else {
          s = document.createElement('script');
          s.type = 'text/javascript';
          for (y = 0; y < scripts[i].attributes.length; y++) {
            attrib = scripts[i].attributes[y];
            if (attrib.specified) {
              if ((attrib.name != 'type') && (attrib.name != 'class')) {
                s.setAttribute(attrib.name, attrib.value);
              }
            }
          }
          s.innerHTML = scripts[i].innerHTML;
          documentFragment.appendChild(s);
        }
      }
      document.body.appendChild(documentFragment);
    }

    return {
      get: get
    };
  })();

  return {
    init: init,
    enableCookies: enableCookies,
    dismissBanner: banner.dismiss
  };
}());
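For reference, a minimal initialisation sketch follows. The option keys and class names mirror the defaults object above; the example markup in the comments, the option values and the callback bodies are illustrative assumptions rather than part of the library.

// Minimal usage sketch: assumes the page marks up blocked resources with the
// default classes, for example
//   <script class="ce-script" data-ce-src="https://example.com/analytics.js"></script>
//   <iframe class="ce-iframe" data-ce-src="https://example.com/embed"></iframe>
window.COOKIES_ENABLER.init({
  eventScroll: true,     // scrolling past scrollOffset (200px by default) is treated as consent
  cookieDuration: '30',  // days before the consent cookie expires
  onEnable: function () {
    console.log('Cookies accepted: blocked scripts and iframes are now loaded');
  },
  onDisable: function () {
    console.log('Cookies refused');
  }
});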
419 F.2d 1282; 73 L.R.R.M. (BNA) 2199 NATIONAL LABOR RELATIONS BOARD, Petitioner, v. TEAMSTERS, CHAUFFEURS, HELPERS AND TAXICAB DRIVERS, LOCAL UNION 327, Affiliated with International Brotherhood of Teamsters, Chauffeurs, Warehousemen and Helpers of America, Respondent. No. 19483. United States Court of Appeals Sixth Circuit. Jan. 15, 1970. James P. Hendricks, N.L.R.B., Washington, D.C., Arnold Ordman, General Counsel, Dominick L. Manoli, Associate General Counsel, Marcel Mallet-Prevost, Asst. General Counsel, Herman M. Levy, Leon M. Kestenbaum, Attys., N.L.R.B., Washington, D.C., on brief, for petitioner. Hugh C. Howser, Nashville, Tenn., for respondent. Before PHILLIPS, Chief Judge, and EDWARDS and PECK, Circuit Judges. PER CURIAM. 1 In this case the National Labor Relations Board seeks this court's enforcement of its order, 173 N.L.R.B. No. 220, entered against the Teamsters, Chauffeurs, Helpers and Taxicab Drivers, Local 327. 2 After a Teamsters strike at Hartmann Luggage Company which, according to the record, was accompanied by many acts of picket line violence, both the Trial Examiner and the Board found violations of 8(b)(1)(A) of the National Labor Relations Act, 29 U.S.C. 158(b)(1)(A) (1964), and ordered the union to cease and desist. 3 The Board also, however, broadened the cease and desist order so that it applied not only to Hartmann but to any employer within the whole jurisdiction of Local 327. Citing one of its own cases (Teamsters Local 327 (Greer Stop Nut Co.), 160 N.L.R.B. 1919 (1966)), the Board found a 'proclivity' for violence. 4 The union, resisting the Board's application for enforcement, attacks only that portion of the order which extends beyond applicability to Hartmann. The order provides: 5 'Upon the foregoing findings of fact and conclusions of law, and the entire record, it is hereby recommended that Respondent Teamsters, Chauffeurs, Helpers and Taxicab Drivers, Local Union 327, affiliated with International Brotherhood of Teamsters, Chauffeurs, Warehousemen and Helpers of America, its officers, agents, representatives, successors, and assigns, shall: 6 '1. Cease and desist from: 7 '(a) Restraining or coercing the employees of Hartmann Luggage Company or the employees of any other employer within its jurisdictional territory in the exercise of the rights guaranteed them by Section 7 of the Act, by mass picketing blocking ingress and egress of employees at the Employer's premises, injuring employees entering or leaving those premises, damaging automobiles and trucks of employees and others entering or leaving those premises, spreading tacks on the Company driveways, or threatening employees with physical injury because they cross or wish to cross the picket lines. 8 '(b) In any like or similar manner restraining or coercing employees in the exercise of the rights guaranteed by Section 7 of the Act.' 9 For a court, which seriously contemplates enforcing those orders which it enters, the proposed order offers both conceptual and practical problems of great moment. 10 First, this order is addressed to mortal human beings, yet it has no limitation in time. Second, the order is both too broad and too vague in relation to persons expected to obey it (presumably on pain of contempt proceedings). Third, the order does not define the jurisdiction of Local 327 and thus it provides no means of defining the people for whom protection is sought. 
In all of these respects we believe the italicized portions of the order violate the provisions of Rule 65(d) of the Federal Rules of Civil Procedure. 11 This court could, of course, simply modify the order to eliminate the lack of specificity. Communications Workers of America v. NLRB, 362 U.S. 479, 80 S.Ct. 838, 4 L.Ed.2d 896 (1960); NLRB v. Express Publishing Co., 312 U.S. 426, 61 S.Ct. 693, 85 L.Ed. 930 (1941). 12 Contrary, however, to the union's contentions, we believe that the Board could take judicial notice of its own case involving the same local union, Teamsters Local 327 (Greer Stop Nut Co.), 160 NLRB 1919 (1966), and that there was substantial evidence on the whole record to justify the Board's finding that the union had demonstrated a proclivity to engage in violent conduct, similar to that found in the instant case. Under these circumstances, the Board may be justified in issuing a broad order, which is not limited to union activity at the Hartmann Luggage Company. NLRB v. Locals 138, 138A, 138B, 377 F.2d 528 (2d Cir. 1967). But, of course, such an order would have to be consonant with the specificity referred to above as to time, geographic area and persons to be affected. 13 The italicized portions of the order of the Board quoted above are deleted, and the order is enforced as modified. The case is remanded to the Board for further proceedings in accordance with this opinion.
The ban is a lie. Despite the UK government declaring a โ€œcomplete ban on evictionsโ€ due to the ongoing COVID-19 pandemic, in the last 24 hours an autonomous homeless shelter in Brighton and an occupied space in Peckham have been illegally evicted by people claiming to be bailiffs, allegedly with the full support and cooperation of the Sussex and Metropolitan police officers in attendance. The governmentโ€™s no evictions claim is really just the abdication of due process and the scant judicial protections formerly afforded to tenants, squatters and the under-class in general. Get ready. The bailiffs and their bosses are taking the law into their own hands, with the police in full support. Freedom received the following statement from one of the occupants of the Peckham squat: โ€œOn Thursday 2 April 2020 at approximately 17:00, police arrived at 1 Rye Lane in Peckham after being called by a group of 2 men who had arrived at approximately 16:03 and who had claimed to be the owners of this non-residential property without proof. The two men had been antagonising and threatening our group of 5 women and 1 man for occupying the building. We had been occupying number 1 Rye Lane since Wednesday 25 March 2020 and the police were aware of this, as around 12 police officers had arrived on Friday, 27 March, in 4 cars. They had gained entry to the property by taking out metal fencing and a wooden door. The police had gained entry without a warrant and had left after seeing our legal warning that highlighted that we were occupying the building and also highlighted the squatting laws. The legal warning stipulates that the government will not seek to criminalise squatting in non-residential buildings, such as disused factories, warehouses or pubs; that we were occupying the property, and at all times there is at least one person in occupation; that any entry or attempt to enter into these premises without our permission is therefore a criminal offence, as any one of us who is in physical possession is opposed to such entry without our permission; that if anyone attempts to enter by violence or by threatening violence can be prosecuted and may receive a sentence of up to six months imprisonment and/or a fine of up to ยฃ5,000; that if you want to get us out you will have to issue a claim for possession in the County Court or in the High Court. We believe that the entry by the police on Friday 27 March while we were occupying the building would have been caught by some of the numerous CCTV cameras in the area. When the police had illegally gained entry into the building they found us inside, at which point we explained that we were occupying it under squatting laws and they established this by seeing our sleeping materials. They left us at that point as they seemed to acknowledge the law. A group of us remained in constant occupation of this building, and the rest of us began to move our living and sleeping materials in. On the afternoon of Thursday, 2nd April, some men who claimed to be owners of the property turned up and they began threatening us with physical harm. This caused a lot of distress, particularly for the women in the group. The men who were claiming to be the owners also began threatening us saying they were going to call immigration officers which we believe was motivated by racism having seen that there was a woman of colour in our group who actually has a legal right to live in the UK and other Europeans who also have a legal right to be in the UK. 
This caused a lot of distress to the group and threats to people of colour and immigrants like this constitute a hate crime under hate speech laws in England & Wales, namely section 4A of the Criminal Justice and Public Order Act 1994. We made the police aware of what had happened when they arrived, both the physical threats and hate speech. The police officers didnโ€™t take it seriously and brushed it off. One of the people claiming to be the owner had illegally gained entry into the building while we were occupying it. The people claiming to be the owners had also ripped off our Section 6 notice and torn it up before the police came. After the police arrived and in full sight of the officers, the men who were claiming to be owners began to break the door down despite both the owners and the police being made aware of the squatting laws and that what they were doing was illegal. They broke the door down and gained entry and, with the help of the police, they illegally evicted us from the building. The police threatened us with arrest if we did not leave the building and we have video footage of Constable Ryan Taney threatening us and Constable Thorpe telling one of us to stop filming him making threats telling him to step back during the illegal eviction. Constable Taney is seen talking to one of the men claiming to be the owner who they had handcuffed for illegally gaining entry, the police later let him go without arrest and without charging him. It is evident that the police participated in the illegal forced entry of an occupied building and assisted in the illegal eviction of squatters. One of the officers is heard saying he is a police officer so he can gain lawful entry anyway, a statement which goes against squatting laws for non-residential buildings. When a non-residential building thatโ€™s not in use is occupied by squatters, the legal way to regain possession of the building by anyone claiming to be the owner is to take the occupiers to civil court. The people claiming to be the owners need to use the appropriate legal route which requires that they file a claim for possession in court and serve the correct papers to the occupiers of the building with a court date that they can all attend with a judge present to verify all of the information and relevant documents needed. The documents required in the court of law to claim possession of a property include title deeds for the property. The judge then scrutinises and verifies the legitimacy of the documents before granting possession. If the documents are seen to be missing or to not be legitimate, the judge then asks the claimant to provide the correct documentation at a later court date and possession of the building by the claimant is not granted without the correct proof of ownership through these legal documents. This is in place to protect both the people occupying the building from bullying, harassment, abuse or being illegally evicted like happened to us and also to protect property owners, as anyone can claim to be the owner of a building but without appropriate proof, itโ€™s just a claim! The police claimed that they were not aware that we were squatters even though we had told them and even though they had attended the building nearly a week before and verified that we were occupying the building. We managed to record some clips of what happened and in one clip you can see the officer claiming to not have been aware it was a squat even though we told him when they arrived. 
Despite having a legal warning before the police arrived and before the owners had ripped it off the door and torn it up, there is actually no legal requirement to have such warning up if you are already occupying a building under squatting laws and we did tell the owner and police that we were squatting the building which they all ignored. It seems the lack of the legal warning after it had been torn up by the owners was being used as an excuse to illegally evict us. This is an abuse of power and completely against the law. Furthermore, it is shocking considering that special measures have been in place since 27 March 2020 that stop all evictions, including squatters, due to the major public health crisis we are all in during the current COVID-19 pandemic thatโ€™s disrupted everyoneโ€™s lives and is causing a lot of anxieties and deaths in the UK and all around the world. Evicting people unlawfully should never happen and doing so to put people on the streets at a time like this is inhumane and dangerous. The officers could also have put others as risk along with putting themselves at risk of spreading or catching the virus. We will pursue this injustice for as long as we can and as far as we can to ensure that such an abuse of power and unlawful behaviour from the police never happens to anyone else again! We would also like to highlight that police willingly ignoring public health measures in these unprecedented times of crisis is never OK no matter who you are along with not taking physical threats and hate speech seriously. We are very disappointed with the behaviour of the police officers and we hope you can put in place serious disciplinary measures to ensure they and the rest of the police force do not abuse their power and that they respect the law, which includes squatting laws, to protect the homeless and other marginalised people.โ€
I think that one of the aspects I enjoy most about the profession of social work is that of conflict resolution. We, as human beings, waste so much time and energy feeling angry and resentful. I gently validate people's feelings of anger and then challenge them to consider how it serves them to stay angry. What are the needs that are being met by staying angry? Choosing to forgive another person or even oneself does not mean that the offending behavior was okay, but rather, that the person is moving on. Some of my most powerful lessons have come during painful times. They are the ones I do not forget. Coupling this with the belief that people are doing the best they can with what they know at any given point in time can help one to adopt an attitude of forgiveness. If instead of "conflict" resolution, we elect to use the word "clarification," this moves parties away from a win-lose, right-wrong, or good-bad stance. Instead, it becomes all about "the fit" as well as creating effective and efficient communication. People can be too quick to draw conclusions, and it is often prudent to seek additional information for clarification. Regarding the notion of compromising, I stress the importance of compromising where one can remain true to oneself, and at other times agreeing to disagree. It then becomes not about right or wrong, but rather, what is right or wrong for the individual. I like using elements of CBT, DBT, and Mindfulness. I essentially use one or all of these concepts with each of my clients. I especially like that part of DBT that effectively defuses anger. I explain to my clients that in order to utilize this approach, it requires, in the moment:
- to give up the need to be right
- to let go of the need to have the last word
- to let go of the belief that life should somehow be fair
In the moment, the only goal is to defuse the situation. If the individual is important enough, one might ask to revisit what just happened as soon as everyone is calm. I have even taught children to use this approach with a parent who has anger management problems. And finally, I love this opening phrase: "Help me to understand..." This can be attached to the beginning of any "why-question" and lead to far less defensiveness. Oprah said, "People show you who they are - believe them" (YouTube). And yet, we set out to change them because that is what we do.
Perspectives Therapy Services is a multi-site mental and relationship health practice with clinic locations in Brighton, Lansing, Highland and Fenton, Michigan. Our clinical teams include experienced, compassionate and creative therapists with backgrounds in psychology, marriage and family therapy, professional counseling, and social work. Additionally, we offer psychiatric care in the form of evaluations and medication management. Our practice prides itself on providing extraordinary care. We offer a customized matching process to prospective clients whereby an intake specialist carefully assesses which of our providers would be the very best fit for the incoming client. We treat a wide range of concerns that impact a person's mental health, including depression, anxiety, relationship problems, grief, low self-worth, life transitions, and childhood and adolescent difficulties.
Occasionally we receive tips, photos and transcripts from people who want to share something they feel is relevant. We received this transcript of Huma & Hillary supposedly recorded from her campaign plane shortly after receiving notice that the FBI had reopened the criminal investigation of her illegal home server. We don't know if it's real or not, but we offer it up here as a "make up your own mind" piece. Extreme profanity is laced throughout the transcript and has been sanitized when possible with ***. Nick: "...and that's what was sent to the hill." Hillary: "Nick just get the f**k out of here, now" (Pause, paper shuffling & unidentifiable noises) Huma: "This isn't the place." Hillary: "Just shut up. You think I didn't know? You and that k*df**ker sold secrets behind my back. Didn't I pay you enough? You think you're indispensable after all these years? You've screwed everything. Everything! That f**ker Comey will come after all of us. How could you be so f***ing stupid! I thought I taught you better!" Huma: "It should have been erased, real-" Hillary: "How much did you sell and for how much? And who the h*ll have you two been f***ing with behind my back?" Huma: "It was Anthony. He forced me. It wasn't working and you know..." Hillary: "How f***ing much?" Huma: "50, maybe a little more. It was most of the same players." Hillary: "You stupid, selfish b***h, do you realize what you've done? Did you put it in the same ones?" Huma: "No, none of it is in ones you use. We can make this go away. He won't let you down. We'll never be charged and it will be forgotten." Hillary: "You f***ing moron, the election is all that matters! ALL! Even all these media s***eaters won't be able to cover it all up if Comey reports it. What is on that f***ing laptop? What!" Huma: "It was just routine stuff, mostly-" Hillary: "Mostly? No one pays 50 for s**t! What was it?" Huma: "Milit-" Hillary: "Oh f**k, oh bull f**k! What were you thinking? You stupid k***f***er!" Huma: "I'm sorry, I'm so sorry, we can say the Russians planted it all! A big plot." (Sound of a slap and unidentifiable noises then sobbing. Long pause) Hillary: "Get your s**t together. I want the quick response team to prepare different statements and have them for me in 30 minutes. If I lose because of this..." Huma: "9/11 did that not me. I told you not to go. Wikileaks too. This will be nothing" Hillary: "F**k you. Now you're just (inaudible) you could never understand. Keep those d***f**ks back there in the dark, kill the wi-fi until we land. And keep out of my sight - I need to think." (unidentifiable noises) Update 10/30/2016: We've received a lot of feedback from people who all seem to agree this is most likely a hoax. However, it has just been reported that 650,000 emails were on the shared Weiner/Abedin laptop, and while we agree this transcript seems a bit too "pat" and is likely a hoax, it does raise the very real question of why on earth they would keep such an enormous number of government documents on their laptop. When you factor in Hillary going to Geneva to protect a Swiss bank and the identities of its account holders, or the fact that the Clinton logs show the Clinton Foundation had a *weekly* visit from a Swiss banker while she was Secretary of State, all this is, in political-speak, "quite troubling" to say the least. 
It will be interesting to see where this investigation leads.
Colleen Barry, MSN, April 20, 2015 Rescue crews searched Monday for survivors and bodies from what could be the Mediterraneanโ€™s deadliest migrant tragedy ever as hundreds more migrants took to the sea undeterred and EU foreign ministers gathered for an emergency meeting to address the crisis. If reports of at least 700 and as many as 900 dead are confirmed, the weekend shipwreck near the Libyan coast would bring to well over 1,000 the number of migrants who died or went missing during the perilous Mediterranean crossing in the last week. More than 400 are feared dead in another sinking. More than 10,000 others were rescued. โ€œThis tragedy didnโ€™t have to happen,โ€ Sarah Tyler, a spokeswoman for Save the Children, said of Sundayโ€™s incident. โ€œThat is almost as many as died in the Titanic, and 31 times the number who died when the Costa Concordia sank.โ€ Libya is a transit point for migrants fleeing conflict, repression and poverty in countries such as Eritrea, Niger, Syria, Iraq and Somalia, with increased instability there and improving weather prompting more people to attempt the dangerous crossing. One survivor of the weekend sinking, identified as a 32-year-old Bangladeshi, has put the number of people on board the smugglersโ€™ boat at as many as 950. Authorities previously had quoted him as saying 700 migrants were on board. {snip} The survivor said some 300 migrants were locked in a hold by the smugglers, and would have been trapped inside when the boat sank, according to Prosecutor Giovanni Salvi, who is conducting the investigation. {snip} Italian coast guard Capt. Gian Luigi Bove told reporters in Malta his vessel was about 80 kilometers (50 miles) away from the latest shipwreck when the distress call came in early Sunday. Bove said the Italian vessel arrived at the shipwreck at around 2 a.m. Sunday and found two survivors along with bodies floating in the sea. The survivors were taken on board. He said there was no sign of the smugglerโ€™s boat, an indication that it may have already sunk. Bove said the survivors were from sub-Saharan Africa and language issues were impeding the investigation. Italian Premier Matteo Renzi told private Italian radio RTL he would ask his EU counterparts on Monday to confront instability in Libya more decisively than in the past, but he ruled out ground troops. {snip} Mired in economic crisis and a facing a surging anti-foreigner electorate in many nations, there is little appetite across European governments to take in more poor migrants, however desperate their plight. Czech Foreign Minister Lubomir Zaoralek said sending more ships to rescue migrants could actually make the problem worse. โ€œIf we make the work of traffickers easier and accept refugees that have gone overboard, this will make it an even better business for them,โ€ he said on Czech television. โ€œWe need to find a way to prevent people from setting out on such ships.โ€
Q: add backslash before specific character
We have a file with many "%" characters, and we want to add a backslash before every "%", i.e. write it as \%.
Example, before:
%TY %Tb %Td %TH:%TM %P
after:
\%TY \%Tb \%Td \%TH:\%TM \%P
How to do it with sed?
A: Pretty straightforward:
$ echo '%TY %Tb %Td %TH:%TM %P' | sed 's/%/\\%/g'
\%TY \%Tb \%Td \%TH:\%TM \%P
but you can accomplish the same with bash parameter substitution:
$ str='%TY %Tb %Td %TH:%TM %P'; backslashed=${str//%/\\%}; echo "$backslashed"
\%TY \%Tb \%Td \%TH:\%TM \%P
Schneiderlin has extensive Premier League experience and he could make an instant impact for Moyes. The Everton midfielder is likely to benefit from playing alongside Noble and Kouyate as well. He has struggled to form a partnership with Gueye so far and a move away from Everton could help him kick-start his career. The Hammers are looking to secure a top ten finish this season and Schneiderlin would certainly help them. The former Saints midfielder will add some much-needed steel to their midfield and improve them defensively. Furthermore, his arrival will allow the creative players to operate with more freedom. It will be interesting to see whether the Toffees sanction a sale at this stage of the season. Allardyce will struggle to find a replacement with just 2 days left in the transfer window and therefore they must wait until summer.
Faculty Spotlight - Dr. A. Douglas Eury
Dr. A. Douglas Eury was named dean of the School of Education at Gardner-Webb University in January 2011. Dr. Eury holds his bachelor's degree from Appalachian State University, his master's degree in Education from the University of North Carolina at Charlotte, and his Education Specialist and Doctorate in Education degrees from Appalachian State. Before becoming dean, Dr. Eury served 11 years as a professor of Education at Gardner-Webb, before which he worked for years as an educator and administrator in North Carolina public schools. In addition to the duties of the dean, Dr. Eury serves as director of the Center for Innovative Leadership Development and Director of Doctoral Studies in Educational Administration, as well as Curriculum & Instruction. He, along with Dr. Jane King and John D. Balls, has co-authored Rethink, Rebuild, Rebound: A Framework for Shared Responsibility and Accountability in Education, as well as several journal articles.
Q: How to stop my search function from checking cases (Uppercase/lowercase) - Javascript/ReactJS
I am working on a search function in ReactJS. My search function is working fine, but it is checking cases (uppercase/lowercase). This is my Demo Fiddle. You can check the search functionality. I want to get rid of the case checking (uppercase/lowercase). How should I change the code in the easiest manner? This is my search function:
getMatchedList(searchText) {
  console.log('inside getMatchedList');
  if (searchText === '') {
    this.setState({ data: this.state.dataDefault });
  }
  if (!TypeChecker.isEmpty(searchText)) { // TypeChecker is a dependency
    console.log('inside if block');
    const res = this.state.dataDefault.filter(item => {
      return item.firstName.includes(searchText) || item.lastName.includes(searchText);
    });
    console.log('res' + JSON.stringify(res));
    this.setState({ data: res });
  }
}
If I remove the TypeChecker dependency, how should I change the code? I basically want my search function to be case insensitive.
A: You can use toLowerCase:
if (!TypeChecker.isEmpty(searchText)) {
  console.log("inside if block");
  const res = this.state.dataDefault.filter(item =>
    item.firstName.toLowerCase().includes(searchText.toLowerCase()) ||
    item.lastName.toLowerCase().includes(searchText.toLowerCase())
  );
  console.log(res);
  this.setState({ data: res });
}
https://codesandbox.io/s/5yj23zmp34
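For a slightly more general take on the same idea, the sketch below lowercases the query once and checks any number of fields. The firstName/lastName field names come from the question; the helper name and everything else here is illustrative rather than part of the original answer.
// Case-insensitive match helper: normalise the query once, then test each field.
// Assumes items look like { firstName: '...', lastName: '...' } as in the question.
function matchItems(items, searchText) {
  const query = searchText.toLowerCase();
  return items.filter(item =>
    [item.firstName, item.lastName].some(field =>
      (field || '').toLowerCase().includes(query)
    )
  );
}
// Inside the component this could replace the inline filter, e.g.:
// this.setState({ data: matchItems(this.state.dataDefault, searchText) });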
Sydney Missionary and Bible College Sydney Missionary and Bible College (SMBC) is an independent, evangelical interdenominational Bible college in Sydney, New South Wales, Australia. The college was founded in 1916 by C. Benson Barnett. Its goal is to train people for ministry in Australia and abroad. There are two campuses, one in Croydon and another in Croydon Park (opened in 2010). SMBC opened several new student housing units on its main campus in 2009. SMBC is Bible-centred: the academic curriculum is dominated by the Bible and a key focus of the college is to train men and women for gospel ministry in Australia and overseas in a missionary context. David Cook, a former principal of SMBC, retired at the end of 2011. Stuart Coulton was appointed as the principal of SMBC in 2012. The second campus is named โ€œRoberts-Daleโ€ Campus after two martyred graduates, Fred Roberts (Amazon region of Brazil, 1935) and Stan Dale (West Papua, 1968). Graduates of SMBC serve in cross cultural gospel ministry, both overseas and throughout Australia, or in Christian pastoral church ministry in various denominations, as Scripture teachers, in tertiary student ministry and educational or hospital chaplaincy, or in their local churches or chosen trades or professions. SMBC hosts various public talks to give a biblical perspective on current issues such as social media, disability, atheism and miracles. History Sydney Missionary and Bible College is the oldest interdenominational Bible college in Australia. It was established in April 1916 by C Benson Barnett, a returned missionary, with a vision for training men and women for Christian service in Australia and overseas. The College began in a leased building at 43 Badminton Road Croydon. Students from Adelaide, Victoria, Queensland, New South Wales and New Zealand were in the early student intake. Barnett wrote in 1916, "not only are we a Bible College, but we are a Missionary College ... we are taught by Christ, that we are to pray to the Lord of the Harvest that He will send forth labourers into His harvest field". In 1926 the College purchased the leased building it was occupying for ยฃ2,500. Over the years as the College has grown and expanded, additional teaching and residential property has been purchased on Badminton Road. In 2007 a site was purchased in Croydon Park, a few minutes drive from the Croydon campus, to provide new premises to cope with the increasing enrolments. In 2010 this site was officially opened as the Roberts-Dale campus. In 2016 a centennial history of SMBC was written by the Rev Anthony Brammall, who is the Academic Vice-Principal at the college and a lecturer in New Testament studies. Accredited Courses Sydney Missionary and Bible College (SMBC) is accredited by the Australian Department of Education, Employment and Workplace Relations (DEEWR) as an approved teaching institution of the Australian College of Theology (ACT). SMBC Press SMBC has published a number of books as well as other resources. References External links Sydney Missionary and Bible College Category:Seminaries and theological colleges in New South Wales Category:Education in Sydney Category:Bible colleges Category:Educational institutions established in 1916 Category:Australian College of Theology
<?php
require_once('include/CRMSmarty.php');
global $mod_strings;
global $app_strings;
global $app_list_strings;
global $current_user;

// Display the mail send status
$smarty = new CRMSmarty();
if($_REQUEST['mail_error'] != '') {
    $error_msg = strip_tags($_REQUEST['mail_error']);
    $smarty->assign("ERROR_MSG", '<b><font color="red">'.$mod_strings['Test_Mail_status'].' : '.$error_msg.'</font></b>');
}

global $adb;
global $theme;
$theme_path = "themes/".$theme."/";
$image_path = $theme_path."images/";

// Load the outgoing mail server settings owned by the current user
$sql = "select * from ec_systems where server_type = 'email' and smownerid='".$current_user->id."'";
$result = $adb->query($sql);
$mail_server = $adb->query_result($result, 0, 'server');
$mail_server_port = $adb->query_result($result, 0, 'server_port');
$mail_server_username = $adb->query_result($result, 0, 'server_username');
$mail_server_password = $adb->query_result($result, 0, 'server_password');
$smtp_auth = $adb->query_result($result, 0, 'smtp_auth');
$from_name = $adb->query_result($result, 0, 'from_name');
$from_email = $adb->query_result($result, 0, 'from_email');
$interval = $adb->query_result($result, 0, 'interval');

if (isset($mail_server))
    $smarty->assign("MAILSERVER", $mail_server);
if (isset($mail_server_port))
    $smarty->assign("MAILSERVER_PORT", $mail_server_port);
else
    $smarty->assign("MAILSERVER_PORT", "25");
if (isset($mail_server_username))
    $smarty->assign("USERNAME", $mail_server_username);
if (isset($mail_server_password))
    $smarty->assign("PASSWORD", $mail_server_password);
if (isset($smtp_auth)) {
    if($smtp_auth == 'true')
        $smarty->assign("SMTP_AUTH", 'checked');
    else
        $smarty->assign("SMTP_AUTH", '');
}
if(isset($from_name)) {
    $smarty->assign("FROMNAME", $from_name);
}
if(isset($from_email)) {
    $smarty->assign("FROMEMAIL", $from_email);
}
if(isset($interval)) {
    $smarty->assign("INTERVAL", $interval);
}

// Build the interval dropdown (1-20), preselecting the stored value
$intervaloptions = '<select name="interval">';
for($i = 1; $i < 21; $i++) {
    if($i == $interval) {
        $intervaloptions .= '<option value="'.$i.'" selected="selected">'.$i.'</option>';
    } else {
        $intervaloptions .= '<option value="'.$i.'">'.$i.'</option>';
    }
}
$intervaloptions .= '</select>';
$smarty->assign("intervaloptions", $intervaloptions);

if(isset($_REQUEST['emailconfig_mode']) && $_REQUEST['emailconfig_mode'] != '')
    $smarty->assign("EMAILCONFIG_MODE", $_REQUEST['emailconfig_mode']);
else
    $smarty->assign("EMAILCONFIG_MODE", 'view');

$smarty->assign("MOD", return_module_language($current_language, 'Settings'));
$smarty->assign("IMAGE_PATH", $image_path);
$smarty->assign("APP", $app_strings);
$smarty->assign("CMOD", $mod_strings);

$relsetmode = "view";
if(isset($_REQUEST['relsetmode']) && $_REQUEST['relsetmode'] != '') {
    $relsetmode = $_REQUEST['relsetmode'];
}
$smarty->assign("RELSETMODE", $relsetmode);
//$smarty->display("Relsettings/EmailConfig.tpl");
?>
By James White | Posted 2 Nov 2016 The cast for Alfonso Gomez-Rejon's period legal drama The Current War is growing quickly. With Katherine Waterston just added, now word arrives that Tom Holland will also have a role in the new movie. Written by Michael Mitnik, the film will follow the real-life public clash between Thomas Edison (Benedict Cumberbatch) and George Westinghouse (Michael Shannon) as to who was going to decide the future of the electricity industry in the 1880s. Edison favoured direct current, while Westinghouse and several other companies preferred to push alternating current. Caught between them is inventor Nikola Tesla (Nicholas Hoult), with Waterston on as Westinghouse's wife. As for Holland, he'll be playing Edison's right hand man, Samuel Insull. The film should be shooting next month, looking to get out ahead of Imitation Game duo Morten Tyldum and Graham Moore, who are adapting Moore's book about the fight, The Last Days Of Night. They have Eddie Redmayne aboard to play lawyer Paul Cravath (who defended Westinghouse from a lawsuit by Edison), and have a February start pencilled in. Holland returns to the role of Peter Parker in Spider-Man: Homecoming, which is out here on 7 July.
Been working on a texture for a little grate model, first time I've used GIMP to make a proper texture and I feel it's one of the best I've made (though that's not particularly difficult considering the general quality of my textures).
I suppose I'll ask this here: is there a difference between EP2 textures and Portal 2 textures? I tried using some EP2 textures in Portal 2 but they wouldn't work. If there is a significant difference in the VMTs, how hard would it be to switch them over? The reason I ask is because I would like to use the P2 engine to make maps as the lighting looks really nice.
Portal 2 uses VTF format 7.5; Episode 2 uses 7.0-7.4, I believe. There is a batch file somewhere you can use, or you can hex edit it.
Posting here because it's (barely) relevant and because I don't want to waste space (by creating a thread), as well as the fact that my development team is severely held back due to a lack of custom textures. In short, we're looking for a texture artist to help with anything from HUD/GUI, 512x512 (or most power-of-two) map textures, decals and/or skins. The project for which a texture artist is required is called Aura. It is a gamemode for the Half-Life 2 modification, Garry's Mod, and it's a medieval/fantasy MORPG (Multiplayer Online Role Playing Game).
In The Wake Of The Teโ€™o Scandal, Redskins Players Duped By @RedRidnH00d Looks like Manti Teโ€™o isnโ€™t the only football player whoโ€™s been femme-fooled online. According to the NFL, at least four players on the Redskins were duped during the 2012 season by a hacker posing as a woman sending them messages online and hoping to meet them in person. Phillip Daniels, the teamโ€™s director of player development, acted fast, posting a memo on the Redskinsโ€™ locker room wall in the middle of December with a stern warning: โ€œStay away from @RedRidnH00d. Avoid her on Twitter. Avoid her on Instagram. Do not converse with this person on any social media platform. She is not who she claims to be.โ€ Apparently, a woman known by the pseudonym Sidney Ackerman was using pictures of an Internet adult entertainer, C.J. Miles, to establish dialogues with pro athletes. According to NFL sources, the conversations occurred mostly through Twitter direct messages. But in some instances, the fake โ€œAckermanโ€ also sent separate photos of Miles to playersโ€™ cell phones. On multiple occasions, several Skins players attempted to arrange meetings with her, but none succeeded. When those failed attempts led to suspicion, Daniels then received some independent info about the possibility that โ€œAckermanโ€ was indeed a fake. Although the account โ€“ and its corresponding Facebook profile โ€“ have since been deleted, Ackerman had collected more than 17,000 Twitter followers, which understandably created a sense of legitimacy in the minds of players. Luckily, the Redskinsโ€™ interactions with @RedRidnH00d never broached Teโ€™o territory. Further proof that social media hoaxes are growing in frequency and scope: Last Friday, NFL.com discovered another unverified Twitter account, @RideAndDieChick, with a photo of Miles as its Twitter avatar. As of Saturday afternoon, @RideAndDieChick was being followed by 22 verified NFL players and six verified NBA players. The account is no longer active. But clearly, these stories stand to be taken seriously. And thereโ€™s buzz about incorporating social media training into playersโ€™ orientation regimens. What are your thoughts on social hoaxes like what went down with Teโ€™oโ€™s fake girlfriend and the Redskins? Is the onus on the players to play it smarter and safer, or are there protective measures that social media platforms and/or the NFL should be taking?
Poverty in America: Social Changes & Global Crises Looking at poverty in the American continent is essential to better understand how the former colonies have evolved into radically different countries. Country by country, analyzing the social and economic paths taken, we can see the effects of discrimination, land ownership, wealth distribution & corruption. If you want to read more on the origins of the huge wealth differences between North America and South America, we recommend the article on the causes of poverty. USEFUL LINKS Poverty in America: The United States โ€Homelessness in the United States Historical overview From the 1950s to the early 1970s, poverty in America (i.e. USA) fell continuously as the economy was booming and the average income was rising. This led to a total elimination of absolute poverty in the country. But at the turn of the 21st century, over 1 in 10 American was poor, 20% of whom where African Americans. The reason for this is that when the 1970s oil crisis hit the USA, poverty climbed up again, only to go down again in the 1990s with the new economic boom. Market liberalization in this mature economy brought more efficiency, while other American developing economies were suffering of its impact. The rise of urban poverty in America The 1970s and 1980s decades are also times when poverty in America became concentrated in urban areas, in particular the old, charming industrial centers. At the same time the population also started to change with the arrival of more and more immigrants from Latin America, reshaping the face of poverty. Urban poverty then tripled in ten years and kept on its expansion onto the 1980s decade. Itโ€™s now one of the main features of poverty in America (country and continent). As of 1980, nearly 70% of the urban poor were black, 20% were Latinos, and 10% white. A new face of poverty: immigration and single moms Another thing that changed: the traditional family structure. Like it or not, things have changed and you wonโ€™t force an unhappy woman to stay married to her husband anymore. Although, some governments actually tried to by restricting welfare support to families headed by a married couple (talk about government intervention!). Well, that wasnโ€™t the official goal but it was clearly the effect. And millions of single mothers were dropped from welfare support. Demographic changes (immigration) as well as changes in the family structure have had a massive impact on poverty in the US for a couple of decades until things finally stabilized. Now their effect is minor, even though immigrants and single mothers (and their kids) still represent much of the poor. But in a stable way, i.e. the numbers donโ€™t change too much. Yey. Oh no, thatโ€™s without counting on the effects of the latest economic crisis! Is poverty in the US completely fake? Itโ€™s the old โ€œoh when I was a kid we didnโ€™t have TV nor a microwave ovenโ€. But judging todayโ€™s poor on 1970s criteria is just completely absurd because we donโ€™t live like we used to in 1970s. Prices are different, education costs hell of a lot more, living standards and norms are totally different. This means that they simply donโ€™t know the definition of poverty. Speaking of absolute poverty in developed countries is just ridiculous, relative poverty is what matters, in other terms, the capability to fully participate in the life of the nation (having a job, having the Internet and whatnot). 
Of course the poor today live better than the average in the 1950s, but back then they also used to die of diseases and cancers you can now cure. And today you also need to pay for that somehow. US poverty line and US welfare In comparison to other rich countries, the US ranks usually pretty high in the poverty charts (not a good thing). Thatโ€™s mostly because of the children and the elderly, who fall disproportionately below the poverty line compared to other groups. Time to get a job, kids. The problem is that in most advanced economies child poverty is usually less than 10% whereas in the US itโ€™s been around 20%. It even rose to 25% recently and the country currently counts 1 in 4 children on food stamps. Good days are gone, baby, gone. Another characteristic of American poverty is the living condition of impoverished single parents โ€“ usually moms โ€“ who even while they work more than their counterparts in Europe or Australia receive less transfer benefit than in other countries. Since their low paid jobs arenโ€™t enough to maintain the household above the poverty lineโ€ฆ well they just fall below that line and make do with poorness. But, this also has some positive impacts on the American economy that lead to an interesting debate. Do you want less inequality or a more competitive economy? Is a middle ground possible? Are there other forms of riches non-measurable in dollars? As far as poverty in America goes, you wonโ€™t find all the answers right now (probably not before a decade), but here you can start finding out more on US welfare and the national poverty level. Read more about the national poverty level and US welfare. Meanwhile in Canada โ€A beggar in Canada Income inequalities Looking at the recent history of poverty in Canada you can see that income inequalities have increased from 1980 to 2000, especially during the recession of the early 1990s while a huge surge in immigration brought wages down at a bad time. There were big disparities between, say, retirees who were enjoying a pension greater than ever before and immigrants and industrial workers who generated for the first time concentrated urban poverty in the country. The high level of income of older Canadians hid as well just how poor young workers were by bringing up the average income nationwide. Thus in the 1990s the ones witnessing the most inequalities were the young and the single, who were increasingly unemployedโ€ฆ until the good times came back. Immigrants were facing very low income too, but proportionately speaking they werenโ€™t yet that many and most of the urban poor was still white. Canadian poverty and major transformations The reason that inequalities are important in Canada is that they underline the transformations of the economy and the society. These changes put a lot of pressure on how to best redistribute public resources from health care to housing support in the population. Urban poverty โ€“ when over 40% of residents in an area live below the poverty line โ€“ indeed appeared in the 1980s in an extensive way throughout the country and has had severe consequences on welfare provision, access to education and the rise of unemployment. Ethnic groups were particularly affected by the phenomenon but since the majority of the urban poor was white nonetheless it hinted at the fact that the problem was structural (rather than discriminative): the job market and the economy. Poverty in America: the Latin side of things โ€Poverty in Latin America Absolute or relative? 
Poverty in Latin America has always been somewhere in-between: neither completely absolute poverty nor completely relative poverty. What does this mean? On this side of the continent, poverty in America meant not having access to many basic services such as primary health care, education, sometimes also water or electricity, but food for instance hasnโ€™t been that big of an issue. In 1980 only 15% of the population was concerned by malnutrition while almost 50% of Africa and South Asia were. Malnutrition in Latin America However, here again weโ€™re being tricked by the numbers and the averages. Many Latin American countries have had a mostly urban population (over 50%) where food supply isnโ€™t usually a problem. But as of 1980 there were still quite a few that were mostly agrarian societies, where food poverty tends to be higher. Therefore, while the average says that โ€œonlyโ€ 15% of people lack food in Latin America, this masks the fact that in some countries only 1% of the population suffered from malnutrition while in other neighboring countries it went as high as 50%. Even then, it also depended on the definition of urban area, which is prone to pretty incredible variations across the globe. Whether you include slums and Brazil-style favelas or not makes quite a difference. Slums residents often lack food as well, and on top of that theyโ€™re totally cut off from the rest of the population and thus lack access to the most elementary services that provide decent living conditions. Rural poverty in Latin America Yet, in the end absolute poverty in Latin America is mostly rural. In such areas, a radical difference with the development of Northern American countries is that landowners have traditionally made up only 5% of the population (against 70 to 80% in the US and Canada a century ago) and this has dramatically shaped the way Latin American countries have developed with embedded inequalities in their societies. The amount of land that farmers possess nowadays still affects how much food they get and the extent of poverty in America, south of the Rio Grande. It determines whether or not theyโ€™ll live below the poverty line. Also, read about poverty in Haiti, and what makes it an exception in Latin America Poverty in Mexico: an American classic? โ€Poverty in America: the main cause of immigration to the US & Canada Very rich and yet very poor Over 40 million people live in poverty in Mexico (thatโ€™s about 40% of the population), among whom some 15 million live in extreme poverty, as defined by the World Bank (less than $1.25/day). Strangely, Mexico nonetheless is a pretty rich country โ€“ 13th economy in the world โ€“ which shows that high GDP doesnโ€™t necessarily mean less poverty, for example if thereโ€™s no redistribution whatsoever of the countryโ€™s riches. Whatโ€™s more, having such a high poverty rate puts a huge strain on the economy and hurts the countryโ€™s competitiveness (natural correlation between poverty and the economy). This much is one typical aspect of poverty in America where many rich countries have maintained inequalities inherited from the colonial era. As only a few rip the benefits of Mexicoโ€™s opportunities and resources (and donโ€™t reinvest their capital in the country), the rest of the economy shrinks, unemployment grows and wages go down. The fault goes directly to Mexicoโ€™s rich for making poverty worse. Thatโ€™s the old story of poverty in America on the Latin side. 
Poverty and economic crises Now, in fact, Mexico wasn't doing too badly until the 1980s. Before that, inflation was under control, productivity was rising year by year and GDP per capita used to grow by 3-4% every year. But then, in the early 1980s, came the economic crisis that hit Mexico and Brazil so hard it devastated their economies. To this day, much of Mexican policy is still trying to fix the consequences of what happened decades ago. At that time the peso suffered a major devaluation, inflation skyrocketed (to between 100% and 150%, when 2-5% is considered okay/normal), wages plummeted by as much as 40% and public debt rose to over 100% of GDP. Facing that much poverty, millions of Mexicans obviously started migrating to the US and Canada at the same time, as mentioned before. Mexico is a perfect example of how poverty, economic cycles and macroeconomics can be intimately connected. Cuba, the black sheep of American poverty A strange case After years of struggle, Cuba became an independent socialist country in 1959 (officially 1961), allegedly fulfilling the dreams of its revolutionary leaders. Now Cuba is a strange spot in the story of poverty in America because in many ways it can't be considered poor. It fares better than many advanced economies in terms of equality and the quality of the services it offers its population. Yet both the population and the state are becoming penniless, which puts the whole system at risk. As the country is going broke and was hit hard by the 2008 economic crisis, there is a dire need for profound reforms and changes in the economy. Historical review of Cuba and its poverty levels For the 30 years following its independence, Cuba's GDP rose by 4% per year, even though 20 to 30% of its GDP was in fact made up of aid coming from the USSR. But in the end, Cuba today boasts an HDI (Human Development Index) of 0.8, one of the highest in the developing world. The reason for such a high HDI is that the index gives great importance to access to education and health care, and Cuba happens to offer universal access to both. So, a healthy and very well educated workforce… but there are no jobs that match the local level of education. Sounds like the perfect definition of wasted talent and "human capital". The challenge for Cuba, as you'll have understood, is to adapt to this post-Soviet world of ours and handle the transition and reforms that have recently caused waves of inequality. But Cuba also holds a few keys regarding how poverty in America can best cope with issues related to globalization and liberalization, which have brought massive inequalities to the whole continent.
It is particularly preferred to employ Staphylococcal genes and gene products as targets for the development of antibiotics. The Staphylococci make up a medically important genus of microbes. They are known to produce two types of disease, invasive and toxigenic. Invasive infections are characterized generally by abscess formation affecting both skin surfaces and deep tissues. S. aureus is the second leading cause of bacteremia in cancer patients. Osteomyelitis, septic arthritis, septic thrombophlebitis and acute bacterial endocarditis are also relatively common. There are at least three clinical conditions resulting from the toxigenic properties of Staphylococci. The manifestations of these diseases result from the actions of exotoxins, as opposed to tissue invasion and bacteremia. These conditions include: Staphylococcal food poisoning, scalded skin syndrome and toxic shock syndrome. SpoIIIE is a membrane-bound protein involved in chromosome partitioning during sporulation and vegetative replication in a wide variety of bacteria. The SpoIIIE gene was initially characterised in Bacillus subtilis (Butler P. D. and Mandelstam J. (1987) Journal of General Microbiology 133:2359-2370). The SpoIIIE protein has an ATP binding site and is membrane-bound, and appears to form a pore in the nascent spore septum, through which the prespore chromosome is driven in a conjugation-like mechanism (Wu L. J., Lewis P. J., Allmansberger R., Hauser P. M. and Errington J. (1995) Genes and Development 9:1316-1326). spoIIIE mutants cannot sporulate as they are unable to partition the prespore chromosome into the polar prespore compartment. Instead a specific chromosomal segment comprising approximately 30% of the chromosome enters the prespore, while the rest remains in the mother cell, trapped by the septum (Wu L. J. and Errington J. (1994) Science 264:572-575). In wild-type cells SpoIIIE is membrane-bound, and appears to form a pore in the nascent spore septum, through which the prespore chromosome is driven in a conjugation-like mechanism (Wu L. J., Lewis P. J., Allmansberger R., Hauser P. M. and Errington J. (1995) Genes and Development 9:1316-1326). It has been shown that SpoIIIE is also required for correct partitioning of the B. subtilis chromosome during vegetative cell division. spoIIIE mutants in which replication has been artificially delayed are unable to separate the replicated chromosomes before septum formation, resulting in a trapped nucleoid similar to that formed at the start of sporulation (Sharpe M. E. and Errington J. (1995) Proceedings of the National Academy of Sciences USA 92:8630-8634). SpoIIIE has been shown to be essential in Escherichia coli (Begg K. J., Dewar, S. J. and Donachie W. D. (1995) Journal of Bacteriology 177:6211-6222). Highly conserved SpoIIIE homologues are found in diverse members of the eubacteria such as Campylobacter jejuni (Miller, S., Pesci E. C. and Pickett C. L. (1994) Gene 146:31-38), Coxiella burnetii (Oswald W. and Thiele D. (1993) Journal of Veterinary Medicine B40:366-370), Escherichia coli (Begg 1995 above) and Haemophilus influenzae (Fleischmann, R. D., Adams, M. D., White, O., Clayton, R. A., Kirkness, E. F., Kerlavage, A. R., Bult, C. J., Tomb, J.-F., Dougherty, B. A., Merrick, J. M., McKenney, K., Sutton, G., FitzHugh, W., Fields, C. A., Gocayne, J. D., Scott, J. D., Shirley, R., Liu, L.-I., Glodek, A., Kelley, J. M., Weidman, J. F., Phillips, C. A., Spriggs, T., Hedblom, E., Cotton, M. D., Utterback, T. R., Hanna, M. C., Nguyen, D. T., Saudek, D.
M., Brandon, R. C., Fine, L. D., Fritchman, J. L., Fuhrmann, J. L., Geoghagen, N. S. M., Gnehm, C. L., McDonald, L. A., Small, K. V., Fraser, C. M., Smith, H. O. and Venter, J. C. (1995) Science 269:496-512). All of these proteins are 36-55% identical at the amino acid level overall. Their N-terminal 200 amino acids are hydrophobic and not conserved, so if the C-terminal 500 or so amino acids are considered alone the level of conservation rises to 42-67% identical amino acids. This high level of identity among diverse eubacteria strongly suggests commonality of function. Inhibitors of SpoIIIE proteins would prevent the bacterium from establishing and maintaining infection of the host by preventing it from correctly partitioning the chromosome in the manner described above, thus arresting cell division and growth, rendering the bacterium susceptible to host defences and leading ultimately to cell death; such inhibitors would thereby have utility in anti-bacterial therapy. Clearly, there is a need for factors that may be used to screen compounds for antibiotic activity and which factors may also be used to determine their roles in the pathogenesis of infection, dysfunction and disease. There is also a need for the identification and characterization of such factors and their antagonists and agonists which can play a role in preventing, ameliorating or correcting infections, dysfunctions or diseases. The polypeptides of the invention have amino acid sequence homology to a known B. subtilis spoIIIE protein.
Q: Defining \xthinspace: Thin space only if not followed by certain characters Following this 2005 thread from the XeTeX list, Iโ€™ve defined a \spaceddash command and assigned it to the Unicode em-dash character U+2014 โ€œโ€”โ€: \documentclass{minimal} \usepackage[utf8]{inputenc} \DeclareRobustCommand{\spaceddash}% {\unskip\nobreak\thinspace\textemdash\thinspace\ignorespaces} \DeclareUnicodeCharacter{2014}{\spaceddash} \begin{document} meow โ€” meow meowโ€”meow โ€” meow meow โ€”. \end{document} The document this produces looks something like this: meowโ€‰โ€”โ€‰meow meowโ€‰โ€”โ€‰meow โ€”โ€‰meow meowโ€‰โ€”โ€‰. Notice the thin-space between the em-dash on the last line and the period afterwardโ€”Iโ€™d like to get rid of it. Following the example of xspace & xpunctuate, Iโ€™m trying to define a sort of \xthinspace command, one that will insert the \thinspace except if the dash is followed by certain punctuation marks (e.g., period, comma, close-parenthesis, close-quote). How do I go about this? A: Here's a possibility using expl3. The test gobbles spaces and then checks whether the following token appears in the exceptions list. If not, it applies \thinspace. \documentclass{article} \usepackage[utf8]{inputenc} \usepackage{newunicodechar,xparse} % The following is equivalent to \DeclareUnicodeCharacter{2014}{...} \newunicodechar{โ€”}{\spaceddash} % write only `\spaceddash` in aux files \NewDocumentCommand{\spaceddash}{% \ifvmode\leavevmode\else\unskip\nobreak\thinspace\fi \textemdash\xthinspace} \ExplSyntaxOn \NewDocumentCommand{\xthinspace}{ } { \xths_main: } \NewDocumentCommand{\addtoxthinspaceexceptions}{m} { \tl_gput_right:Nn \g_xths_exceptions_tl { #1 } } \cs_new_protected:Npn \xths_main: { \bool_set_true:N \l_xths_apply_bool \peek_catcode_ignore_spaces:NF \c_space_token { \xths_check: } } \cs_new_protected:Npn \xths_check: { \tl_map_inline:Nn \g_xths_exceptions_tl { \token_if_eq_charcode:NNT ##1 \l_peek_token {\bool_set_false:N \l_xths_apply_bool \prg_map_break: } } \bool_if:NT \l_xths_apply_bool { \thinspace } } \tl_new:N \g_xths_exceptions_tl \ExplSyntaxOff \addtoxthinspaceexceptions{,.)} \begin{document} meowโ€”meow meowโ€”. meowโ€” . \end{document} Try it with \hspace{1cm} instead of \thinspace if you want to verify it works. A limitation: if the โ€” is followed by a macro that expands to a comma, a period or a closed parenthesis, the test will fail. A: \@ifnextchar (thanks @egreg) is your friend here: \DeclareRobustCommand{\spaceddash}% {\unskip\nobreak\thinspace\textemdash\gobblespaces} \makeatletter\def\gobblespaces{\@ifnextchar.\relax\thinspace}\makeatother \@ifnextchar gobbles any intervening space tokens. It compares the first non-space token found to its first argument ("." here) and executes the first macro if they are the same ("\relax'"); otherwise, it executes the second macro ("\thinspace'"). Obviously, this could be generalized to look for any number of tokens. I'm sure xspace does something similar. A full MWE is: \documentclass{minimal} \usepackage[utf8]{inputenc} \DeclareRobustCommand{\spaceddash}% {\unskip\nobreak\thinspace\textemdash\gobblespaces} \makeatletter\def\gobblespaces{\@ifnextchar.\relax\thinspace}\makeatother \DeclareUnicodeCharacter{2014}{\spaceddash} \begin{document} meow โ€” meow meowโ€”meow โ€” meow meow โ€”. \end{document} And the resulting document looks like:
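(The rendered output is not reproduced above; it should match the earlier sample, except that the thin space before the final period is now suppressed.) As the answer notes, the \@ifnextchar test can be chained to cover several exception characters. Here is a rough, untested sketch that redefines \gobblespaces, with the comma and closing parenthesis added purely as illustrative extra exceptions:
\makeatletter
\def\gobblespaces{%
  % suppress the thin space before ".", "," and ")"
  \@ifnextchar.{}{%
    \@ifnextchar,{}{%
      \@ifnextchar){}{\thinspace}}}}
\makeatother
A closing quote could presumably be handled the same way by testing for the first ' token of '', though for a longer exception list the expl3 approach above is the more maintainable route.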
please snap a picture from the ENGINE bay of the vehicle for such routing .. if it is how u posted, it seems something is seriously wrong. bottom goes to the water pump inlet, - top one is to complete water circulation when the car is hot & the thermostat is open for coolant flow. if your mechanic has done some by-pass as in the post, how will coolant flow to maintain engine temp? - www.crackwheels.com - A skilled Dictator is much more beneficial to Country......than a Democracy of Ignorant people This is the current situation. The line in red color is showing the bypass pipe (the CNG kit was in between before). First, this diagram is wrong. Put the T-valve inside the engine, not away from it. Second, if you are describing it right - the red line - then your radiator is not bypassed, it's actually your heater core that is bypassed. It's bullcrap done by mechanics in the summer season - when the heater is not being used - to avoid any hot air leaking into the cold air of the AC and affecting its cooling. If you do this, be ready to get your heater core repaired the next time you put it back in circulation in the winter season, since it would be leaking. Dear, i made a rough diagram only of what i could see easily. definitely the T-valve is attached to the engine. The heater core is bypassed but indirectly the radiator is also bypassed (i mean the top and bottom pipes are joined near the heater pipes), but i used the heater in the same situation last winter. The heater was working fine.
Maternal depression {#sec1-1} =================== Maternal depression is common, and can be experienced during pregnancy (antenatal depression) and after the child is born (postpartum depression). Depression negatively affects the way a person with depression thinks, feels and acts. The symptoms of depression are detected by the use of questionnaires (for example the Edinburgh Postnatal Depression Scale (EPDS)), or diagnosed using a clinical interview such as the DSM or the ICD.^[@ref1]^ Among mothers in high-income countries, antenatal depression affects 7--12%^[@ref2]^ and postpartum depression affects 13--18% within the first year after childbirth.^[@ref3],[@ref4]^ Depression causes personal suffering and weakens a person\'s ability to function in general. A woman with depression in the perinatal period is also faced by the challenges of the antenatal transition into motherhood, and after giving birth, the responsibility of taking care of her infant. Supportive parenting is known to be one of the strongest predictors of good outcomes for children.^[@ref5]^ Longitudinal studies from many countries show that positive, consistent and supportive parenting predicts low levels of child problem behaviour and child abuse, and also predicts enhanced cognitive development.^[@ref6]--[@ref12]^ Conversely, harsh inconsistent parenting predicts a broad range of poor child outcomes.^[@ref6],[@ref13]--[@ref16]^ Several studies have shown that increased levels of depressive symptoms in the parent are associated with less sensitive and harsher parenting behaviours.^[@ref17]--[@ref19]^ Mothers with depression tend to demonstrate a flatter affect and be less sensitive, less responsive and less affectively attuned to their infants\' needs, thus violating the infants\' basic needs for positive interaction.^[@ref20],[@ref21]^ Maternal depression is a major risk for the infant. The first months of life are a highly sensitive period during which the infant is dependent on maternal care, and during which early brain and socioemotional development take place.^[@ref22]^ The negative impact of maternal depression on early child development is well documented and includes a broad range of child outcomes and increased susceptibility to psychopathology.^[@ref20],[@ref23]--[@ref29]^ Maternal depression is frequently considered a unitary construct, but depressive symptoms do not follow a uniform course and there is a great diversity among mothers with depression.^[@ref30]--[@ref32]^ In accordance with cumulative risk theories (the more stressors and risk factors a child is exposed to, the bigger their risk of developing mental illness), the most adverse child outcomes are linked to high-risk populations where the depressive symptoms occur in combination with risk factors such as poverty and comorbid psychopathology.^[@ref32]--[@ref35]^ The relationships between parental depression and such factors as parental competences, parental sensitivity, parent--child relationship and child development are complex and still not fully explained. 
Parental depression may influence the child through three potential mechanisms: (a) a direct causal relationship through genetic inheritance of risk genes from parent to child, (b) through shared environmental factors during pregnancy that have an impact on both maternal depression and child development (for example poverty), and (c) through the influence of parental depression on parent behaviour, on the quality of the parent--child relationship and on the overall functioning of the family, which lead to poorer outcomes for the child.^[@ref36]^ Existing research {#sec1-2} ================= Based on previous studies, we know that interventions that focus solely on the mother (such as medication or psychotherapy targeting the depressive symptoms) are insufficient to buffer against the potentially negative impact of psychopathology on the child\'s cognitive and psychosocial development, as well as attachment.^[@ref37]--[@ref40]^ Previous reviews of the effects of interventions for parents with depressive symptoms that include outcomes on the parent--child relationship or child development outcomes have focused on a broad range of psychological interventions aimed at treating depression in women with antenatal depression or postpartum depression.^[@ref39]--[@ref44]^ All reviews explicitly state that their results are either tentative or that evidence is insufficient to draw firm conclusions from. Still, five out of six reviews conclude that the interventions show promising results. Three of the reviews conducted meta-analyses.^[@ref41],[@ref42],[@ref44]^ The first review, published in 2011, focused on interventions aimed at enhancing maternal sensitivity, but did not address child developmental outcomes.^[@ref41]^ The second review, published in 2015, focused on psychological treatment of depression in mothers.^[@ref42]^ The study included meta-analyses of both the mother--child relationship and child mental health outcomes, but mixed observational and parent-reported measures in the analyses. The third review, published in 2017 by Letourneau and colleagues, focused on interventions aimed at treating perinatal depression.^[@ref44]^ Their meta-analysis focused only on the effect of two types of interventions: interpersonal psychotherapy and cognitive--behavioural therapy (CBT). Each meta-analysis included two studies only, which is not ideal.^[@ref45]^ We found no previous reviews focusing specifically on the effect of interventions aimed at improving parenting including both the parent--child relationship and child development outcomes. The objective of this review is therefore to systematically review the effects of parenting interventions on the parent--child relationship and child development outcomes when offered to pregnant women or mothers with depressive symptoms who have infants aged 0--12 months. We included randomised controlled trials (RCTs) of interventions that aimed at improving parenting in a broad sense (such as Circle of Security^[@ref46]^ or Minding the Baby^[@ref47]^) and that reported on the parent--child relationship (for example attachment or parent--child relationship) or child development (for example socioemotional or cognitive development) outcomes at post-intervention or follow-up. Method {#sec2} ====== This review was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). We did not register a protocol. Search strategy {#sec2-1} --------------- The latest database search was performed in September 2018. 
Ten international bibliographic databases were searched: Campbell Library, Cochrane Library, CRD (Centre for Reviews and Dissemination), ERIC, PsycINFO, PubMed, Science Citation Index Expanded, Social Care Online, Social Science Citation Index and SocIndex. Operational definitions were determined for each database separately. The search strategy was developed in collaboration between an information specialist and a member of the review team (M.P.), and served three separate reviews on parenting interventions.^[@ref48],[@ref49]^ The main search comprised combinations of the following terms: infant\*, neonat\*, parent\*, mother\*, father\*, child\*, relation\*, attach\*, behavi\*, psychotherap\*, therap\*, intervention\*, train\*, interaction, parenting, learning and education (see supplementary File 1 available at <https://doi.org/10.1192/bjo.2019.89>). To identify studies on women with depression or depressive symptoms in the perinatal period, the search also included the term depress\*. The searches included Medical Subject Headings (MeSH), Boolean operators and filters. Publication year was not a restriction. Furthermore, we searched for grey literature, hand-searched four journals and snowballed for relevant references. Eligibility criteria and study selection {#sec2-2} ---------------------------------------- All publications were screened based on title and abstract. Publications that could not be excluded were screened based on the full-text version. Each publication was screened independently by two research assistants under close supervision by S.B.R. and M.P. Uncertainties regarding inclusion were discussed with S.B.R. and/or M.P. Screening was performed in Eppi-Reviewer 4. The inclusion and exclusion criteria are presented in [Table 1](#tab01){ref-type="table"}.
Table 1 Inclusion and exclusion criteria
- Population. Inclusion: Depression or depressive symptoms in mothers of infants 0--12 months old in Western Organisation for Economic Co-operation and Development countries. Exclusion: Studies including young mothers (mean age \<20 years), parents with severe mental health problems such as schizophrenia, parents with children born preterm, at low birth weight or with congenital diseases, or studies that included mothers without depressive symptoms.
- Intervention. Inclusion: Structured psychosocial parenting intervention consisting of at least three sessions and initiated either antenatally or during the child\'s first year of life, with at least half of the sessions delivered postnatally. Exclusion: Interventions not focusing specifically on parenting (for example baby massage, cognitive--behavioural therapy, or breastfeeding interventions), and unstructured interventions (for example home visits not offered in a structured format).
- Control group. No restrictions were imposed. All services or comparison interventions provided to the control group were allowed.
- Outcome. Inclusion: Child development and/or parent--child relationship outcomes. Exclusion: Studies reporting only physical development or health outcomes such as height, weight, duration of breastfeeding and admissions to hospital; papers with insufficient quantitative outcome data to generate standardised mean differences (Cohen\'s *d*), risk ratios and confidence intervals.
- Design. Inclusion: Randomised controlled trials (RCTs) or quasi-RCTs. Exclusion: Other study designs such as case--control, cohort, cross-sectional and systematic reviews.
- Publication type. Inclusion: Studies presented in peer-reviewed journals, dissertations, books or scientific reports. Exclusion: Abstracts or conference papers; studies published in languages other than English, German or the Scandinavian languages (Danish, Swedish and Norwegian).
Data extraction and risk of bias assessment {#sec2-3} ------------------------------------------- We developed a data extraction tool for the descriptive coding and extracted information on (a) study design, (b) depression inclusion criteria, (c) sample characteristics, (d) intervention characteristics, (e) setting, (f) outcome measures, and (g) child age at post-intervention and at follow-up. The information was extracted by a research assistant and checked by S.B.R. Primary outcomes were (a) parent--child relationship and (b) child socioemotional development. Secondary outcomes comprised other child development markers, for example cognitive and language development. The numeric coding was conducted independently by two reviewers (S.B.R. and I.S.R.). Disagreements were resolved by discussion, and, if necessary, a third reviewer was consulted. We assessed risk of bias separately for each relevant outcome for all studies based on a risk of bias model developed by Professor Barnaby Reeves and the Cochrane Non-randomised Studies Method Group (Reeves, personal communication, 2019 from Reeves, Deeks, Higgins and Wells, unpublished data, 2011). This extended model follows the same steps as the risk of bias model presented in the Cochrane Handbook, Chapter 8.^[@ref50]^ The risk of bias assessment was conducted by I.S.R. and checked by S.B.R. Any doubts were resolved by consulting a third reviewer. Data analysis {#sec2-4} ------------- We calculated effect sizes for all relevant outcomes where sufficient data were provided. Effect sizes are reported using standardised mean differences (Cohen\'s *d*) with 95% CI for continuous outcomes. Data include post-intervention and follow-up means (or mean differences), raw s.d.s and sample size. For dichotomous outcomes, we used risk ratios (RR) with 95% CI. If a paper provided insufficient information regarding numeric outcomes, the corresponding author was contacted. When available, we used data from adjusted analyses to calculate effect sizes. When adjusted mean differences were calculated, we used the unadjusted s.d.s to be able to compare effect sizes calculated from unadjusted and adjusted means. To calculate effect sizes, we used the Practical Meta-Analysis Effect Size Calculator developed by David B. Wilson, George Mason University, and provided by the Campbell Collaboration.^[@ref51]^ Meta-analysis was conducted when outcome and time of assessment were comparable. When a single study provided more than one relevant measure, or only subscales of an overall scale for the meta-analysis, the effect sizes of the respective measures were pooled into a joint measure before being entered into the meta-analysis. One study consisted of three separate intervention arms and a shared control group.^[@ref52]^ Since three effect sizes based on the same control group would not be independent, we calculated an effect size based on a mean of the three intervention groups\' means and the mean of the control group. Random-effects inverse-variance-weighted mean effect sizes were applied and 95% CIs were reported. Thus, studies with larger sample sizes were given more weight, all else being equal.
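For reference, these calculations follow the conventional definitions, stated here in standard notation rather than taken from the included papers (the Effect Size Calculator referenced above may additionally apply a small-sample Hedges correction not shown here). For an intervention group and a control group with means, s.d.s and sizes $(\bar{X}_T, s_T, n_T)$ and $(\bar{X}_C, s_C, n_C)$,

$$d = \frac{\bar{X}_T - \bar{X}_C}{s_p}, \qquad s_p = \sqrt{\frac{(n_T - 1)s_T^{2} + (n_C - 1)s_C^{2}}{n_T + n_C - 2}}, \qquad \mathrm{SE}(d) \approx \sqrt{\frac{n_T + n_C}{n_T n_C} + \frac{d^{2}}{2(n_T + n_C)}},$$

and the random-effects pooled estimate weights each study by the inverse of its total variance,

$$\bar{d} = \frac{\sum_i w_i d_i}{\sum_i w_i}, \qquad w_i = \frac{1}{\mathrm{SE}(d_i)^{2} + \hat{\tau}^{2}}, \qquad \mathrm{SE}(\bar{d}) = \frac{1}{\sqrt{\sum_i w_i}},$$

where $\hat{\tau}^{2}$ is the estimated between-study variance and the 95% CI is $\bar{d} \pm 1.96\,\mathrm{SE}(\bar{d})$.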
Based on the relatively small number of studies and on an assumption of between-study heterogeneity, we used a random-effects model using the profile-likelihood estimator as suggested in Cornell.^[@ref53]^ Variation in standardised mean difference that was attributable to heterogeneity was assessed with the *I*^2^. The estimated variance of the true effect sizes was assessed by the Tau^2^ statistic. The small number of studies in the meta-analysis did not allow for subgroup analyses. Assessment times were divided into post-intervention (at intervention ending), short-term (less than 12 months after intervention ending) and long-term (12 months or more) follow-up. Results {#sec3} ======= The search identified 21ย 260 articles after removal of duplicates. After first- and second-level screening 12 articles remained. A further three articles were excluded because of insufficient numerical data and one was excluded because of high risk of bias. See [Fig. 1](#fig01){ref-type="fig"} for a flow chart of the process. Seven randomised controlled trials (eight published papers) met all inclusion criteria and were included in this review.^[@ref52],[@ref55]--[@ref61]^ Fig. 1Flow diagram for study selection process.^[@ref54]^ All studies were randomised at the individual level. Three studies were American,^[@ref55]--[@ref57]^ two were British,^[@ref52],[@ref61]^ one was Canadian^[@ref58]^ and one study presented in two papers was Dutch.^[@ref59],[@ref60]^ Three studies were excluded because of insufficient numeric data^[@ref62]--[@ref64]^ and one study was excluded because of unacceptably high risk of bias.^[@ref65]^ Participant characteristics {#sec3-1} --------------------------- [Table 2](#tab02){ref-type="table"} presents participant characteristics. No studies started during pregnancy. All women were included based on specific inclusion criteria for level of depressive symptoms. Four studies used the EPDS,^[@ref55]--[@ref58]^ one study used the EPDS or a DSM-II-R major depressive disorder diagnosis,^[@ref52]^ one study used a diagnosis of major depressive disorder,^[@ref61]^ and one study used the Beck Depression Inventory or DSM-IV major depressive episode or dysthymia diagnosis^[@ref59],[@ref60]^ as inclusion criteria. The mean age of the mothers at inclusion ranged from 27.7 to 32 years and mean age of the infant between 1 and 7 months. Two studies included primiparous mothers only,^[@ref52],[@ref55]^ whereas the other five included both primiparous and multiparous mothers.^[@ref56],[@ref57],[@ref59]--[@ref61],[@ref66]^ In most studies, the majority of participants were White, married/living with partner, and with a medium or long education. All studies were relatively small, ranging from 42 to 190 participants. 
Table 2Participant characteristicsStudyCountryChild age at startMother, mean ageDepression inclusion criteriaPrimiparous, %Socioeconomic status*n*Goodman *et al*^[@ref55]^USA6--8 weeks30.7\>9 and \<20 on the EPDS scale100White 60%, Hispanic 24%; 87% with income \>\$40ย 000; education: low 7%, medium 66%, high 27%; desired pregnancy 90%42Horowitz *et al*^[@ref56]^USA4--8 weeks31EPDS \>1066Community sample Boston, USA; well educated; majority White (69%); 71% with income \>\$50ย 000117Horowitz *et al*^[@ref57]^USA6 weeks31EPDS โ‰ฅ1056Community sample Boston, USA; White 54%, Hispanic 22%, African American 12%, other 13%; married 75%; mean income \$80ย 132; well educated (mean: 15.6 years (s.d.ย =ย 3.4))134Letourneau *et al*^[@ref58]^CanadaMean 5.2 months\~30EPDS \>1247Living near Alberta or New Brunswick, Canada; College degree or more 65%; married 67%; 55% with income \<\$40ย 00060Murray *et al*^[@ref52]^UK8 weeks postpartum27.7EPDS \>12 and DSM-III-R: major depressive disorder diagnosis100Community sample; living \<15 miles from Addenbrooke\'s Hospital, Cambridge, UK; education: low 45%, middle 31%, high 24%; high social disadvantage 25%190Stein *et al*^[@ref61]^UKMean 6.8 months32Major depressive disorder45Living in Oxfordshire, Buckinghamshire and Berkshire counties, UK; White British 83%, other 17%; education: low 41%, medium 11%, high 48%; living with father 88%144Van Doesum *et al*,^[@ref59]^ Kersten-Alvarez *et al*^[@ref60]^The NetherlandsMean 5.5 months30.0DSM-IV: major depressive episode or dysthymia and/or BDI \>1460Majority Dutch; education: low 25%, middle 50%, high, 25%; income: low 21%, middle 55%, high 24%; living with partner 90%71[^2] Intervention characteristics {#sec3-2} ---------------------------- [Table 3](#tab03){ref-type="table"} presents intervention characteristics, assessment times and outcomes. All included studies offered individual home visits initiated postpartum. No studies offered a group-based intervention. Most interventions focused on supporting the parent--child relationship through for example video feedback, coaching or therapy. Four studies offered alternative treatment such as progressive muscle relaxation, home visits or phone calls to the control group.^[@ref55],[@ref57],[@ref59],[@ref61]^ One study also provided CBT for both the intervention and control group.^[@ref61]^ Control condition was usual care in two studies^[@ref52],[@ref56]^ and waiting list in one study where participants received usual care while waiting.^[@ref58]^ Besides one intervention lasting 7.5 months,^[@ref56]^ all interventions were relatively short (3--15 weeks) and the intensity ranged between weekly visits to one visit a month. 
Table 3Intervention characteristics, assessment times and outcomesStudyInterventionChild age at startFormatIntensity and durationProviderControl conditionChild age at assessment (months)Outcome categoriesGoodman *et al*^[@ref55]^Perinatal dyadic psychotherapy6--8 weeksHome visits8 visits over 3 monthsNurseUsual care and phone calls4.5--5 (post-intervention)\ 7.5--8 (short term)Parent--child relationshipHorowitz *et al*^[@ref56]^Interaction coaching intervention4--8 weeksHome visits3 visits over 15 weeksNurseUsual care3--4 (post-intervention)Parent--child relationshipHorowitz *et al*^[@ref57]^Communicating and relating effectively6 weeksHome visits6 visits over 7.5 monthsNurse4 home visits9 (post-intervention)Parent--child relationshipLetourneau *et al*^[@ref58]^Peer-support intervention\<9 monthsHome visits and telephone contacts12 visits over 12 weeksTrained peer volunteersWait-list and usual care\~8 (post-intervention)Child development\ Parent--child relationshipMurray *et al*^[@ref52]^Cognitive--behavioural therapy, psychodynamic therapy and non-directive counselling8 weeksHome visitsWeekly visits for 10 weeksTherapistUsual care by GP and health visitor4.5 (post-intervention)\ 18 (long term)\ 60 (long term)Child development\ Parent--child relationshipStein *et al*^[@ref61]^Video-feedback therapy and cognitive--behaviour therapy4.5--9 monthsHome visits or alternative location11 visits over 16 weeks, 2 booster sessions at 6 and 10ย  monthsTherapistProgressive muscle relaxation and cognitive--behaviour therapy\~12 (post-intervention)\ \~24 (long term)Child development\ Parent--child relationshipVan Doesum *et al*,^[@ref59]^ Kersten-Alvarez *et al*^[@ref60]^Video-feedback intervention\<12 monthsHome visits8--10 visits over 3--4 monthsPrevention specialist3 phone calls over 3 months\~12 (post-intervention)\ \~18 (short term)\ 68 (long term)Child development\ Parent--child relationship Goodman and colleagues (2015) examined the effects of Perinatal Dyadic Psychotherapy among 42 mothers recruited from postpartum units of three hospitals located in the USA.^[@ref55]^ The study was a pilot study to examine a novel dual-focused mother--infant intervention aimed at promoting maternal mental health and improving the relationship between mother and infant. The intervention was derived from the mutual regulation model by Tronick,^[@ref67]^ and integrates clinical strategies of supportive psychotherapy, parent--infant psychotherapy, the touchpoints model of child development^[@ref68]^ and the newborn behavioral observation.^[@ref55],[@ref69]^ Horowitz and colleagues (2001) examined the effects of an interactive coaching intervention among 117 mothers from greater Boston in the USA.^[@ref56]^ The aim of the intervention was to improve responsiveness between infant and mother. The intervention is based on Beck\'s cognitive model of depression,^[@ref70]^ Sameroff\'s transactional model of child development^[@ref71]^ and Rutter\'s model of developmental risk and resilience.^[@ref56]^ Horowitz and colleagues (2013) examined the effects of the behavioural coaching intervention communicating and relating effectively on 134 mother--infant dyads in the USA.^[@ref57]^ The intervention aimed to improve the mother--infant relationship. 
The intervention is based on cognitive--behavioural family therapy theory.^[@ref57]^ Letourneau and colleagues (2011) examined the effects of home-based peer support among 60 mothers in Canada.^[@ref58]^ The home-based peer support included mother--infant interaction teaching as an important element and the intervention aimed to improve interactions between mothers and their infants. The intervention is not based on any specific theory, but the authors refer to concepts from studies on mothers with postpartum depression and peer support.^[@ref58]^ Murray and colleagues (2003) examined the effects of three different, but related, interventions: (a) CBT, (b) psychodynamic therapy and (c) non-directive counselling in one trial in the UK.^[@ref52]^ In total, 190 first-time mothers were included and almost equally distributed among the three intervention groups and the single control group. The interventions all aimed to improve the mother--child relationship. The CBT intervention was a modified form of interaction guidance treatment. In the psychodynamic therapy intervention, the mother\'s representations were used to explore her own attachment history. The non-directive counselling intervention provided the mothers with an opportunity to talk about their feelings about current concerns.^[@ref52]^ Stein and colleagues (2018) examined the effects of video-feedback therapy among 144 mothers in the UK. The aim of the intervention was to improve parenting behaviours (attention to infant cues, emotional scaffolding and sensitivity). Both intervention and control families also received CBT with a focus on behavioural activation. The control group received progressive muscle relaxation that did not target parent practices or parent--child interaction.^[@ref61]^ Van Doesum and colleagues (2008) examined the effects of a mother--baby intervention among 71 mothers in the Netherlands.^[@ref59],[@ref60]^ The intervention aimed to improve the mother--infant interaction, particularly maternal sensitivity. The intervention was based on video feedback, modelling, cognitive restructuring, practical pedagogical support and baby massage.^[@ref59],[@ref60]^ Outcome characteristics {#sec3-3} ----------------------- All seven studies examined parent--child relationship outcomes. Measures applied to assess parent--child relationship (for example attachment or parent--child relationship) included the Emotional Availability Scales (EAS),^[@ref59]^ Coding Interactive Behavior (CIB),^[@ref55]^ the Nursing Child Assessment Teaching Scale^[@ref57]^ and the Dyadic Mutuality Code.^[@ref56]^ Four studies assessed child development outcomes (such as socioemotional or cognitive development) with measures such as the Infant-Toddler Social Emotional Assessment (ITSEA),^[@ref59]^ the Child Behavior Checklist (CBCL)^[@ref60],[@ref61]^ and Bayley Scales of Infant Development (BSID).^[@ref61]^ Outcomes were assessed by independent assessors (for example EAS, CIB, and BSID), by parents (for example ITSEA and CBCL), and/or teachers (for example CBCL). All seven studies included a post-intervention assessment, but only four studies included a follow-up assessment.^[@ref52],[@ref55],[@ref59]--[@ref61]^ As the development of maternal depressive symptoms is not the focus of this review, we did not include maternal depression in our analyses, although it was reported in all studies. 
Risk of bias {#sec3-4} ------------ Risk of bias assessments for each study\'s outcomes are displayed in online supplementary Table 1 divided into parent--child relationship and child development outcomes. Six out of seven studies provided insufficient information for one or more risk of bias domain, thus hindering a clear risk of bias judgement. Only two studies^[@ref60],[@ref61]^ provided an accessible *a priori* protocol, which made it particularly difficult to assess risk of bias caused by selective reporting. In general, risk of bias ranged between low and medium. Five studies had outcomes where one or two domains were classified as medium risk of bias.^[@ref52],[@ref55],[@ref57],[@ref58],[@ref61]^ Only one study had outcomes with a high risk of bias in one domain: 'incomplete outcome data addressed'.^[@ref60]^ One study was excluded from the review because of an unacceptably high risk of bias caused by lack of assessor masking and high risk of bias in relation to 'incomplete outcome data addressed'.^[@ref65]^ The outcomes included in the meta-analysis on the parent--child relationship were characterised by low-to-medium and unclear risk of bias domain. In three of these studies,^[@ref52],[@ref56],[@ref59]^ the risk of bias domains of the outcomes were assessed as relatively low (1--2) or unclear risk of bias. The outcomes of the remaining three studies^[@ref55],[@ref57],[@ref58]^ were characterized by low-to-medium and unclear risk of bias. Parent--child relationship {#sec3-5} -------------------------- Seven studies reported on parent--child relationship outcomes.^[@ref52],[@ref55]--[@ref59],[@ref61]^ supplementary Table 2 presents the study outcomes for the individual studies. Owing to the lack of follow-up assessments, only meta-analysis of the parent--child relationship post-intervention was conducted. Post-intervention parent--child relationship {#sec3-6} -------------------------------------------- The meta-analysis of the effect of the parenting intervention on the parent--child relationship at post-intervention included 573 participants from six studies and is presented in [Fig. 2](#fig02){ref-type="fig"}.^[@ref52],[@ref55]--[@ref59]^ No significant effect of the parenting interventions on the parent--child relationship was found (*d*ย =ย 0.028, 95% CI โˆ’0.30 to 0.31) (*I*^2^ย =ย 49.02). One study^[@ref58]^ that provided a peer-support intervention instead of using professional providers was removed for sensitivity analysis. This did not alter the result substantially. Likewise, three studies offering alternative treatment for the control group^[@ref55],[@ref57],[@ref59]^ were removed for sensitivity analysis. The result of the meta-analysis of the three remaining studies that provided treatment as usual^[@ref52],[@ref56],[@ref58]^ was not altered substantially. All six studies were therefore kept in the analysis. Fig. 2Meta-analysis of studies reporting parent--child relationship at post-intervention. One out of the two parent--child relationship outcomes in the study by Murray *et al*^[@ref52]^ was based on a parent-reported questionnaire. As all other outcomes included in the meta-analysis were observational measures, this outcome was not included in the analysis. 
When examining relationship problems, Murray *et al* (2003) found significant positive effects of counselling, psychodynamic therapy and CBT compared with usual care (counselling: RRย =ย 0.63, 95% CI 0.32--0.97); psychodynamic: RRย =ย 0.57, 95% CI 0.28--0.92); cognitive--behavioural: RRย =ย 0.46, 95% CI 0.20--0.81).^[@ref52]^ Parent--child relationship at short-term follow-up {#sec3-7} -------------------------------------------------- Two studies reported on the parent--child relationship at short-term follow-up.^[@ref55],[@ref59]^ Van Doesum and colleagues^[@ref59]^ found significant effects of the mother--infant intervention on maternal sensitivity (*d*ย =ย 0.82, 95% CI 0.34--1.31), maternal structuring (*d*ย =ย 0.57, 95% CI 0.09--1.04), child responsiveness (*d*ย =ย 0.69, 95% CI 0.21--1.16) and child involvement (*d*ย =ย 0.75, 95% CI 0.27--1.23) at short-term follow-up when the children were around 18 months old. No significant effects were found on child attachment security, maternal non-intrusiveness or maternal non-hostility.^[@ref59]^ Goodman and colleagues likewise measured maternal sensitivity and infant involvement but found no significant effects on maternal sensitivity, infant involvement or dyadic reciprocity when the children were around 8 months old.^[@ref55]^ Parent--child relationship at long-term follow-up {#sec3-8} ------------------------------------------------- Three studies reported on the parent--child relationship at long-term follow-up.^[@ref52],[@ref60],[@ref61]^ Meta-analysis was not conducted, as the study by Stein and colleagues used an active control group, whereas the control groups in the two remaining studies received either alternative treatment or treatment as usual. Kersten-Alvarez and colleagues found no effects on maternal interactive behaviour or attachment security when the children were 4 years old.^[@ref60]^ Murray and colleagues found that none of the three interventions provided had a significant effect on child attachment at long-term follow-up (child aged 18 months) when compared with the control group.^[@ref52]^ Stein and colleagues found no significant effects on attachment security for children aged 2 years.^[@ref61]^ Child development {#sec3-9} ----------------- Four studies reported on child development outcomes. Supplementary Table 3 presents the study outcomes for the individual studies. Owing to the lack of developmental outcomes and follow-up assessments, it was not possible to conduct meta-analysis of child development. 
Post-intervention child development {#sec3-10} ----------------------------------- Two studies examined child development post-intervention.^[@ref52],[@ref58]^ Letourneau and colleagues found no significant effect on child cognitive and socioemotional development.^[@ref58]^ Murray and colleagues found that none of the three interventions provided had a significant effect on infant behaviour problems post-intervention.^[@ref52]^ Child development at short-term follow-up {#sec3-11} ----------------------------------------- One study, van Doesum and colleagues, examined child development at short-term follow-up.^[@ref59]^ They found a significant effect of home visits on child competence behaviour (*d*ย =ย 0.62, 95% CI 0.14--1.10), but no significant effects on externalising, internalising or dysregulated behaviour.^[@ref60]^ Child development at long-term follow-up {#sec3-12} ---------------------------------------- Three studies examined child development at long-term follow-up (range: 13--56 months).^[@ref52],[@ref60],[@ref61]^ Murray and colleagues^[@ref52]^ found significant positive effects of the counselling intervention (*d*ย =ย 0.64, 95% CI 0.22--1.05) and the psychodynamic intervention (*d*ย =ย 0.49, 95% CI 0.07--0.91) on emotional and behavioural problems when the children were 18 months old, but no effects of the cognitive--behavioural intervention at 18 months old. When the children were 60 months old, a positive significant effect of the cognitive--behavioural intervention on mother-rated emotional and behavioural difficulties (*d*ย =ย 0.61, 95% CI 0.12--1.11) was found, but not on teacher-rated emotional and behavioural difficulties. No significant effects of the two other interventions were found. None of the three interventions showed significant effects on child cognitive development at any of the two follow-up assessments. Kersten-Alvares and colleagues found no significant effects of the intervention on child self-esteem, verbal intelligence, prosocial behaviour, school adjustment and behaviour problems at long-term follow-up.^[@ref60]^ Stein and colleagues found no significant effects on cognitive and language development, behaviour problems, attention focusing, attentional shifting, inhibitory control and child emotion regulation.^[@ref61]^ Discussion {#sec4} ========== Main findings {#sec4-1} ------------- We identified eight papers representing seven trials that examined the effects of parenting interventions offered to pregnant women or mothers with depressive symptoms. As a result of the variety of assessment measures and study designs, only meta-analysis on the parent--child relationship at post-intervention was performed. We found no significant effect of parenting interventions on the parent--child relationship. When we examine the individual study outcomes, there was no significant effect on most of the child development and parent--child relationship outcomes reported in the papers. 
Only two studies found significant effects on a child development outcome, both positive.^[@ref52],[@ref59]^ Four studies found significant effects on a parent--child relationship outcome, three positive and one negative, ranging between a large negative effect on infant involvement and a large positive effect on child responsiveness.^[@ref52],[@ref55],[@ref56],[@ref59]^ The general lack of effect of the interventions aimed at mothers with depressive symptoms is consistent with previous reviews that examine a number of different types of interventions for mothers.^[@ref39]--[@ref44]^ Consequently, there is a need to consider whether the interventions currently offered to mothers with depressive symptoms are appropriate, or if other strategies should be tried out. Previous research shows that the most adverse child outcomes are found in high-risk populations where the depressive symptoms occur in combination with poverty and comorbid psychopathology.^[@ref32]--[@ref35]^ Families with a relatively high socioeconomic status may therefore not be affected as severely by postpartum depression as families with low socioeconomic status. The socioeconomic status of the families included in the studies of this review is relatively high; most participants are of White ethnicity, and only a relatively small number of the participants have no education or limited education and/or a low income. This may contribute to the non-significant effect on the parent--child relationship found in the meta-analysis and the generally non-significant results of the individual studies. Interpretation of our findings and avenues for further research {#sec4-2} --------------------------------------------------------------- Most studies included in this review used a screening questionnaire such as the EPDS to measure the level of depressive symptoms. A questionnaire is easy to use, but is also less precise than a depression diagnosis based on a clinical interview. Although the participants included in this review had a relatively high score on a depression screening questionnaire, they do not necessarily fulfil criteria for a clinical depression. The interventions might be effective if they were examined with a group of mothers with clinical depression, as such mothers may profit more from the intervention. Future studies could therefore be aimed at specific high-risk populations such as socioeconomically deprived mothers or women with clinical depression to examine if interventions are effective within such populations. Although there is a high correlation between antenatal depression and postpartum depression,^[@ref72]^ and the relationship with the child starts to form during pregnancy, none of the interventions of the included studies were initiated during pregnancy.^[@ref73]^ Therefore, screening for depression in pregnancy and starting treatment before the baby is born could be a focus for future research. Previous reviews have pointed out that existing studies on psychological interventions for pregnant women or mothers with depression are few and have small sample sizes.^[@ref40],[@ref41],[@ref43]^ Sample sizes of the studies included in this review ranged from 42 to 190 participants, which may have limited the power to detect significant effects. Although four of the studies included in this review include 100--190 participants, the studies are generally still small in sample size and limited in number in this updated review. We did not find any systematic differences according to study size. 
Future studies should have a larger sample size to increase power. In order to examine mechanisms of change, careful considerations about the need for moderator or mediator analyses should be made *a priori*, as this increases the required sample size considerably.^[@ref74],[@ref75]^ Examples of possible moderators are socioeconomic status and depression level, and a possible mediator could be reflective functioning or sensitivity, depending on the theory of change in the examined intervention. Several factors may moderate the effect of parenting interventions. First, intervention characteristics such as theoretical background may moderate the effect of parenting interventions. The interventions included in this review were, however, relatively comparable with regard to delivery, theoretical background and intensity. Most were home visits conducted by trained therapists, with weekly to monthly visits for a relatively short period. Second, group format is widely used in parenting interventions and achieves change through the dual process of emotional experience and reflection in an interpersonal context.^[@ref76],[@ref77]^ Group sessions provide a support network, reduce isolation and stigma, provide an environment in which to practice interpersonal and communication skills, and shape coping strategies and learning from each other. This may be important for mothers with depression as they may feel alone with their problems. We did not, however, find any studies that employed a group format. A group intervention (circle of security -- parenting) offered to mothers with depression is currently being evaluated in a RCT in Denmark.^[@ref78]^ The group format enables several families to be treated at once, making it cheaper than individual interventions. Finally, in recent years both practice and research have become much more aware of the important role fathers can play.^[@ref79]^ The father--child relationship might be especially important to both the mothers and the children in the context of maternal problems, such as depression, in which the father might buffer negative effects on children\'s socioemotional development. At the same time, non-optimal paternal behaviour and father--child relationships might act as an additional risk factor for problematic child development.^[@ref31],[@ref80],[@ref81]^ It is therefore crucial to consider how fathers can be involved in the support offered to new families. None of the studies in this review, however, included fathers in the intervention. Limitations {#sec4-3} ----------- We chose to include only RCTs or quasi-RCTs in the review to ensure high methodological quality and to minimise the risk of confounding factors. We consider this a strength of the review, but it may have reduced the number of included studies, thereby making it more difficult to find comparable studies for meta-analyses. Likewise, the small number of included studies hindered subgroup analysis, which must be considered a limitation. Although some studies did report short- or long-term follow-up outcomes, it was not possible to conduct meta-analysis on any follow-up outcomes. Consequently, we cannot say anything about the effects over time. Another limitation that stems from the reviewed studies is that although the two most recent studies^[@ref55],[@ref61]^ addressed implementation issues such as details about certification, supervision, fidelity and variation in the number of intervention sessions received, most studies only provide limited information about training. 
Therefore, when comparing across studies, we do not have a clear picture of how well the interventions were delivered and whether the results could have been affected by implementation difficulties. A final limitation is that only studies conducted in Western Organisation for Economic Co-operation and Development countries were included in this review. Since cultural norms and values related to parenting vary considerably across countries,^[@ref82]^ we chose to focus on the effectiveness of interventions offered to families in high-income countries in this review. However, the majority of the world\'s population lives in low- and middle-income countries. Therefore, it is important to conduct similar reviews focusing on studies from low- and middle-income countries. Implications for practice {#sec4-4} ------------------------- This review, based on seven studies, provides no evidence for the effect of parenting interventions for mothers with depressive symptoms on the parent--child relationship immediately after the intervention ended. As meta-analysis for child development or follow-up assessments could not be done, it remains unclear whether there are any effects on these outcomes. Despite the current accepted need to intervene within the first 1000 days of a vulnerable child\'s life, and the fact that parental depression can have serious developmental consequences for the child, we still lack high-quality studies to inform practice about how best to support vulnerable families. We also still lack systematic reviews that examine the effects of interventions for mothers with depression outside high-income countries. The authors would like to acknowledge and thank information specialists Anne-Marie Klint Jรธrgensen and Bjรธrn Christian Arleth Viinholt for running the database searches, Rikke Eline Wendt for being involved in the review process, Therese Lucia Friis, Line Mรธller Pedersen, Louise Scheel Hjorth Thomsen and Emilie Jacobsen for conducting the screening and senior researcher Trine Filges and senior researcher Jens Dietrichson for statistical advice. S.B.R. and I.S.R. were supported by a grant from the Danish Ministry of Social Affairs and Interior. M.P. was supported by the Danish Ministry of Social Affairs and the Interior and grant number 7-12-0195 from Trygfonden. Supplementary material {#sec5} ====================== For supplementary material accompanying this paper visit http://dx.doi.org/10.1192/bjo.2019.89. ###### click here to view supplementary material [^1]: **Declaration of interest:** None. [^2]: EPDS, Edinburgh Postnatal Depression Scale; BDI, Beck Depression Inventory.
USP 700 ULTRABOND STRENGTH BOND ENHANCER
Available in white or blue; 1 gallon jug or 5 gallon pail.
A water-based polymer designed to enhance the bond between two cementitious surfaces. Improves adhesion, compressive and flexural strength when added to plaster, stucco, grouts or any mortar.

USP 777 BOND IT
Available in white or blue; 1 gallon jug or 5 gallon pail.
A masonry bonding adhesive that will permanently bond new concrete, stucco and plaster to concrete or cinder blocks, bricks, concrete slabs, old concrete and plaster. Adding Bond It to mortar results in high resiliency and tensile strength.
Slot 1 cards plug into slot 1 and are all that's needed to play commercial DS ROMs. Slot 2 cards plug into slot 2 and need either a FlashMe firmware replacement or a passcard to operate, but they support both GBA and DS. Slot 1 is becoming increasingly popular, though. For newer cards, all you need is a microSD card, a suitable reader, and the slot 1/slot 2 card itself. If anything else is necessary, it will be included.

Does the R4 card need a passcard in order to play commercial NDS games? Can it run NDS homebrew? Is there a way to make a 1-slot card play GBA ROMs? I don't wanna spend more money just to buy both a 1-slot and a 2-slot card. Would these cards work on an Australian NDS Lite?

Does the R4 card need a passcard in order to play commercial NDS games? No.

Can it run NDS homebrew? Yes, if the homebrew uses DLDI or does not need filesystem access. Old fatlib homebrew will not be able to access files on the microSD card. Nearly all new homebrew works.

Is there a way to make a 1-slot card play GBA ROMs? There is no way; it is completely impossible. Get the EZ-V and the 3-in-1 cart if you want good DS and GBA support as cheap as possible.
Is the Southeast the New Middle East?

While nowhere near as troubled and tangled as the Middle East, Southeast Asia could become America's next major focus, said Asian diplomats gathered for the Asean summit on Indonesia's resort island of Bali. The endless meetings among the diplomats of the ten member nations, defining little-known treaties and organizations such as the "Asean Institute for Peace and Reconciliation" and the "Treaty of Amity and Cooperation", hardly seem like game changers in the cut-throat arena of international politics. However, the buzz in Bali is that Asean – long considered a diplomatic backwater – all of a sudden is getting more global geopolitical attention than it has in decades. U.S. President Barack Obama is scheduled to arrive on Thursday and become the first ever U.S. president to attend an East Asia Summit. He and Secretary of State Hillary Clinton will be winding up a tour of the region, where they unveiled plans to start a new military base in nearby Australia, tighten trade ties with Asean members, renew the U.S.'s military relationships with the Philippines and put its weight behind the region's worries that China is getting too pushy in its claims on areas of the South China Sea.

In its more than 40 years of existence, Asean has been considered mostly a talk shop. Now, the U.S. and others are interested in using it as a tool to promote stability in the region. "It is the second great game after the Middle East that is taking place here," said Kimihiro Ishikane, deputy director general of the Asian and Oceanian affairs bureau of Japan's Ministry of Foreign Affairs, who is taking part in the different discussions in Bali. "It is not a game with arms or battles. It is a great game of institution building."

Though few want to mention the motivation for the change in front of the media, off camera most agree it has to do with the 800-pound panda in the room: China. Asean has benefited greatly from the success of its giant neighbor to the north. China buys its commodities and goods and has become an important source of investment and finance for the region. Still, a growing number of countries are concerned about China's growing shadow. The country, some diplomats worry, sometimes shows signs that it wants to change its dominant economic position into political influence in Southeast Asia. These worries have forced Asean members to become more serious about working together and have also made the group of diverse countries more anxious to keep the U.S. involved in the region, diplomats said. Asean's increased need for superpower friends has coincided with America's decision to untangle itself from some of its commitments in the war on terror, including Afghanistan and Iraq, and turn its focus to the Asia Pacific.

"Asean will be the center of new regional (dynamics), which will be built in a way to convince China that there is no way out," said Ernie Bower, senior adviser and director of the Southeast Asia program at the Center for Strategic and International Studies. "They (China) can grow like crazy and become a real global power but they have to play by the rules." Friendly but firm interactions with China over issues in Southeast Asia such as its South China Sea claims are the best way for the U.S. to signal how it hopes to coexist with the country, diplomats said. "Southeast Asia is non-threatening to anybody," said Mr. Bower.
"It is the natural strategic playing field of great powers."
KlickNation acquired by EA, renamed BioWare Sacramento

Electronic Arts is slowly becoming a giant katamari of social gaming companies. Mere months after acquiring PopCap, the company has announced that it has acquired KlickNation, a developer of free-to-play social role-playing games. KlickNation will be rolled into its BioWare label, and the company will be renamed BioWare Sacramento. The newly acquired team will join BioWare's existing social gaming team in San Francisco to form BioWare Social, which will be focused on the development of RPG experiences for social networks. Former KlickNation CEO Mark Otero will head the combined effort.

While KlickNation may not have the same kind of recognition as the Zyngas of the world, it has launched four games for Facebook since its inception in 2009: Superhero City (pictured above), Age of Champions, Starship Command, and Six Gun Galaxy. According to EA, the company was "known for combining in-depth storylines with high-quality graphics."

"KlickNation's expertise in building innovative and compelling RPGs for social platforms makes them a seamless tuck-in with the BioWare team at EA," BioWare's Dr. Ray Muzyka said in a press release. "We share the same creative values. The new BioWare Social unit will bring BioWare and EA franchises to the growing audience of core gamers who are looking for high quality, rich gameplay experiences on social platforms."
825 N.E.2d 1 (2005)

Matthew E. BURGESS, William M. Burgess, Ronald R. Clark, Yves Dambreville, Michael Escue, Brett M. Flynn, Ronald L. Hamilton, Sr., James L. Loussaint, Christine M. Patterson, Rahman Sharief and Troy M. Whittaker, Appellants-Plaintiffs,
v.
E.L.C. ELECTRIC, INC., Appellee-Defendant.

No. 49A02-0406-CV-504.

Court of Appeals of Indiana.

March 22, 2005.

*2 Robert S. Rifkin, Clinton E. Blanck, Maurer Rifkin & Hill, P.C., Carmel, IN, Attorneys for Appellants.

*3 Michael L. Einterz, Indianapolis, IN, Attorney for Appellee.

Neil E. Gath, Geoffrey S. Lohman, Fillenwarth Dennerline Groth & Towe, Indianapolis, IN, Attorneys for Amicus Curiae Indiana State Building & Construction Trades Council.

Todd A. Richardson, Lewis & Kappes, Indianapolis, IN, Attorney for Amicus Curiae National Electrical Contractors Association, Central Indiana Chapter.

Steve Carter, Attorney General of Indiana, Frances Barrow, Deputy Attorney General, Indianapolis, IN, Attorneys for Amicus Curiae Indiana Department of Labor.

OPINION

SULLIVAN, Judge.

Appellants-Plaintiffs, Matthew Burgess, William Burgess, Ronald Clark, Yves Dambreville, Michael Escue, Brett Flynn, Ronald Hamilton, Sr., James Loussaint, Christine Patterson, Rahman Sharief, and Troy Whittaker (collectively "the Employees"), challenge the trial court's grant of summary judgment in favor of Appellee-Defendant, E.L.C. Electric, Inc. ("E.L.C."). Upon appeal, the Employees claim that the trial court erred in determining that their claims under the Indiana Common Construction Wage Act ("CCWA") are preempted by the federal Employee Retirement Income Security Act ("ERISA"). We reverse and remand.

The record reveals that defendant E.L.C. is an electrical contracting firm based in Indiana. The Employees all worked for E.L.C. at the time relevant to this appeal. E.L.C. performed work on public construction projects in Indiana. Because of this, E.L.C. falls within the ambit of Indiana Code §§ 5-16-7-1 through 5-16-7-5 (Burns Code Ed. Repl. 2001), known as the Indiana Common Construction Wage Act. See Union Township School Corp. v. State ex rel. Joyce, 706 N.E.2d 183, 190 (Ind.Ct.App.1998), trans. denied. Section 1(a) of the CCWA provides that any firm, individual, partnership, limited liability company, or corporation which is awarded a contract by the state, a political subdivision, or a municipal corporation for the construction of a public work, and any subcontractor of the construction, shall pay for each class of work described in subsection (c)(1) on the project a scale of wages that may not be less than the common construction wage. Before advertising such a contract, the "common construction wage" is determined by a committee[1] established by the awarding governmental agency in the county wherein the public works project is located. Id. at § 1(b). This committee meets in the county to determine a classification of the labor to be used on the project, divided into three groups: skilled, semiskilled, and unskilled labor. Id. at § 1(c). The committee must also determine the wage per hour to be paid to each of the classes. Id. For each of the three classes of wages, the wages set shall not be less than the "common construction wage[s]" that are currently being paid in the county where the project is located. Id. at § 1(d). The common wage determinations are not based upon the "average" wage paid but the most commonly paid wage in the community, or, in mathematical terms, the *4 "mode." See Union Township, 706 N.E.2d at 192, 192 n. 7.
Although not explicitly defined in the text of the CCWA, it has been held that the term "wage" includes fringe benefits. Id. at 191. It must be a condition of a contract awarded that the successful bidder and all subcontractors shall comply strictly with the determination of wages made under Section 1. I.C. § 5-16-7-1(h). The CCWA is applicable to projects either owned entirely or leased with an option to purchase by the state or political subdivision. Id. at § 1(j). However, the CCWA is generally not applicable to projects paid for in whole or in part with federal funds, nor to projects whose actual construction costs are less than $150,000. Id. at § 1(i), (k). The Indiana Department of Labor ("IDOL") is charged by statute with the enforcement of Indiana's labor laws, including the CCWA. See Union Township, 706 N.E.2d at 188 (citing Ind.Code § 22-1-1-16 (Burns Code Ed. Repl.1997)).

In 2001, E.L.C. was audited by IDOL to check its compliance with the CCWA. At the end of this audit, IDOL sent written notice to the Employees informing them that, pursuant to the CCWA, they had been underpaid for work they had performed while employed by E.L.C. on three specific public works projects. In conducting the audit, IDOL determined the amount of compensation to be paid by looking at the total compensation paid to the Employees and was not concerned with the breakdown of compensation into actual cash wages or fringe benefits. In the present case, the relevant wage committees which established the common wages did divide the prevailing wage into separate cash wage and fringe benefit components. IDOL's auditors combined the cash wage and fringe benefit components into one sum that each of the Employees should have received. IDOL counted both ERISA plans and non-ERISA plans in calculating fringe benefits. IDOL then combined the cash wages and fringe benefits actually received by the Employees into an amount it referred to as the "Total Paid" by E.L.C. According to IDOL's audit, the total paid did not match the amount the Employees should have received, and the notices sent to the Employees listed the amount that IDOL had determined was owed to each respectively.

In response to the notice letters received from IDOL, the Employees, on October 31, 2001, filed suit against E.L.C. seeking unpaid wages, "liquidated damages," and attorney fees. On February 7, 2003, E.L.C. filed a motion for declaratory judgment asking the trial court to declare the CCWA unconstitutional. Following a hearing held on July 25, 2003, the trial court denied E.L.C.'s motion on October 3, 2003, concluding that the CCWA was not unconstitutional and that E.L.C. had not been deprived of procedural due process. E.L.C. sought permission from the trial court to file an interlocutory appeal from the decision, but the trial court denied its motion.[2] On December 15, 2003, E.L.C. filed a combined motion to dismiss and motion for summary judgment. In this motion, E.L.C. claimed for the first time that the Employees' claims were preempted by ERISA. The Employees responded on January 12, 2004, arguing that their claims did not relate to any ERISA-based fringe benefit plans. There is no indication in the record that a hearing was held on E.L.C.'s combined motion. But on May 5, 2004, the trial court entered findings of fact and conclusions of law granting *5 E.L.C.'s motion for summary judgment. The Employees filed a notice of appeal on May 26, 2004.
Summary judgment is appropriate only if the designated evidentiary material demonstrates that there is no genuine issue as to any material fact and that the moving party is entitled to judgment as a matter of law. Byrd v. Am. Fed. of State, County, & Municipal Employees, Council 62, 781 N.E.2d 713, 718 (Ind.Ct.App.2003). Genuine issues of material fact exist where facts are in dispute concerning an issue which would dispose of the litigation. Id. Upon appeal, we apply the same standard as the trial court and resolve disputed facts or inferences in favor of the non-moving party. Id. This court and the trial court are bound to consider only those matters which were designated to the trial court. Id. The moving party bears the burden of establishing, prima facie, that no genuine issues of material fact exist and that he or she is entitled to judgment as a matter of law. Id. at 718-19. Once the moving party has met this burden, the burden falls upon the non-moving party to set forth specific facts demonstrating a genuine issue for trial. Id. at 719. The non-moving party may not rest upon the pleadings but must set forth specific facts, using supporting materials contemplated under the rule, which show the existence of a genuine issue for trial. Id. Although the party appealing a grant of summary judgment bears the burden of persuading us that the trial court erred, we must carefully scrutinize an entry of summary judgment in order to ensure that the non-prevailing party is not denied his day in court. See id.; Jones v. W. Reserve Group/Lightning Rod Mut. Ins. Co., 699 N.E.2d 711, 713 (Ind.Ct.App.1998), trans. denied. Here, the trial court entered specific findings and conclusions, but this does not alter our standard of review. See Jones, 699 N.E.2d at 714. While such findings and conclusions offer insight into the rationale for the trial court's judgment and facilitate appellate review, they are not binding upon this court. Id.

ERISA

As observed by the Indiana Supreme Court: "The stated purpose of ERISA is to `protect . . . participants in employee benefit plans and their beneficiaries, by requiring the disclosure and reporting to participants and beneficiaries of financial and other information with respect thereto, by establishing standards of conduct, responsibility, and obligation for fiduciaries of employee benefit plans, and by providing for appropriate remedies, sanctions, and ready access to Federal courts.' [] 29 U.S.C. § 1001(b) (1998). ERISA creates a federal statutory claim for recovery of `benefits due to [the beneficiary] under the terms of his plan, to enforce his rights under the terms of the plan, or to clarify his rights to future benefits under the terms of the plan[.]' [] 29 U.S.C. § 1132(a)(1)(B) (1994 & Supp.1997). Suits under § 1132(a)(1)(B) may be brought in either federal or state court. Id. § 1132(e)(1)." Midwest Sec. Life Ins. Co. v. Stroup, 730 N.E.2d 163, 166 (Ind.2000) (emphasis supplied). However, ERISA does not mandate that employers provide any particular benefits. Shaw v. Delta Air Lines, Inc., 463 U.S. 85, 91, 103 S.Ct. 2890, 77 L.Ed.2d 490 (1983). ERISA does contain an explicit preemption provision which states "[e]xcept as provided in subsection (b) of this section, the provisions of this subchapter and subchapter III of this chapter shall supersede any and all State laws insofar as they may now or hereafter relate to any employee benefit plan. . . ." 29 U.S.C. § 1144(a) *6 (2000). See also Stroup, 730 N.E.2d at 166.
In the case at bar, the Employees claim that the trial court erred in concluding that ERISA preempts the CCWA. E.L.C. contends that the trial court did not find that the CCWA was preempted by ERISA. Instead, E.L.C. claims that the trial court "disallowed the [Employees'] claims asserting a state claim where federal law only permits an ERISA claim." Appellee's Br. 8-9. In other words, E.L.C. claims that the trial court did not determine that the CCWA was preempted, but instead disallowed the Employees' claims brought under the CCWA based upon ERISA. In arguing in support of summary judgment to the trial court, E.L.C. argued that the Employees' claims were preempted by ERISA, and the trial court's order granting summary judgment agreed with this argument. To us, this is a distinction without a difference. Whether the question is framed as one of whether ERISA preempts the relevant portions of the CCWA or as whether the Employees' claims brought under the relevant portions of the CCWA[3] are preempted by ERISA, the pertinent issues remain the same. Neither party directs us to an Indiana case directly on point, and our research has revealed none. Yet our courts have addressed ERISA preemption in several cases. Upon appeal, E.L.C. claims that the outcome of the present case is controlled by Edwards v. Bethlehem Steel Corp., 554 N.E.2d 833 (Ind.Ct.App.1990). In that case, the plaintiffs were laborers who entered into a collective bargaining agreement which provided that their employer would, on their behalf, pay certain fringe benefits into three specific funds. The employer had previously contracted to perform construction at a Bethlehem Steel plant. The employer subsequently filed for bankruptcy and did not pay certain wages and fringe benefits to the laborers. Thereafter, the laborers filed a notice of intention to hold a mechanic's lien against Bethlehem Steel and brought suit to foreclose the lien. The trial court granted summary judgment in favor of Bethlehem Steel, finding the laborers' claims preempted by ERISA. Upon appeal, the Edwards court agreed, concluding that the benefit funds at issue fell under ERISA provisions. Id. at 836. The court also concluded that the mechanic's lien statute related to ERISA in that the laborers were using the statute to "enforce conditions of their fringe benefits." Id. at 837. The court rejected the laborers' contention that the fringe benefits were provided as mere "benefits" and not under a "benefit plan." Id. After citing the relevant provisions of ERISA, the court wrote, "The `benefits' here were provided by the employer. . . and by an organization representing the employees . . . pursuant to a collective bargaining agreement. The benefits were provided under a `plan' as described in ERISA, and ERISA preempts Indiana's mechanic's lien statute." Id. In a similar vein, E.L.C. contends that the Employees are "converting" a claim for unpaid fringe benefits into one for unpaid wages, and that ERISA prohibits such claims unless brought under ERISA. The Employees' complaint alleges that E.L.C. "failed to pay Plaintiffs the full amount of fringe benefits each Plaintiff was entitled to receive according to the prevailing scale of wages being paid in the immediate locality for the class of work *7 performed by each Plaintiff." Appellant's App. at 14. 
Although this claim does state that the Employees were not paid fringe benefits, they base their claim not upon the terms of any benefit plan, but upon the common wage the Employees were entitled to receive under the CCWA. Indeed, in their prayer for relief, the Employees requested "damages equal to the amount of unpaid wages representing the difference between the amount each Plaintiff was paid by ELC and the amount each Plaintiff should have received from ELC had he or she been paid the prevailing wage scale rate as required by statute." Id. Thus, the Employees are not seeking to enforce conditions of their fringe benefits as in Edwards, supra, but instead claim that the total wages they were entitled to under the CCWA were not paid because they did not receive the full amount of fringe benefits E.L.C. claims to have paid them. In other words, the Employees do not seek to enforce their fringe benefits under the terms of any benefit plan, but instead seek the cash value of what they allege was owed to them based upon E.L.C.'s obligations under the CCWA. The Employees further point out that the holding in Edwards was later limited by the court in Seaboard Surety Co. v. Indiana State District Council of Laborers & Hod Carriers Health & Welfare Fund, 645 N.E.2d 1121 (Ind.Ct.App.1995). In Seaboard, the plaintiffs were employee welfare benefit plans, labor organizations, and laborers employed by a subcontractor working on a wastewater treatment project. Because the project was a public work, the main contractor, pursuant to Indiana Code ยง 36-1-12-13.1 (Burns Code Ed. Repl.2000), obtained payment and performance bonds from Seaboard Surety. The bonds insured the subcontractor's payment of fringe benefits contributions to its employees as required under a collective bargaining agreement. When the subcontractor defaulted on its contributions and filed for bankruptcy, the plaintiffs brought suit. The trial court concluded that the claim on the surety bond was preempted by ERISA. Upon appeal, the Seaboard court reversed the trial court and held that the claims were not preempted. 645 N.E.2d at 1127-28. The court distinguished the Edwards case, noting that in that case the plaintiffs sought to enforce a lien against a stranger to the obligations to pay benefits and that there was a concern of double recovery. Id. at 1126-27. The Seaboard court held that although a state statute which interferes with an ERISA function is preempted, a supplemental remedy provided by state law which did not interfere with ERISA as did the suit on the bond at issue is not preempted.[4]Id. at 1127. "[A] state law will not automatically be preempted because it provides a supplemental remedy to collect delinquent contributions, unless the state law refers to benefit plans, singles them out for special treatment, or actually conflicts with ERISA." Id. The court stated, "we choose not to extend [the] holding in Edwards beyond its application to the mechanic's lien statute." Id. The court further determined that the plaintiffs' claim did not relate to an employee benefit plan because its effect was "too tenuous, remote, and peripheral" to *8 warrant a finding that the claim "relate[s] to" the plan. Id. Key to this conclusion were that Indiana has required bonds on public works and allowed recovery against those bonds since the nineteenth century and that the bond requirement and remedy are traditional exercises of state authority. Id. 
Too, the challenged laws affected relations between the plan and a surety, not relations among the parties to the ERISA plan, and any effect on the plan itself was merely incidental, i.e. the state laws challenged did not change benefit eligibility or the way benefits are calculated. Id. Here, we note that Indiana has required that employers working on public works pay a prevailing or common wage since the first half of the last century and that such is a traditional exercise of state authority. This does not support a finding of ERISA preemption. See id. However, unlike the laws at issue in Seaboard, the CCWA affects the relations between the parties to the benefit plans ย— the Employer, E.L.C., and the Employees as plan beneficiaries. Nevertheless, as more fully discussed below, we conclude that any effect the CCWA might have on ERISA plans is incidental in nature. In Stroup, supra, our Supreme Court addressed the issue of ERISA preemption of state claims for breach of contract and bad faith. In that case, the Stroups received an ERISA-governed group health insurance policy from Midwest through Mr. Stroup's employment. Midwest approved orthognathic surgery for Mrs. Stroup to correct certain congenital problems with her jaw, but complications arose from this surgery that required another procedure shortly thereafter. Mrs. Stroup continued to have problems with her jaw following the surgeries, and non-surgical treatments were not successful. However, four months after the second surgery, Midwest amended its plan to exclude orthognathic surgery. When Mrs. Stroup sought pre-determination for another surgery, the surgery was not considered as a continuation of the prior treatment but was instead approved under another coverage provision which limited benefits to $1,000 per year. To avoid the costs of surgery, Mrs. Stroup opted to continue non-surgical treatments. Such treatments were unsuccessful, and Mrs. Stroup's jaw eventually fractured, requiring extensive surgeries. Thereafter, the Stroups filed suit against Midwest, later amending their complaint to add claims for breach of contract and bad faith. The trial court held that the claims were not preempted by ERISA, and upon interlocutory appeal, the Court of Appeals reversed.[5] Upon transfer, our Supreme Court held that the Stroups's claims were preempted. Stroup, 730 N.E.2d at 167. Specifically, the court determined that the claims clearly "relate[d] to" employee benefit plans, and therefore fell within the "broad preemption" provisions of ERISA: "These claims are based on Midwest's failure to pay benefits due under an ERISA-governed pension plan. The complaint asks for damages for breach of the insurance contract and for punitive and compensatory damages for the tort of bad faith based on Midwest's denial of coverage under the insurance contract. The claims clearly have connection with and refer to the ERISA plan. The essence of the claims is a failure to supply benefits under the plan. . . . Just as in Ingersoll-Rand [Co. v. McClendon, 498 U.S. 133, 140, 111 S.Ct. 478, 112 L.Ed.2d 474 (1990)], `there simply *9 is no cause of action if there is no plan.'" Id. at 166-67, 111 S.Ct. 478. Here, we are unable to say that if there is no plan, there is no cause of action; the essence of the claim is not the failure to pay benefits due under the terms of any plan, but the failure to make up any obligation under the CCWA in cash. See id. at 167, 730 N.E.2d 163. 
The CCWA operates irrespective of ERISA, and there could be a cause of action whether there was no benefits plan, an ERISA-plan, or a non-ERISA plan. Cf. WSB Elec., Inc. v. Curry, 88 F.3d 788, 793 (9th Cir.1996) (state prevailing wage statute not preempted by ERISA where prevailing wage was measured by reference to employer's fringe benefits, but such costs were calculated without regard to whether they consisted of contributions to ERISA plans and the employer's obligations to pay the prevailing wage did not depend upon the existence or operation of ERISA plans). Since there are no controlling Indiana cases, we look to other jurisdictions for guidance. In doing so, we note that decisions of the United States Supreme Court pertaining to federal issues are binding on state courts but that decisions of lower federal courts, although they may be persuasive, are not binding upon state courts. Ind. Dep't of Pub. Welfare v. Payne, 622 N.E.2d 461, 468 (Ind.1993). In California Division of Labor Standards Enforcement v. Dillingham Construction N.A., Inc., 519 U.S. 316, 324, 117 S.Ct. 832, 136 L.Ed.2d 791 (1997), the United States Supreme Court explained that, since the enactment of ERISA, the Court has "endeavored with some regularity to interpret and apply the `unhelpful text' of ERISA's pre-emption provision." The Court had long acknowledged that ERISA's preemption provision is "clearly expansive," has a "broad scope," an "expansive sweep," and that it is "broadly worded," "deliberately expansive," and "conspicuous for its breadth." Id. (citations and internal quotations omitted). The Court's efforts in applying the preemption provision have yielded a two-part inquiry: a law relates to a covered employee benefit plan for purposes of ERISA's preemption provision if it (1) has a connection with or (2) reference to such a plan. Id. Although earlier decisions liberally interpreted this test and portrayed the scope of ERISA preemption as clearly expansive, more recent Court decisions have been more restrictive. See State v. Phillips, 238 Wis.2d 279, 617 N.W.2d 522, 527 (Ct.App. 2000).[6] Thus, the Court has held that where a State's law acts "immediately and *10 exclusively" upon ERISA plans, or where the existence of ERISA plans is essential to the law's operation, that "reference" will result in preemption.[7]Dillingham, 519 U.S. at 325, 117 S.Ct. 832. (citations omitted). Here, we are convinced that the CCWA does not "refer to" ERISA or ERISA plans. The text of the CCWA makes no mention of ERISA or fringe benefits of any type. Indeed, fringe benefits are considered as a "wage" under the CCWA only pursuant to case law. See Union Township, 706 N.E.2d at 191. Our General Assembly enacted a prevailing wage law decades before the enactment of ERISA. Moreover, the designated evidence favoring the Employees reveals that in applying the CCWA during the audit of E.L.C.'s compliance with the CCWA, IDOL was indifferent to whether an employer's obligations under the CCWA was met through cash payments, contributions toward non-ERISA benefits, or contributions toward ERISA-governed benefit plans. Although IDOL's interpretation of the law is not binding upon this court,[8] we do agree that neither the CCWA nor the definition of "wage" as interpreted by case law makes any distinction between cash payments or fringe benefits. 
As explained by the court in Union Township, a "wage" for purposes of the CCWA includes: "`Every form of remuneration payable for a given period to an individual for personal services, including salaries commissions, vacation pay, dismissal wages, bonuses and reasonable value of board, rent, housing, lodging, payments in kind, tips, and any other similar advantage received from the individual's employer or directly with respect to work for him. [The] term should be broadly defined and includes not only periodic monetary earnings but all compensation for services rendered without regard to the manner in which such compensation is computed.'" 706 N.E.2d at 191 (quoting Johnson v. Wiley, 613 N.E.2d 446, 450 n. 3 (Ind.Ct.App.1993)).[9] This definition makes no distinction between cash payments, fringe benefits governed by ERISA, or fringe benefits not governed by ERISA. Instead, it includes all compensation for services rendered. Thus, it cannot be said that the CCWA acts immediately or exclusively upon ERISA. See Dillingham, 519 U.S. at 325, 117 S.Ct. 832. Moreover, it has been held that a statute "refers to" ERISA if it mentions or alludes to ERISA plans and has some effect on the referenced plans. *11 Curry, 88 F.3d at 793. Like the California prevailing wage scheme at issue in Curry, the CCWA does not force employers to provide any particular benefit or benefit plan, alter any existing plan, or even provide ERISA-governed benefit plans at all. See id. It simply requires employers working on public works projects to pay common wages, and this obligation can be met through any form of compensation for services rendered, including fringe benefits. However, a law which does not "refer to" ERISA may yet be preempted by ERISA if it has a "connection with" ERISA plans. Dillingham, 519 U.S. at 325, 117 S.Ct. 832. To determine whether a state law has the forbidden "connection with" ERISA plans, courts must look to the objectives of the ERISA statute as a guide to the scope of the state law that Congress understood would survive as well as the nature of the effect of the state law on ERISA plans. Id. (citing N.Y. State Conference of Blue Cross & Blue Shield Plans v. Travelers Ins. Co., 514 U.S. 645, 658-59, 115 S.Ct. 1671, 131 L.Ed.2d 695 (1995)). As noted in Dillingham, the Court in Travelers had previously recognized that "an `uncritical literalism' in applying this standard offered scant utility in determining Congress' intent as to the extent of [ERISA's preemption provision]'s reach." Id. at 325, 117 S.Ct. 832 (quoting Travelers, 514 U.S. at 656, 115 S.Ct. 1671). Nevertheless, the Court has emphasized that in preemption jurisprudence, where "federal law is said to bar state action in fields of traditional state regulation, . . . we have worked on the `assumption that the historic police powers of the States were not to be superseded by the Federal Act unless that was the clear and manifest purpose of Congress.'" Travelers, 514 U.S. at 655, 115 S.Ct. 1671 (quoting Rice v. Santa Fe Elevator Corp., 331 U.S. 218, 230, 67 S.Ct. 1146, 91 L.Ed. 1447 (1947)). Accord Dillingham, 519 U.S. at 331, 117 S.Ct. 832 (addressing the "connection with" test with the presumption that ERISA did not intend to supplant the California statute at issue). Here, wage regulation is clearly an area within the historic police powers of the states. See Dillingham, 519 U.S. at 330, 117 S.Ct. 832 ("the wages paid on state public works have long been regulated by the States."). 
Indeed, Indiana has regulated wages paid on public works projects for seventy years. However, that the State has regulated this area does not, of itself, immunize its efforts from the effects of ERISA preemption. See id. Nevertheless, "[t]he wages to be paid on public works projects . . . are . . . quite remote from the areas with which ERISA is expressly concerned ย— `reporting, disclosure, fiduciary responsibility, and the like.'" Id. (quoting Travelers, 514 U.S. at 661, 115 S.Ct. 1671). A reading of ERISA's preemption provision resulting in the preemption of traditionally state-regulated substantive law in those areas where ERISA is silent would be "`unsettling.'" Id. (quoting Travelers, 514 U.S. at 665, 115 S.Ct. 1671). As the Dillingham Court concluded: "Given the paucity of indication in ERISA and its legislative history of any intent on the part of Congress to preempt state apprenticeship training standards, or state prevailing wage laws that incorporate them, we are reluctant to alter our ordinary assumption that the historic police powers of the States were not to be superseded by the Federal Act. Accordingly, as in Travelers, we address the substance of the California statute with the presumption that ERISA did not intend to supplant it." 519 U.S. at 331, 117 S.Ct. 832. (emphasis supplied) (citations and internal quotations omitted). *12 The Dillingham Court contrasted a case in which the state laws at issue were found to have a "connection with" ERISA with one in which the state law was not found to have such a connection. In Shaw v. Delta Air Lines, Inc., 463 U.S. 85, 97, 103 S.Ct. 2890, 77 L.Ed.2d 490 (1983), the Court held that the New York Human Rights Law, which prohibited employers from structuring their employee benefit plans in a manner that discriminated against pregnant women, and New York's Disability Benefits Law, which required employers to pay employees specific benefits, were preempted by ERISA. The Dillingham Court stated that Shaw, and other cases where preemption was found, involved state statutes which mandated employee benefit structures or their administration, and that such requirements amounted to a "connection with" ERISA plans. 519 U.S. at 328, 117 S.Ct. 832. In contrast, in Travelers, supra, the state statute at issue regulated hospital rates and required hospitals to exact surcharges from patients whose hospital bills were paid by certain insurance providers not associated with Blue Cross/Blue Shield. Because ERISA plans were among the purchasers of insurance, the statute was claimed to invoke ERISA preemption. The proponents of preemption argued that the differential rates charged made non-Blue Cross insurance coverage more expensive and less attractive. Therefore, insurance purchasers, including ERISA plans, were encouraged to purchase coverage through Blue Cross. The Travelers Court upheld the statute, finding no preemption. 514 U.S. at 659, 115 S.Ct. 1671. The "indirect economic influence" of the statute did not bind plan administrators to any particular choice and thus did not act as a regulation of an ERISA plan. Id. Nor did the indirect influence preclude uniform administrative practice or the provision of a uniform interstate plan if a such plan was desired. Id. at 660, 115 S.Ct. 1671. 
Thus, the Dillingham Court wrote: "Indeed if ERISA were concerned with any state action ย— such as medical-care quality standards or hospital workplace regulations ย— that increased the costs of providing certain benefits, and thereby potentially affected the choices made by ERISA plans, we could scarcely see the end of ERISA's pre-emptive reach, and the words `relate to' would limit nothing." 519 U.S. at 329, 117 S.Ct. 832. Based upon this Supreme Court precedent, we are unable to conclude in the present case that the CCWA has a "connection with" ERISA for purposes of federal preemption. Although our issue was not the precise issue before it, the Dillingham Court stated that there was little indication in ERISA that Congress intended to preempt state apprenticeship laws "or the state prevailing wage laws that incorporate them." 519 U.S. at 331, 117 S.Ct. 832 (emphasis supplied). The wages to be paid on public works projects are quite remote from the areas with which ERISA is expressly concerned. Id. at 330, 117 S.Ct. 832. Unlike the state law at issue in Shaw, the CCWA does not mandate how employee benefit plans must be structured or administered. In short, we conclude that the CCWA does not have a "connection with" ERISA or ERISA plans sufficient to warrant preemption. Several lower federal courts and state courts have considered the relationship between ERISA and state prevailing wage laws and come to a similar conclusion. See Minnesota Chapter of Associated Builders & Contractors, Inc. v. Minnesota Dep't of Labor & Indus., 47 F.3d 975 (8th Cir.1995) (prevailing wage law not preempted by ERISA where it did not require employer to provide any level of benefit and only *13 affected benefit plans to the extent that the employer chose to include benefits as part of the wage); Keystone Chapter, Associated Builders & Contractors, Inc. v. Foley, 37 F.3d 945 (3d Cir.1994) (ERISA did not preempt state prevailing wage law where that law did not single out ERISA plans for special treatment or even refer to such plans, and in the absence of ERISA, the wage law could be meaningfully applied), cert. denied 514 U.S. 1032, 115 S.Ct. 1393, 131 L.Ed.2d 244 (1995); Phillips, 238 Wis.2d 279, 617 N.W.2d 522 (state prevailing wage law did not reference ERISA sufficient to warrant preemption because the existence of ERISA plans was not essential to the law's operation); Felix A. Marino Co., Inc. v. Comm'r of Labor & Indus., 426 Mass. 458, 689 N.E.2d 495 (1998) (no ERISA preemption of state prevailing wage law where law did not act immediately and exclusively on ERISA plans or depend on existence of such plans); Ironworkers Dist. Council of the Pac. Northwest v. Woodland Park Zoo Planning & Dev., 87 Wash.App. 676, 942 P.2d 1054 (1997) (state prevailing wage law was not preempted by ERISA where the total prevailing wage included a benefits component, but employers were not required to establish benefits programs).[10] The Second Circuit in General Electric Co. v. New York State Department of Labor, 891 F.2d 25 (2d Cir.1989), cert. denied 496 U.S. 912, 110 S.Ct. 2603, 110 L.Ed.2d 283 (1990), held that New York's prevailing wage law as then enforced was preempted by ERISA. Specifically, the State followed a "line-item" approach whereby the Commissioner of the Department of Labor prescribed prevailing benefit levels for each individual type of wage supplement.[11]Id. at 26-27; Burgio & Campofelice, Inc. v. N.Y. State Dep't of Labor, 107 F.3d 1000, 1003 (2d Cir.1997). 
Where the employer did not provide wage supplements at the required level, the wage law required the employer to either bring the cost of its prescribed benefit up to the local prevailing level or pay the additional costs directly in cash. The General Electric court held that this violated ERISA because the employer was required "to make continuous calculations, adjustments and payments" and that such burdens violated the purpose of ERISA. 891 F.2d at 29. We observe, however, that General Electric was decided before the U.S. Supreme Court began to reign in the expansive scope of ERISA preemption. Also, the Ninth Circuit in Curry, supra, held that it is not enough that a state law impose *14 additional administrative burdens regarding benefit contributions upon the employer. See 88 F.3d at 795. Instead, the additional administrative burdens must be placed on the ERISA plan. Id. More importantly, in Burgio, supra, the Second Circuit distinguished the General Electric holding. By the time of the Burgio opinion, New York had abandoned the "line-item" approach and now simply required employers to match the total costs of all the prevailing supplements. 107 F.3d at 1004. Employers were no longer required to match one-for-one the specific prevailing rate for each individual supplement, or even provide that type of supplement. Id. Instead, employers were simply required to match the total costs of all prevailing supplements. Id. As the court wrote, "[the employer]'s total liability would thus be the same whether the subcontractor had bargained to provide benefits exclusively through ERISA plans, exclusively through non-ERISA plans, through additional cash wages, or through some combination of the three." Id. at 1009. The same can be said of the Indiana scheme. The CCWA merely requires that the specified common wage be paid, and the employer's obligation may be met through any form of remuneration, including wages and fringe benefits. See Union Township, 706 N.E.2d at 191. The Second Circuit reaffirmed its position in HMI Mechanical Systems, Inc. v. McGowan, 266 F.3d 142, 151 (2d Cir.2001), holding that ERISA did not preempt the New York prevailing wage scheme where the state sought to examine total contributions and in no way required employers to make particular contributions to an ERISA plan, and the state was not examining employer contributions on a benefit by benefit basis.[12]Cf. Dillingham, 519 U.S. at 327-28, 117 S.Ct. 832 (holding that California's prevailing wage act did not refer to ERISA where the act required payment of prevailing wages to workers in apprenticeship programs that had not received state approval but allowed payment of lower wages to workers in approved apprenticeship programs because the statute functioned irrespective of the existence of an ERISA plan and was indifferent to the funding of apprenticeship programs). E.L.C. also argues that, in determining the value of the fringe benefits given to E.L.C.'s employees, IDOL apparently "disallowed" some of the associated expenses which E.L.C. claims should be counted in determining the cost of the fringe benefits it paid. Specifically, E.L.C. claims that IDOL disallowed certain costs associated with self-administration of the fringe benefits. First, we note that E.L.C. does not explain what it means in claiming that ERISA would "allow" such expenses. E.L.C. cites no provision in ERISA which would require that specific administrative expenses be counted towards an employer's common wage obligations. 
Furthermore, we again state that IDOL's determinations of law are not binding upon courts. Indeed, as the trial court concluded in denying E.L.C.'s motion for a declaratory judgment, which neither party challenges upon appeal: *15 "26. The opinions issued by [IDOL] at the conclusion of the audit process do not affect the legal rights of the employers. Rather, opinion letters are issued pursuant to [IDOL]'s statutory duty to inspect records and communicate its findings in order to monitor and ensure compliance with the labor laws. 27. The letters to employees do not grant employees the right to sue. The letters are advisory only, and thus they do not constitute administrative decisions from which there is a right to an administrative appeal." Appellant's App. at 218 (citations omitted). If the letters and opinions of IDOL are not binding, then IDOL's "disallowance" of certain expenses during the audit do not appear to be grounds for ERISA preemption of the Employees' claims. However, the Employees do appear to base their claims upon the same reasoning as used by IDOL during the audit. So far as we are able to determine, E.L.C. argues that IDOL's enforcement of the CCWA places additional administrative burdens and costs upon it, allegedly in violation of ERISA. The trial court apparently accepted this argument, stating in the order granting summary judgment, "ERISA prohibits state regulations or a state action that places administrative burdens and costs on ERISA plans maintained by E.L.C. Electric, Inc." Appellant's App. at 11. One of the authorities cited for this contention is Keystone, 37 F.3d at 959 n. 19, where the court stated, "While state regulations may affect the cost of doing business in a state, they may not, consistent with ERISA, place administrative burdens and costs on ERISA plans that make it impractical for an employer to provide a nationwide plan." A state law may be preempted by ERISA if it imposes additional administrative requirements for ERISA plans. See Curry, 88 F.3d at 795. Here, to the extent that E.L.C. argues that IDOL's interpretation of the CCWA imposes additional administrative requirements, we disagree. It is not enough that the state law impose additional administrative burdens regarding benefits contributions on the employer. See id. E.L.C. has designated no evidence that it is an ERISA plan administrator. An employee benefit program that is not funded through a separate fund, but is instead paid out of an employer's general assets, is not an ERISA plan. See Dillingham, 519 U.S. at 326, 117 S.Ct. 832. We therefore reject E.L.C.'s claims regarding the imposition of administrative burdens.[13] In essence, the Employees claim that E.L.C., by not paying to them certain fringe benefits E.L.C. claimed to have paid, owed the Employees cash wages based upon the common wages established pursuant to the CCWA. E.L.C. claims that the money it spent should count towards its wage obligations. We agree with the argument of the amici curiae that to the extent that E.L.C. disagrees with IDOL's audit results, it should have the opportunity to challenge such findings at trial when it comes to determining how much, if any, unpaid wages E.L.C. owes the Employees.[14]See Brief of Amicus Curiae Indiana State Building and Construction *16 Trades Council at 15-16; Brief of Amicus Curiae Indiana Department of Labor at 17. 
In conclusion, the CCWA, and the Employees' claims based thereon, are neither "connected with" nor "refer to" ERISA in such a manner as to warrant application of ERISA's preemption provision. The trial court erred in concluding otherwise. To the extent that E.L.C. claims that it did not receive proper credit for certain expenses in IDOL's audit, this is an issue which should be resolved at trial. The judgment of the trial court is reversed, and the cause is remanded for further proceedings consistent with this opinion. NAJAM, J., and BARNES, J., concur. NOTES [1] This committee is made up of five members: a labor representative appointed by the president of the state federation of labor, an industry representative appointed by the awarding agency, a member named by the governor, a local taxpayer appointed by the owner of the project, and a taxpayer appointed by the legislative body of the county where the project is located Id. at ยง 1(b). [2] E.L.C. does not bring a cross-appeal from the trial court's ruling upon this matter. [3] We observe that the text of the CCWA does not create a private cause of action for failure to comply with the requirements of the act. However, in Stampco Construction Co., Inc. v. Guffey, 572 N.E.2d 510 (Ind.Ct.App.1991), the majority, over the dissent of Judge Buchanan, held that such a private cause of action could be brought. [4] E.L.C. makes no mention of this Seaboard case or its effect on the holding in Edwards. E.L.C. does cite to Indiana Carpenters Central and Western Indiana Pension Fund v. Seaboard Surety Co., 601 N.E.2d 352 (Ind.Ct.App.1992), but provides page citations which correspond with the 1995 Seaboard opinion. In any event, E.L.C.'s assertion that ERISA preempts state laws which purport to supplement employee's remedies is unsupported by either case. [5] See Midwest Sec. Life Ins. Co. v. Stroup, 706 N.E.2d 201 (Ind.Ct.App.1999), trans. granted. [6] As noted by the Wisconsin Court of Appeals in Phillips, Justice Scalia, joined by Justice Ginsburg, wrote separately in Dillingham to state that ERISA preemption should be governed by traditional concepts of field preemption and conflict preemption. 617 N.W.2d at 527 n. 4 (citing Dillingham, 519 U.S. at 336, 117 S.Ct. 832 (Scalia, J., concurring)). The Dillingham majority nevertheless applied the two-part test for ERISA preemption. 519 U.S. at 324, 117 S.Ct. 832. However, as further noted in Phillips, the Court in Boggs v. Boggs, 520 U.S. 833, 841, 117 S.Ct. 1754, 138 L.Ed.2d 45 (1997), decided the same term as Dillingham, seemed to further distance itself from the ERISA-specific tests for preemption and applied conflict preemption principles to decide the case. 617 N.W.2d at 527 n. 4. Indeed, the Boggs Court went so far as to say, "We hold that there is a conflict, which suffices to resolve the case. We need not inquire whether the statutory phrase `relate to' provides further and additional support for the pre-emption claim. Nor need we consider the applicability of field pre-emption." 520 U.S. at 841, 117 S.Ct. 1754. But because the Supreme Court has not explicitly abandoned the two-part analysis, we will apply this test. See Phillips, 617 N.W.2d at 527 n. 4. [7] Under this "reference to" prong of the test, the Supreme Court has held preempted a law that imposed requirements by reference to ERISA-covered programs, a law that specifically exempted ERISA plans from an otherwise generally applicable garnishment provision, and a common-law cause of action premised on the existence of an ERISA plan. 
Id. at 324-25, 117 S.Ct. 832 (citations omitted). [8] When an appeal involves an agency's determination of a question of law, we are not bound by that agency's interpretation of the law, but rather we determine whether the agency correctly interpreted and applied the law. Miller Brewing Co. v. Bartholemew County Beverage Co., Inc., 674 N.E.2d 193, 200 (Ind.Ct.App.1996). Although an agency's interpretation of the statutes and regulations which the agency is charged to enforce is entitled to some weight, if an agency's interpretation is erroneous, it is entitled to no weight. Id. Ultimately, however, we are charged with the responsibility of resolving questions of statutory interpretation and thus are not bound by an agency's interpretation of a statute or rule. Id. [9] The Johnson court was in turn quoting from Black's Law Dictionary 1579 (6th ed. 1990). [10] We also note, albeit parenthetically, that Judge Miller of the United States District Court for the Northern District of Indiana came to a similar conclusion in an unpublished memorandum order in Boatman v. Dilling Mechanical Contractors, Inc., 1993 WL 463229 (N.D.Ind.1993). In the order, Judge Miller concluded that ERISA did not preempt the plaintiffs' claims for unpaid wages under the CCWA, and thus the defendant did not have a right to remove the case to federal court. In support of this decision, the court noted that, on its face, the CCWA made no reference to any employee benefit plans, but merely required the establishment of minimum hourly wages and did not regulate pensions or insurance benefit levels. Id. at *2. Despite the fact that the complaint made reference to unpaid health insurance and pension benefits, the court noted that the request for relief was limited to a claim for unpaid wages. Id. at *2-3. Therefore, the complaint was not a complaint about the pensions and did not trigger ERISA preemption and the defendant's right to removal. Id. at *3. [11] These wage supplements were defined to include "all remuneration for employment paid in any medium other than cash, or reimbursement for expenses, or any payments which are not `wages' within the meaning of the law. . . ." General Electric, 891 F.2d at 27. [12] We recognize that there are some courts which have held that state prevailing wage laws were preempted. See, e.g., City of Des Moines v. Master Builders of Iowa, 498 N.W.2d 702 (Iowa 1993); Constr. & Gen. Laborer's Distr. Council of Chicago v. James McHugh Constr. Co., 230 Ill.App.3d 939, 172 Ill.Dec. 740, 596 N.E.2d 19 (1992). However, although we might factually distinguish these cases, we simply decline to adopt the approach of these cases, concluding that the federal and state cases cited above better represent modern ERISA preemption jurisprudence. [13] E.L.C. also claims that the type of expenses IDOL disallowed in the audit of E.L.C. have been allowed in audits of other companies. E.L.C. never fully develops this argument, and we fail to see how this could be grounds for ERISA preemption of the Employees' claims. [14] Of course, in determining this factual issue, the trial court will be bound not to use any method which would conflict with ERISA.
/////////////////////////////////////////////////////////////////////////////// // Name: src/common/addremovectrl.cpp // Purpose: wxAddRemoveCtrl implementation. // Author: Vadim Zeitlin // Created: 2015-01-29 // Copyright: (c) 2015 Vadim Zeitlin <vadim@wxwidgets.org> // Licence: wxWindows licence /////////////////////////////////////////////////////////////////////////////// // ============================================================================ // declarations // ============================================================================ // ---------------------------------------------------------------------------- // headers // ---------------------------------------------------------------------------- // for compilers that support precompilation, includes "wx.h". #include "wx/wxprec.h" #ifdef __BORLANDC__ #pragma hdrstop #endif #if wxUSE_ADDREMOVECTRL #ifndef WX_PRECOMP #endif // WX_PRECOMP #include "wx/addremovectrl.h" #include "wx/private/addremovectrl.h" // ============================================================================ // wxAddRemoveCtrl implementation // ============================================================================ // ---------------------------------------------------------------------------- // common part // ---------------------------------------------------------------------------- extern WXDLLIMPEXP_DATA_ADV(const char) wxAddRemoveCtrlNameStr[] = "wxAddRemoveCtrl"; bool wxAddRemoveCtrl::Create(wxWindow* parent, wxWindowID winid, const wxPoint& pos, const wxSize& size, long style, const wxString& name) { if ( !wxPanel::Create(parent, winid, pos, size, style, name) ) return false; // We don't do anything here, the buttons are created when we're given the // adaptor to use them with in SetAdaptor(). return true; } wxAddRemoveCtrl::~wxAddRemoveCtrl() { delete m_impl; } void wxAddRemoveCtrl::SetAdaptor(wxAddRemoveAdaptor* adaptor) { wxCHECK_RET( !m_impl, wxS("should be only called once") ); wxCHECK_RET( adaptor, wxS("should have a valid adaptor") ); wxWindow* const ctrlItems = adaptor->GetItemsCtrl(); wxCHECK_RET( ctrlItems, wxS("should have a valid items control") ); m_impl = new wxAddRemoveImpl(adaptor, this, ctrlItems); } void wxAddRemoveCtrl::SetButtonsToolTips(const wxString& addtip, const wxString& removetip) { wxCHECK_RET( m_impl, wxS("can only be called after SetAdaptor()") ); m_impl->SetButtonsToolTips(addtip, removetip); } wxSize wxAddRemoveCtrl::DoGetBestClientSize() const { return m_impl ? m_impl->GetBestClientSize() : wxDefaultSize; } #endif // wxUSE_ADDREMOVECTRL
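The panel above is only half of the picture: nothing happens until SetAdaptor() receives a wxAddRemoveAdaptor describing the items control and the add/remove behaviour. The following minimal usage sketch shows one plausible way to wire the control to a wxListBox. Apart from GetItemsCtrl(), which the implementation above calls directly, the adaptor method names (CanAdd, CanRemove, OnAdd, OnRemove) and the assumption that the control takes ownership of the adaptor are based on the public wx/addremovectrl.h interface as the editor understands it for recent wxWidgets releases; verify them against the header of the version actually in use. The class and function names in the sketch itself are invented for illustration.

#include "wx/addremovectrl.h"
#include "wx/listbox.h"

// Hypothetical adaptor for a wxListBox (illustration only, not part of wxWidgets).
class MyListAdaptor : public wxAddRemoveAdaptor
{
public:
    explicit MyListAdaptor(wxListBox* lbox) : m_lbox(lbox) { }

    // The window showing the items; this is the only adaptor method used
    // directly by wxAddRemoveCtrl::SetAdaptor() in the file above.
    wxWindow* GetItemsCtrl() const override { return m_lbox; }

    // Control when the "+" and "-" buttons are enabled.
    bool CanAdd() const override { return true; }
    bool CanRemove() const override
        { return m_lbox->GetSelection() != wxNOT_FOUND; }

    // React to the buttons being pressed.
    void OnAdd() override { m_lbox->Append("New item"); }
    void OnRemove() override { m_lbox->Delete(m_lbox->GetSelection()); }

private:
    wxListBox* const m_lbox;
};

// Somewhere in dialog/frame creation code:
wxAddRemoveCtrl* MakeAddRemovePanel(wxWindow* parent)
{
    wxAddRemoveCtrl* const ctrl = new wxAddRemoveCtrl(parent);

    // The items control is created as a child of the wxAddRemoveCtrl so the
    // internal implementation can lay it out next to the buttons.
    wxListBox* const lbox = new wxListBox(ctrl, wxID_ANY);

    // SetAdaptor() must be called exactly once; it creates the buttons and
    // (by assumption here) takes ownership of the adaptor.
    ctrl->SetAdaptor(new MyListAdaptor(lbox));
    ctrl->SetButtonsToolTips("Add an item", "Remove the selected item");
    return ctrl;
}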
[Two different kinds of total hip arthroplasty for unilateral Crowe IV developmental dysplasia of the hip in adults]. To compare the clinical effects of total hip arthroplasty (THA) without osteotomy and with subtrochanteric osteotomy in the treatment of Crowe type IV developmental dysplasia of the hip (DDH) in adults. Data from 35 Crowe type IV DDH patients who underwent THA were analyzed retrospectively; the patients were divided into two groups: 15 cases without osteotomy and 20 cases with subtrochanteric osteotomy. There was no significant difference in age, gender, or body mass index between the two groups (P>0.05). Operative time, blood loss, hospitalization duration, Harris hip score and limb length discrepancy (LLD) were evaluated. All patients were followed up for 12 to 48 months; no prosthesis loosening or infection had occurred by the end of follow-up. In the non-osteotomy group, 1 case of sciatic nerve injury and 1 case of injury to the cutaneous branch of the femoral nerve occurred, both of which recovered completely without treatment after 3 months. One dislocation occurred in the subtrochanteric osteotomy group; after closed reduction the dislocation did not recur. Three cases had proximal femoral crack fractures and received plate fixation; no reoperation was needed. There were significant differences in operative time, blood loss, and hospitalization days between the two groups (P<0.05). The Harris score at the last follow-up was significantly higher than the preoperative score in both groups (P<0.05), but there was no significant difference between the two groups (P>0.05). The postoperative discrepancy of the bilateral lower limbs differed significantly between the two groups (P<0.05). THA without femoral shortening osteotomy can achieve good clinical results in patients with unilateral Crowe IV developmental dysplasia of the hip. Compared with subtrochanteric osteotomy, the procedure without femoral shortening osteotomy is technically easier. For unilateral high-dislocation DDH patients with limb lengthening <=4 cm and good soft tissue conditions, THA without femoral osteotomy may be considered.
The invention relates generally to a method and apparatus for accurately positioning a focused energy beam, and in particular to a method and apparatus for positioning a focused laser beam, very precisely and at high speed, over a complex integrated circuit surface. When integrated circuits are manufactured, many of the circuits are defective and until recently were disposed of as being uncorrectable. The ratio of the good circuits to the total number of circuits manufactured, often termed the yield of a manufacturing process, is very important to the profitability of a semiconductor manufacturing operation. The higher the yield, the greater the profitability of the operation. As circuits become more and more sophisticated, and correspondingly as the number of elements forming a semiconductor circuit increases, the likelihood of a circuit being defective increases. Consequently, the yield decreases and profitability decreases and/or the price of the more sophisticated semiconductor integrated circuits increases. Often, it is the failure of only a few of the tens or hundreds of thousands of circuit elements, that is, diodes, transistors, etc., within the integrated circuit that causes the entire circuit to be thrown away. Recently, however, with respect to very regular integrated circuits such as memories, integrated circuit manufacturers have included spare circuit elements on the semiconductor chip. Thus, when an integrated circuit is tested prior to encapsulation, a defective element can be replaced by severing the connection thereto and connecting a spare in its place. This process, called memory repair, improves the yield of complex semiconductor circuits dramatically, for example by a factor of two or three. The preferred method for severing connections in the integrated circuit is vaporization, that is, the fine conductors connecting the element to the rest of the circuit are vaporized using a focused laser beam. The conductors are usually several micrometers wide and fabricated of metal or polycrystalline silicon. The laser beam must be aimed with great precision at the semiconductor surface; and the focused spot must be very small, on the order of the width of the conductor being vaporized, so that adjacent conductors are not damaged. At present, the positioning accuracy and spot size required by integrated circuit manufacturers strain the capability of the equipment that is commercially available. As integrated circuit feature dimensions grow smaller, the strain upon present equipment will become even greater. In addition, the integrated circuit overall dimensions are increasing, which, as described below, yet further strains the capability of present commercial equipment. The commercially available equipment which can be employed for "repair" of the integrated circuit, to increase yield, falls into one of two general classes. In accordance with one class, the "X-Y beam positioner", a lens moves in an X-Y coordinate system over the surface of the integrated circuit. The lens must be positioned directly over the conductor to be severed during the vaporization process. A pair of mirrors are also provided. One mirror moves in one dimension (for example the X direction) only, and the other, fixed relative to the lens, moves in two dimensions (X and Y). Together, the mirrors direct a collimated beam from a fixed laser onto the movable lens.
While this system provides high accuracy and small spot size, it is either relatively slow because of a high inertial weight required to minimize vibration of the moving parts; or the vibration resulting from a lower weight construction provides a limit on spot size which is insufficient for future integrated circuit repair applications (as discussed further below). A second class of commercially available equipment, designated the "galvo beam positioner", consists of a fixed lens and a pair of rotating mirrors which change the direction of the collimated light incident thereon from a fixed laser. The laser beam source is directed by the mirrors through the fixed lens toward the semiconductor surface at a designated and changeable angle controlled by the angular positions of the mirrors. The lens transforms the changing direction of light entering its optics into a changing position of a focused beam on the integrated circuit surface. In this commercial equipment, the mirrors are rotated, and hence positioned, through angles determined by the limited angular field of the lens, by galvanometer motors, often referred to as "galvos". While the galvo beam positioner is much faster than the X-Y beam positioner because it has much less inertia, and is vibration free, it does not have the necessary accuracy, spot size, or field of view required by those more sophisticated circuit configurations which have a relatively large surface area. These downside factors exist first because the angular position transducers are not accurate enough when their moment of inertia, and therefore their diameter, is small, and second, because of the demand placed upon lens design and manufacturing for a lens having a high ratio of field size to spot size, for example on the order of 1,000 to 10,000 or more. Thus, in the galvo beam positioner, small spot size conflicts with the large field of view required to cover the entire, and presently growing, size of integrated circuits. (The conflict exists because the angular position errors can be reduced by reducing lens focal length but reduced focal length conflicts with the requirement of a large field of view.) The X-Y positioner on the other hand, meets the needs of memory repair in that its field is large enough, and its spot size has adequate uniformity over the entire field. In addition, its accuracy is sufficient if the design is well engineered and the equipment is operated slowly enough to avoid positioning errors. It has however much more inertia than the galvo beam positioner and is therefore much slower in operation. Practically then, the throughput (repairs per hour) of the X-Y positioner is potentially much less than the galvo beam positioner. In order to minimize this disadvantage, the X-Y positioner is often designed to be lightweight. The lightweight design however reduces the positioner's rigidity and increases its vibration. Thus, while it becomes faster in settling within one limit of error (0.25 millimeters, for example), it becomes slower in settling within much lower limits, (for example, 0.001 millimeters), due to vibration. Thus at the lower limit, the vibration problem has the effect of limiting the effective positioning accuracy. Furthermore, vibration becomes even more of a limitation because no one has yet produced an X-Y beam positioner which is symmetric and dynamically balanced. On the other hand, it is relatively easier to mount mirrors to galvanometers so that the galvo beam positioner is dynamically balanced. 
The X-Y beam positioner also experiences vibration and other repeatability errors in the direction parallel to the optical axis of the lens (i.e. normal to the surface of the integrated circuit). This affects the focus of the laser on the integrated circuit surface. Thus, as the spot size at the surface becomes smaller, for a given wavelength of light, the depth of field decreases and vibration in the direction of the optical axis begins to cause variations in spot size so that the focus, in effect, varies. This sets a limit to the spot size presently obtainable with commercial X-Y beam positioners because those units now in use must be lightweight to compete, commercially (and with respect to speed), with the galvanometer systems. It furthermore turns out that the spot size limitation of commercial X-Y positioners is very close to the limit experienced by the present galvanometer beam positioners. In the galvo systems, however, the causes are completely different and include variation of spot size over the field of view. It is therefore a primary object of the invention to avoid the disadvantages of both the X-Y beam positioning system and the galvanometer beam positioning system while maintaining the advantages of both. Other objects of the invention are providing a method and apparatus for positioning a focused laser beam on an integrated circuit with very high accuracy, small spot size which is uniform over the entire field of the integrated circuit, larger field size than is presently obtainable with commercial galvanometer beam positioners, high speed, minimum vibration, and high reliability.
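A brief quantitative aside on the claim just above that a smaller focused spot implies a shallower depth of field: this follows from the standard Gaussian-beam relation, a textbook result that the patent text itself does not state. For wavelength $\lambda$ and focused waist radius $w_0$,

$$ z_R = \frac{\pi w_0^2}{\lambda}, \qquad \text{depth of focus} \approx 2 z_R \propto w_0^2, $$

so halving the spot radius cuts the usable depth of focus to roughly a quarter, which is why vibration along the optical axis becomes the binding limit as spot sizes shrink.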
A technique for intraoperative bone scintigraphy. A report of 17 cases. Since 1981, intraoperative bone scanning has been used at Stanford University Hospital to assist in the localization and excision of skeletal lesions in the surgical suite. The utility of bone scans to detect lesions not otherwise "visible" is valuable in guiding the surgeon to the pathological site. In addition, intraoperative scanning can define the exact amount of tissue to be excised, averting excessive surgery near joints or along weight-bearing bones. Seventeen cases are presented.
Neutron reflectometry from poly (ethylene-glycol) brushes binding anti-PEG antibodies: evidence of ternary adsorption. Neutron reflectometry provides evidence of ternary protein adsorption within polyethylene glycol (PEG) brushes. Anti-PEG Immunoglobulin G antibodies (Abs) binding the methoxy terminated PEG chain segment specifically adsorb onto PEG brushes grafted to lipid monolayers on a solid support. The Abs adsorb at the outer edge of the brush. The thickness and density of the adsorbed Ab layer, as well as its distance from the grafting surface grow with increasing brush density. At high densities most of the protein is excluded from the brush. The results are consistent with an inverted "Y" configuration with the two FAB segments facing the brush. They suggest that increasing the grafting density favors narrowing of the angle between the FAB segments as well as overall orientation of the bound Abs perpendicular to the surface.
FALCON 9 ROCKET CHANDELIER Some of us are obsessed with Elon Musk. Others, with NASA and intergalactic travel, and finally some with 3D printing. Well, thankfully, technology buffs can finally rejoice. At last, the Falcon 9 3D-printed Rocket Chandelier has arrived. In all honesty, it doesn't look that bad. Instead of firing off highly explosive rocket fuel, these thrusters emit beams of light.
Union urges use of rainy day fund The union representing state employees is countering a proposal to furlough state employees to save money. NAPE/AFSCME Executive Director Julie Dake Abel tells us that such a proposal should be a last resort. Instead, she recommends, among other things, looking at the state's rainy day fund. "There is a lot of money in that rainy day fund that certainly could be tapped into now as things are certainly raining for the state. A couple of other things is there are different agencies that could be combined. They have overall administrative costs, meaning administration, some of the upper level management." As lawmakers prepare for an expected special session to cut the budget, some appropriations committee members have proposed furloughing some state employees to save money. If budget cutting results in furloughs, Nebraska would become the 22nd state to implement such methods.
Trends of performance in Arkansas cooperative beef-bull performance tests from 1962 through 1982. Individual records of performance of 2787 bulls in cooperative beef-bull performance tests from 1962 through 1982 were analyzed to determine trends in final weight, average daily gain, daily feed consumption and gross feed efficiency. Data from Angus, Polled Hereford, Charolais, Hereford and Santa Gertrudis breeds were analyzed separately. Performance-test procedures and the ration fed remained the same in all tests. Least-squares means by year, adjusted for location of test, initial weight, initial age and initial condition score, were regressed on time to determine average yearly change in the traits. Yearly changes for Angus, Polled Hereford, Charolais, Hereford and Santa Gertrudis breeds in final weight were 3.3, 1.6, 3.3, 2.5, and 2.6 kg, respectively; in average daily gain the yearly changes were .02, .01, .02, .02, and .02 kg, respectively; in daily feed consumption the yearly changes were .08, .03, .09, and .04 kg, respectively; and in feed per kg gain the yearly changes were -.08, -.04, -.05, -.03, and -.07 kg, respectively. Because the test procedures were the same in all years, these trends represent genetic progress made by the Arkansas beef industry toward faster-gaining, more efficient cattle.
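As a reminder of the estimator behind the figures above (the abstract itself does not reproduce it), the reported yearly change for each trait is the ordinary least-squares slope of the adjusted yearly least-squares means $\bar{y}_t$ regressed on year $t$:

$$ \hat{b} = \frac{\sum_t (t - \bar{t})(\bar{y}_t - \bar{y})}{\sum_t (t - \bar{t})^2}, $$

with $t$ running over the test years 1962 through 1982.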
Description Four Horses is a tiny independent self-publishing video game developer with a love for classic video games and portable gaming. Their aim is to make games inspired by the earliest generations of video games which can be enjoyed by anyone. History History Michael Waites founded Four Horses in November 2015 as a way to self publish Digger Dan DX on the Nintendo 3DS eShop. Michael started his career in the games industry as a tester but quickly moved into a project management role, though he was always a programmer at heart. Growing up through the earliest generations of home video games, he soon developed an interest in typing in programs from magazines and modifying them to understand how they work. This eventually led to the creation of Digger Dan as a homebrew programming exercise which evolved into a game fit to be published on as Nintendo DSiWare. Though the game was well received, Michael wanted to reach a bigger audience and upgraded the game to work on the Nintendo 3DS at the earliest opportunity. Finally, at the end of 2015, he was in a position to start Four Horses and hire an experienced freelance artist to create a fresh new look for the game. Logo & Icon There are currently no logos or icons available for Four Horses. Check back later for more or contact us for specific requests! Awards & Recognition Selected Articles "Digger Dan & Kaboom is definitely a case of substance over flash. The game's interface is a bit rough around the edges, the character design is so-so, and the visuals have a weird perspective that takes some getting used to. As far as gameplay is concerned, though, Digger Dan is top notch. With a plethora of brilliantly-designed levels that never cease to surprise and excite, and a simple, two-button control system, this is a great game that caters to a wide variety of audiences. Whether you just want to play for a few minutes on the bus or sit down for a lengthy play-session, Digger Dan is a blast."- Jacob Crites, Nintendo Life
/* * * Copyright (c) 2009, Microsoft Corporation. * * This program is free software; you can redistribute it and/or modify it * under the terms and conditions of the GNU General Public License, * version 2, as published by the Free Software Foundation. * * This program is distributed in the hope it will be useful, but WITHOUT * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for * more details. * * You should have received a copy of the GNU General Public License along with * this program; if not, write to the Free Software Foundation, Inc., 59 Temple * Place - Suite 330, Boston, MA 02111-1307 USA. * * Authors: * Haiyang Zhang <haiyangz@microsoft.com> * Hank Janssen <hjanssen@microsoft.com> * */ #include <linux/module.h> #include <linux/init.h> #include <linux/types.h> #include <linux/mm.h> #include <linux/highmem.h> #include <linux/vmalloc.h> #include <linux/ioport.h> #include <linux/irq.h> #include <linux/interrupt.h> #include <linux/sched.h> #include <linux/wait.h> #include <linux/spinlock.h> #include <linux/workqueue.h> #include <linux/kernel.h> #include <linux/jiffies.h> #include <linux/delay.h> #include <linux/time.h> #include <linux/io.h> #include <linux/bitops.h> #include "osd.h" struct osd_callback_struct { struct work_struct work; void (*callback)(void *); void *data; }; void *osd_VirtualAllocExec(unsigned int size) { #ifdef __x86_64__ return __vmalloc(size, GFP_KERNEL, PAGE_KERNEL_EXEC); #else return __vmalloc(size, GFP_KERNEL, __pgprot(__PAGE_KERNEL & (~_PAGE_NX))); #endif } void *osd_PageAlloc(unsigned int count) { void *p; p = (void *)__get_free_pages(GFP_KERNEL, get_order(count * PAGE_SIZE)); if (p) memset(p, 0, count * PAGE_SIZE); return p; /* struct page* page = alloc_page(GFP_KERNEL|__GFP_ZERO); */ /* void *p; */ /* BUGBUG: We need to use kmap in case we are in HIMEM region */ /* p = page_address(page); */ /* if (p) memset(p, 0, PAGE_SIZE); */ /* return p; */ } EXPORT_SYMBOL_GPL(osd_PageAlloc); void osd_PageFree(void *page, unsigned int count) { free_pages((unsigned long)page, get_order(count * PAGE_SIZE)); /*struct page* p = virt_to_page(page); __free_page(p);*/ } EXPORT_SYMBOL_GPL(osd_PageFree); struct osd_waitevent *osd_WaitEventCreate(void) { struct osd_waitevent *wait = kmalloc(sizeof(struct osd_waitevent), GFP_KERNEL); if (!wait) return NULL; wait->condition = 0; init_waitqueue_head(&wait->event); return wait; } EXPORT_SYMBOL_GPL(osd_WaitEventCreate); void osd_WaitEventSet(struct osd_waitevent *waitEvent) { waitEvent->condition = 1; wake_up_interruptible(&waitEvent->event); } EXPORT_SYMBOL_GPL(osd_WaitEventSet); int osd_WaitEventWait(struct osd_waitevent *waitEvent) { int ret = 0; ret = wait_event_interruptible(waitEvent->event, waitEvent->condition); waitEvent->condition = 0; return ret; } EXPORT_SYMBOL_GPL(osd_WaitEventWait); int osd_WaitEventWaitEx(struct osd_waitevent *waitEvent, u32 TimeoutInMs) { int ret = 0; ret = wait_event_interruptible_timeout(waitEvent->event, waitEvent->condition, msecs_to_jiffies(TimeoutInMs)); waitEvent->condition = 0; return ret; } EXPORT_SYMBOL_GPL(osd_WaitEventWaitEx); static void osd_callback_work(struct work_struct *work) { struct osd_callback_struct *cb = container_of(work, struct osd_callback_struct, work); (cb->callback)(cb->data); kfree(cb); } int osd_schedule_callback(struct workqueue_struct *wq, void (*func)(void *), void *data) { struct osd_callback_struct *cb; cb = kmalloc(sizeof(*cb), GFP_KERNEL); if (!cb) { printk(KERN_ERR 
"unable to allocate memory in osd_schedule_callback\n"); return -1; } cb->callback = func; cb->data = data; INIT_WORK(&cb->work, osd_callback_work); return queue_work(wq, &cb->work); }
From Carbon-Free Home to Carbon-Free Office We recently had the pleasure of taking the battle against fossil fuels out of the home and into the workplace. The Abundance Foundation, a local non-profit focusing on all aspects of sustainability, was in cramped quarters with the space they share with Piedmont Biofuels, the fine folks down in Chatham County, North Carolina who are taking the waste stream of used restaurant oil and turning into a renewable fuel for our cars and trucks. So Abundance decided to venture out. Not too far, just into the yard, so they could still share the same kitchen, library, and other facilities they'd been using, but enough room to stretch their legs and contemplate the wide world of pepper varieties being grown by Doug Jones and the rest of the crew at Piedmont Biofarm. Using locally-milled wood and the skilled arms of Green Door Design, they built a modest 10'x12' office. Keeping true to their mission, they decided to build the Office of the Future, which, of course, will not run on fossil fuels. And this is when they invited us in to tinker around on their project. They wanted an off-grid PV system to make their computers and electronic gizmos whirr and hum, and some heat for the wintertime. The office will eventually get a backup biodiesel furnace, but for now it's got two south facing windows and a solar air heater, detailed instructions for which can be found here. To spread the knowledge, we did the installations as workshops. The solar air heater project was built mostly by Stephen's class at Durham Technical Community College, who recently expanded into the world of sustainability by kick-starting their green program. This is an awesome trend among community colleges around the country, who are embracing green jobs and starting to provide the opportunity for local folks to learn how to make a living for themselves and keep the planet alive at the same time. We ran both workshops on the same weekend in early November, which made for some slightly hectic crossings of scaffolding and ladders, and general running around like the proverbial headless chicken, but we got both things up and running by Sunday afternoon. We only had to go back once to fix things (so far, anyways)! With the world gathering in Copenhagen for one final chance at seriously addressing the accumulation of carbon dioxide and the resulting global climate disruption, it's good to know that the pieces of the puzzle are starting to be assembled. If we still had a functioning diesel, we could fill up at Piedmont Biofuels, and if we both worked at Abundance, we could go from our carbon-free home in a carbon-free car to a carbon-free office. There's still a long way to go to make this a reality for everyone (present company included), but it's satisfying to see sketches of what it looks like, and to be able to tell already that it has to look a hell of a lot better than the dying world we've got now.
Since the first US case of coronavirus disease 2019 (COVID-19) infection was identified in Washington State on January 20, 2020, more than 235 000 cases have been identified across the US in just over 2 months. Given the challenges in expanding testing capacity and the restrictive case definition of persons under investigation, the true number of cases is likely much higher. By March 17, the outbreak had expanded from several isolated clusters in Washington, New York, and California to all 50 states and the District of Columbia. As of April 2, there have been more than 5000 COVID-19–associated deaths in the US. With a global total of more than 1 million cases, the US is now the country with the largest number of reported cases, comprising about one-fifth of all reported infections. With community transmission firmly established, the US epidemic enters the exponential growth phase in which the number of new cases is proportional to the existing number of cases. This phase continues until either enough susceptible individuals become immune as a result of infection, stringent public health measures are followed, or both. Case Fatality A yet unanswered question that adds to uncertainty around the outbreak involves the case-fatality rate (CFR), defined as the percentage of deaths among all cases. Presently, global mortality is reported at 4.7% but this varies widely by location from a high of 10.8% in Italy to a low of 0.7% in Germany. Several factors influence the CFR including a reliable estimate of the total number of cases. Among the first 140 904 cases in the US, 1.7% died; however, given the uncertainty in the denominator, this is not a reliable CFR estimate. For example, the crude CFR in Wuhan, China, was reported to be 5.8% on February 1, whereas more methodologically robust estimates using novel methods to estimate the actual number of cases reported the CFR as 1.4%.1 In the coming weeks, surge capacity at US hospitals will influence the CFR. However, to have reliable estimates, better approximations of the overall population (denominator) are essential, and methods such as serosurveys using statistical sampling generalizable to the populations of interest will inform these estimates. New Clinical and Epidemiological Insights Is PCR Always Positive? What Is the Meaning of a Negative PCR? Several types of tests are being used to identify severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2).2 These can be classified into 2 general categories: molecular diagnosis/polymerase chain reaction (PCR)–based testing and serological testing. In clinical settings, PCR-based testing remains the primary method of identifying SARS-CoV-2. Given the lack of a reference standard for diagnosing COVID-19, the sensitivity and specificity of diagnostic testing are unknown. In addition, inadequate sample collection may reduce test sensitivity. In a study of 5 patients, individuals with chest computed tomography findings compatible with COVID-19, and a negative reverse transcriptase (RT)–PCR result for SARS-CoV-2, tested positive on subsequent testing, suggesting that certain patients (eg, with compatible radiological findings) might require repeat testing with specimens collected from multiple sites in the respiratory tract.3 It is likely that lower respiratory samples (eg, minibronchial alveolar lavage) are more sensitive than a nasopharyngeal swab. Thus, it is important to emphasize that, depending on the clinical presentation, a negative RT-PCR result does not exclude COVID-19.
Multiple serological tests are in various stages of development. With wider availability of serological testing, it will be possible to determine whether patients have a false-negative PCR result. Can Patients Become Reinfected? Reports from China and Japan have indicated that some patients with COVID-19 who were discharged from the hospital after a negative RT-PCR result were readmitted and subsequently tested positive on RT-PCR. It is unclear from the available information if these were true reinfections or the tests were falsely negative at the time of initial discharge. However, while other coronaviruses demonstrate evidence of reinfection, this usually does not happen for many months or years. Therefore, it is unlikely that these were true cases of reinfection. Some reassuring evidence comes from a challenge study among rhesus macaques.4 After initial challenge and clearance of SARS-CoV-2, the animals were rechallenged with the virus but were not infected. While the evidence on reinfection is evolving, current data and experience from previous viruses without substantial seasonal mutation do not support this hypothesis. How Long Does Immunity Last? Presently, there is no validated immune correlate of protection for SARS-CoV-2, ie, antibody level or another immunological marker associated with protection from infection or disease. However, in a study that included 82 confirmed and 58 probable cases of COVID-19 from China, the median duration of IgM detection was 5 days (interquartile range, 3-6), while IgG was detected at a median of 14 days (interquartile range, 10-18) after symptom onset.5 Because the outbreak is only a few months old, there are no data on long-term immune response. Data from SARS-CoV-1 indicate that titers of IgG and neutralizing antibodies peaked at 4 months after infection, with a subsequent decline through at least 3 years after infection. Should Everyone Wear a Mask in Public? Current guidelines from the Centers for Disease Control and Prevention (CDC) do not recommend routine use of medical masks among healthy individuals and suggest limiting mask use to health care workers and those caring for patients with COVID-19. However, this guidance is likely to be modified. Regardless, any change in policy should prioritize the availability of masks for health care workers. Priority should also be given to others with risk of exposure such as first responders and incarcerated individuals. Due to the current scarcity of masks, many in the community have begun sewing masks for themselves and for health care workers. A fitted N95 respirator is the preferred type of medical mask for health care workers; however, supplies in the US are very limited. Medical masks are also recommended for symptomatic individuals to prevent them from transmitting the virus. The rationale supporting the recommendations comes from studies finding limited to no efficacy of masks in protecting healthy individuals from influenza infection and also for the need to preserve supplies. However, evidence from influenza studies might not be relevant for COVID-19. For example, in a systematic review, masks, particularly combined with other measures such as handwashing, were found to be effective in preventing SARS-CoV-1 infection.6 Moreover, with the increasing evidence of presymptomatic transmission of SARS-CoV-2, there might be value in the use of masks among individuals at risk of transmission.7 How Does SARS-CoV-2 Spread? 
Current evidence suggests that SARS-CoV-2 is primarily transmitted through droplets (particles 5-10 μm in size). Person-to-person transmission occurs when an individual with the infection emits droplets containing virus particles while coughing, sneezing, and talking. These droplets land on the respiratory mucosa or conjunctiva of another person, usually within a distance of 6 ft (1.8 m) but perhaps farther.8 The droplets can also settle on stationary or movable objects and can be transferred to another person when they come in contact with these fomites. Survival of the virus on inanimate surfaces has been an important topic of discussion. While there are few data, the available evidence suggests that the virus can remain infectious on inanimate surfaces at room temperature for up to 9 days. This time is shorter at temperatures greater than 30° C. The good news is that cleaning and disinfection are effective in decreasing contamination of surfaces, emphasizing the importance of high-touch areas.9 Transmission through aerosols, particles smaller than 5 μm, can also occur under specific circumstances such as endotracheal intubation, bronchoscopy, suctioning, turning the patient to the prone position, or disconnecting the patient from the ventilator. Cardiopulmonary resuscitation is another important aerosol-generating procedure. In a recent study of environmental sampling of rooms of patients with COVID-19, many commonly used items as well as air samples had evidence of viral contamination.10 In the context of the heterogeneity in evidence and possibility of aerosolization of the virus during certain medical procedures, public health agencies (including the CDC) recommend airborne precautions in situations involving patients with COVID-19. When Can Social Distancing Measures Be Lifted? With the exponential increase in US COVID-19 cases and deaths, several jurisdictions have implemented social-distancing measures. Modeling and empirical studies suggest that social-distancing measures can help reduce the overall number of infections and help spread out cases over a longer period of time, thus allowing health systems to better manage the surge of additional patients. However, long-term social distancing can have detrimental effects on physical and mental health outcomes as well as the economy. A few changes may allow for easing restrictions: First, an aggressive program of testing to identify asymptomatic and mild cases combined with proactive contact tracing and early isolation as well as quarantine of contacts. Second, there must be a focus on reducing home-based transmission. In Wuhan, particularly after the initial phase, most transmissions occurred within households. While the CDC has published guidelines for preventing household transmission, it did not place enough emphasis on the importance of having the infected person always wear a mask. Third, even a treatment that only shortens an intensive care unit stay by 20% to 30% can have a substantial benefit on health system capacity. When Will a Vaccine Be Available? The ultimate strategy for controlling this pandemic will depend on a safe and efficacious vaccine against SARS-CoV-2. However, only 3 vaccine candidates are currently in phase 1 human trials: a messenger RNA vaccine and 2 adenovirus vector-based vaccines. The estimated timeline for availability of an initial vaccine is between early and mid-2021.
Conclusions As the COVID-19 outbreak expands in the US, overall understanding of this disease has increased, with more information available now than even a few weeks ago. However, more evidence is needed, particularly for public health and clinical interventions to successfully prevent and treat infections. Even during a pandemic, obtaining rigorous, reliable data is not a distraction, rather it is essential for accurately measuring the extent and severity of COVID-19 and assessing the effectiveness of the response. Back to top Article Information Corresponding Author: Carlos del Rio, MD, Emory University School of Medicine, 49 Jesse Hill Jr Dr SE, FOB Room 201, Atlanta, GA 30303 (cdelrio@emory.edu). Published Online: April 6, 2020. doi:10.1001/jama.2020.5788 Conflict of Interest Disclosures: Dr del Rio reported receiving grants from the National Institutes of Health/National Institute of Allergy and Infectious Diseases. No other disclosures were reported.
Sunday Morning Sunday Morning Norway's Slow TV: Fascinating viewers for hours or days at a time One axiom rarely observed on television nowadays is to "take it slow." And if you think that can't make for gripping TV, Seth Doane wants to tell you there's an entire country that would pointedly disagree: It's television's version of taking a deep breath โ€ฆ a very long, very slow, deep breath. All aboard! Viewers shared a seven-hour train ride through Norway on the first installment of NRK's "Slow TV." NRK2 Rune Moklebust and Thomas Hellum are the brains behind the whole thing. "Did you know where this journey would lead, how successful it would be?" asked Doane. "No idea at all," said Moklebust. "It's normally one of those ideas you get late night after a couple of beers in the bar, and when you wake up the next day, Ahh, it's not a good idea after all." But much to their surprise, there was a green light from their bosses at Norway's public broadcaster NRK2. "We actually like it being a bit strange and a bit crazy, because then it's more fun," said Moklebust. Hellum added, "If the viewers laugh, or think, Wow, this is too crazy, that's basically the kind of reaction you really want from the viewers." About a quarter of all Norwegians tuned in to watch some part of that train trip. They ran historical clips when the train went through a tunnel, but other than some music, there was no narration, no plot, and -- thanks to public broadcasting -- no commercials. Hellum admits his own show is boring. "Yes!" he laughed. "Much of life itself is boring. But in-between, there are some exciting moments, and you just have to wait for them." A scene from "National Firewood Night." NRK2 Since the train, in 2009, they've experimented with other slow ideas, and folks at all levels have taken notice. "I understand that in Norway, for example, one of the big hits on TV is National Firewood Night!" President Obama said at a State Dinner for the heads of five Nordic countries in May 2016. "This is true! Video of logs burning for hours." Twelve hours in all! A "National Knitting Night" started, of course, with shearing the sheep; knitting the sweater came much later in the 13-hour broadcast. The shows, Doane noted, "get slower, and slower, and slower." "National Knitting Night" on NRK2's "Slow TV." NRK2 "Well, it has to be unique -- not a copy of the last one," said Moklebust. "So we have to push the boundaries for each show, I think." So, is there a recipe for the perfect "Slow TV"? "It's important that it's an unbroken timeline, that you don't take away anything," said Hellum. "It's all the boring stuff in there, all the exciting things in there, so you as a viewer has to find out what's boring and what's interesting." "It kind of requires you precisely to slow down, to kind of twist your head in a little bit of a different direction," said Espen Ytreberg, a professor of media studies at the University of Oslo. Ytreberg said when he first heard about "Slow TV," he thought "the whole notion was weird, to tell you the truth. But it turned out that at least some of it I found surprisingly appealing." Ytreberg likens Slow TV to opening a sort of window -- an escape valve -- from what he calls fast-paced, "eye-candy" TV. "When did we come to accept that television should be this accelerated, busy, intense, in-your-face-thing?" he said. "At some point, that became the norm." Rune Moklebust thinks one image sums up their approach: "Once we passed a cow on one of our journeys, and we put a camera on it. 
And the camera just kept rolling, and we didn't cut away. And then you keep it, and you keep it, and then you keep it, and then, suddenly a story evolves: What is this cow doing? Why is it walking there? Where is it heading? Why is the cow alone? "So suddenly, there comes a story out of it, and you have to see what happens." There was plenty of time to follow that cow, because they came across it while shooting an episode, which followed a cruise along Norway's coast.
Q: create C structs using a variable as filename is it possible to do say int filename = 0; typedef struct{ char name; char sname; int number; }foo; foo filename; filename++; foo filename; and have a new foo struct named 1 and another named 2? A: C isn't an interpreted language, so you can't create variable names at runtime. The other way is to create an array holding multiple instances. typedef struct{ char name; char sname; int number; }foo; foo *files = malloc(sizeof(foo)*3); files[0].name = 'A'; files[1].name = 'B'; files[2].name = 'C'; Edit: used malloc instead of new foo[3]
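A slightly fuller sketch of the indexed-array idea from the answer above may help; the count of three and the letter values are purely illustrative, and note that the struct fields are single chars, so character literals rather than string literals are assigned:

#include <stdio.h>
#include <stdlib.h>

typedef struct {
    char name;
    char sname;
    int  number;
} foo;

int main(void)
{
    size_t count = 3;                        /* "how many structs" replaces the changing-name idea */
    foo *files = malloc(count * sizeof *files);
    if (files == NULL)
        return 1;                            /* always check the allocation */

    for (size_t i = 0; i < count; i++) {
        files[i].name   = (char)('A' + (int)i);  /* distinguish instances by index, not by name */
        files[i].sname  = 'x';
        files[i].number = (int)i;
    }

    for (size_t i = 0; i < count; i++)
        printf("files[%zu]: name=%c number=%d\n", i, files[i].name, files[i].number);

    free(files);                             /* release the array when done */
    return 0;
}

The array index plays the role the question wanted the changing variable name to play; if the number of instances is not known in advance, realloc can be used to grow the array.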
Developed by the open-source community since 1998, LAME (Lame Ain't an MP3 Encoder) is the Hydrogenaudio-recommended MP3 encoder. LAME outperforms all commercial MP3 encoders as well as the other free ones. Key features- Encoding and decoding: it is much faster than the ISO reference software, supports MPEG-1, -2 and -2.5 Layer III encoding, and offers CBR (constant bitrate) plus two types of variable bitrate, VBR and ABR. Moreover, the encoding engine can be compiled as a shared library, and it can encode faster than real time on a PII 266 at the highest quality mode. - Modifications and changes: the developers have changed some features to improve them. One of these changes concerns the filter registration information, so that the MP3 audio subtype is correctly reported as supported on the encoder output pin. Thanks to that, third-party encoding applications can use the DirectShow IFilterMapper2 interface to recognize that the LAME encoder supports MP3 output. - The filter merit value used when the filter is registered has been altered so that it now uses the standard DirectShow compressor filter merit value of MERIT_DO_NOT_USE (0x200000). In the previous version, the filter was registered with a value of MERIT_SW_COMPRESSOR (0x100000), which is a lower merit value than MERIT_DO_NOT_USE. This was changed to prevent the LAME Encoder filter from being selected for use by some third-party encoding applications. System requirements- Windows 2000, Windows XP, Windows Vista, Windows 7 Pros- Autotools: fix compilation on Alpha using proper ifdef guards. Thanks to Andres Mejia. - Small correction of the documentation. Cons- A license fee must be paid in some countries in order to legally encode MP3 files
TORONTO โ€” It appears that Kevin Elliott, Tori Gurley and Vidal Hazelton all have new homes. Plus, will Andrew Harris be back in time to take on his former team? That and much, much more in the latest Checking Down: Jump to team: BC LIONS โ€“ Itโ€™s unlikely Shawn Gore will suit up for the Lions this week, which could mean rookie Shaq Johnson makes it onto the roster (Farhan Lalji, TSN). โ€“ If Gore doesnโ€™t play, Stephen Adekolu could be in line for a chance to contribute (Cam Tucker, Vancouver Province). EDMONTON ESKIMOS โ€“ Mike Reilly says itโ€™s too early to think about which route the Eskimos will take in the playoffs and the focus should be on winning (Gerry Moddejonge, Edmonton Journal). โ€“ The Edmonton Eskimos have reportedly signed recently-released receiver Vidal Hazelton (Justin Dunk, 3 Down Nation). CALGARY STAMPEDERS โ€“ While the Stamps eye a record-breaking season, a season-ending bye looms as a potential playoff obstacle as Calgary could have two weeks off between games (Kirk Penton, Postmedia Network). SASKATCHEWAN ROUGHRIDERS โ€“ The Saskatchewan Roughriders have officially signed national linebacker Henoc Muamba. โ€“ While the Riders aim for their third straight win this weekend, Chris Jones says heโ€™s proud of the growth his team has shown (Riderville.com). โ€“ The new Mosaic Stadium saw its first live action with a CIS game over the weekend, and so far so good (Brian Fitzpatrick, Regina Leader-Post). โ€“ Muamba is expected to make his Riders debut against Ottawa this weekend along with two other new faces in Willie Jefferson and Jeff Fuller (Jamie Nye, CFL.ca). WINNIPEG BLUE BOMBERS โ€“ The Bombers have signed recently-released receiver Tori Gurley (CFL.ca). โ€“ Andrew Harris took first-team reps on Thursday and is expected to be a full-go vs. his former team (Darrin Bauming, TSN 1290). HAMILTON TIGER-CATS โ€“ Chad Owens is expected to miss the rest of the 2016 season with a broken foot (Drew Edwards, 3 Down Nation). โ€“ With the sudden rash of injuries, could the Ticats be in on one of the receivers recently released by the Argos? (Gary Lawless, TSN). โ€“ Just a few days after being released from the Argos, is Kevin Elliott headed to the Hammer? (Drew Edwards, 3 Down Nation). TORONTO ARGONAUTS โ€“ What does the future hold for Ricky Ray? More on the Argosโ€™ decision to roll with Drew Willy from this time forward (Gary Lawless, TSN). โ€“ There are many reasons the Argos released their โ€˜big threeโ€™ receivers, but the biggest? Scott Milanovich believes their replacements can be just as productive (CFL.ca). OTTAWA REDBLACKS โ€“ Let the speculation begin with some big names hitting the market this week, but donโ€™t expect the REDBLACKS to add any of them (Tim Baines, Ottawa Sun). โ€“ The REDBLACKS have released veteran receiver Khalil Paden. โ€“ In the latest from The Waggle, Trevor Harris talks about what brought him to Ottawa and his Grey Cup hopes for this season (CFL.ca). MONTREAL ALOUETTES โ€“ The Alouettes have a new head coach but thereโ€™s still uncertainty surrounding the future, specifically as it pertains to General Manager Jim Popp (Herb Zurkowsky, Montreal Gazette).
Materials scientists have created a substance called 'new diamond.' It has the hardness of diamonds, without one of their very significant weaknesses. Unfortunately, the 'new diamonds' don't have their advantageous sparkle either. Learn about the strength behind diamond's 'ugly stepsisters.' Diamond is one of the hardest substances in the world. It's able to withstand scratching and pressure and incredible shocks, or at least that's what we've been told. Although diamonds have been held up as an amalgamation of strength and beauty, their beauty is actually their weakness. They're crystals. A crystal is formed when atoms, in this case carbon atoms, are locked into a repeating structure. The structure is patterned, but it is not symmetrical in every direction. Hit the structure one way and it's hard as, well, a diamond. Hit it another way and it has only a small fraction of the hardness. Enter 'new diamond.' This material is created from a material first made in the 1950s called 'glassy carbon.' Glassy carbon is a hard, difficult-to-melt, glass-like substance that's often used because of its durability. Compress that to 400,000 times atmospheric pressure and you create the dark, slightly shiny material of new diamond. It can take 1.3 million times atmospheric pressure, a feat only matched by diamonds. Unlike diamonds, new diamond is not a crystalline structure but an amorphous material - aka a blob. Since its molecules are oriented any which way, they can be hit any which way and stand up to the same pressure. And what will scientists do with this new wonder material? The first idea is to make new diamond into an anvil, on which they can make even denser and harder materials.
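For readers more used to laboratory units, the pressures quoted above convert directly (a straightforward unit conversion, using 1 atm $\approx$ 101.3 kPa): $4\times10^{5}$ atm $\approx$ 40.5 GPa for the synthesis pressure, and $1.3\times10^{6}$ atm $\approx$ 132 GPa for the pressure the material reportedly withstands.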
Despite the brief awakening offered by the Occupy movement, there really has been very little discontent with how our rulers have dealt with the aftermath of the global financial crisis in places like the US, UK and Canada. As the so-called neoliberal heartland, we might expect that people in these countries would go through the harshest of soul-searching following the collapse of our much-vaunted, although entirely delusional, free-market system. Grumbling and carping aside, people have not taken to the streets, unlike in Spain or Greece, nor have people really mobilized to protest the mass looting of our taxes by the banks and other financial institutions. Maybe we got panicked into consenting to whatever our political and corporate overlords thought best for our futures. I think the reason is very close to home — pun very much intended. It basically has to do with the pacifying effects of homeownership. Most people in the neoliberal heartland — and beyond — are deeply invested in the present financial order, at a personal and often visceral level. Since the 1970s we have become increasingly financially dependent, to a worrying extent, on the combination of rising homeownership and, more importantly, rising house prices. On the one hand, homeownership rates in the US, UK and Canada had been rising for most of the twentieth century, but reached almost 70 percent as the financial markets started to collapse in 2007-’08. Even today, this rate remains around or above 65 percent in these countries, despite the shake-out — or should that be shake-down — of sub-prime borrowers. On the other hand, and during the same period, house prices rocketed, almost doubling since the 1990s, for example. So while homeownership rates rose modestly since the 1970s — if slightly more sharply in the UK — falling interest rates and rising house prices have meant that we have all splurged on borrowed money. All three countries have experienced a major rise in personal debt as a result. Now, homeownership rates are similar or higher in countries like Greece, Spain, Ireland and Portugal — so what is the difference? These countries experienced larger house price bubbles than the US, UK and Canada, and, consequently, their populations lost far more wealth (i.e., housing equity) as a result of the crash. This has been compounded by subsequent rises in unemployment and crippling austerity policies. Ultimately people in these countries now have much less to lose by taking to the streets. So a less palatable reason for our acquiescence in the neoliberal heartland is that for many people in the neoliberal heartland, especially of a certain generation (older, outright homeowners basically), the global financial crisis has had a limited impact in many ways. Anglo-American countries are still sitting pretty, at least somewhat, for now. However, for the next generation of workers and potential homeowners things are not so bright, and this represents a potential threat to the future of our moribund economic system. It’s a threat we can use if we so choose, as I’ll explain below.
We are expected to get on the property ladder as soon as we can afford it. We are expected to mow our lawns, mend our fences, and defend our property whatever the cost. And most people in Anglo-American societies do just that — as those 70 percent homeownership rates illustrate. Economically, this dream has played out through deliberate political strategies by governments to promote homeownership as a way to pacify populations. David Harvey has argued that American suburbanization “altered the political landscape, as subsidized homeownership for the middle classes changed the focus of community action towards the defense of property values and individualized identities, turning the suburban vote towards conservative republicanism.” Culturally, it has played out through the constant bombardment of our senses with various forms of housing porn — TV shows about selling houses, renovating them, moving between them, upsizing, downsizing, ad absurdum. Thankfully, these TV shows have since quietly slunk from our screens, although there is still evidence of their alluring power (see the image above). Why is all of this a problem? While on the surface things may seem pretty, below it they are far uglier. Basically, homeownership both entrenches and hides a number of deep-seated and unyielding problems we face should we want to change our societies. First, and perhaps most important, homeowners — and pretty much everyone else in society — have become increasingly dependent on continually rising house prices in order to maintain their living standards. Real wages, by which I mean wages adjusted for inflation, have stagnated for most people since the 1970s in countries like the US, UK and Canada. Moreover, the austerity aftermath of the global financial crisis has simply reinforced this trend with many people yet to return to pre-crisis earnings. It is understandable that people put so much of their hope in rising house prices (see Graph 1), and so much effort in adding value to their homes (like renovations). Rising house prices have supplanted rising wages for many people, meaning that most people have become dependent on their house to finance things like their own lifestyles, their children’s education and future, medical bills, and so on. Graph: Average UK House Price (left) and Average House Annual Price Change (right) (1952-2010). Source: reproduced from Birch, K. (2015) We Have Never Been Neoliberal, Zer0 Books; data on UK House Prices Since 1952 from Nationwide Building Society. Second, and most perniciously, this dependence on housing and rising house prices breeds complicity with fiscal conservatism, encouraging and supporting the continuing stagnation of real wages, which the non-home-owning 30 percent of the population (the poorest) rely upon for their survival (as do many others). Homeowners are turned into fiscal conservatives through their fear of rising wages, since such inflationary pressures might erode the value of their houses. As pernicious, corporate strategies to enroll sub-prime borrowers (again, the poorest in society) in the homeownership dream — or delusion — involved horrific forms of predatory lending, all of which has been well documented by the financial journalist Matt Taibbi. While some people were rudely ejected from the home-owning dream, it’s not really surprising that the vast majority do not support any truly radical break with the past. Many remain bound, clinging desperately to the same imperative as before: ever-rising house prices.
As a result, inequality is increasingly entrenched across the generations, with parents jockeying with one another to buy housing in the best school districts, lending their kids money for down payments, and so on. Attempts to break this cycle and create greater equity are treated to screaming, braying headlines in right-wing newspapers like The Daily Mail. Meanwhile government-imposed austerity policies reinforce this generational divide as the youngest are hit hardest. Finally, the expansion of homeownership created an enormous economic boom in a number of countries around the world, most of which experienced a (brief or otherwise) hiccup as the global markets came crashing down in 2007-’08. Since then, it is particularly noticeable that house prices in the epicenters of greed like New York, London and Toronto have shot up again — and look like they won’t be coming down any time soon. Even that stalwart mouthpiece of free market capitalism, The Economist, has called London “The bubble that never burst.” Continuously rising house prices, on which we all now depend, are themselves dependent on either selling off our housing stock to overseas investors — already happening — and the creation of “generation rent,” or enticing young, first-time buyers to put their hopes and dreams in the same Ponzi scheme as their parents. And I’m not calling homeownership a Ponzi scheme as some sort of metaphor; it’s the very reason why Alan Walks refers to contemporary capitalism as “Ponzi neoliberalism.” Promises of ever-rising house prices are premised on always being able to enroll new buyers in the property market with the promise of ever-rising house prices, and then letting the cycle start again. Without this promise — just like any other Ponzi scheme — housing would collapse and quickly lose its luster. But, and it’s a big but, first-time buyers wanting to jump aboard the Ponzi train now face a twofold dilemma. On the one hand, since house prices are still rising in cities like New York, London and Toronto — the great centers of work, basically — they (or you) have to move to increasingly isolated and ostracized suburbs. Parts of the UK, for example, are now off limits to most people as average house prices have rocketed to many times average salaries: 14.9 times in Oxford, 13.9 times in London, 12.7 times in Cambridge, 10.9 times in Brighton, and so on. On the other hand, however, since house prices remain stagnant in many other parts of these countries, it’s become incredibly difficult to find anyone willing to sell their property: there’s just too much risk of negative equity. What I find particularly egregious about all this is that we are still supposed to buy into the ‘aspiration’ of homeownership — we are not even considered proper adults or citizens in many cases until we have a mortgage weighing us down. Almost everyone still assumes you will buy a house someday (if you can afford it, that is). It’s such an entrenched assumption that people rarely frame it as a suggestion — “you should buy” — but simply a statement of fact — “when will you buy.” Even amongst (supposedly) leftist scholars and activists the assumption holds: no-one wants to miss out on the golden property ladder it seems, even if property is still theft. What this collective delusion hides is that our grandparents, parents, friends and families are actually desperate for us to join their Ponzi scheme, for without our own desperate scrabbling for somewhere — anywhere!
— to call our own, the value of their houses starts to tremble, crumble, and then fall. This brings me back to my point at the start: a significant proportion of the population have bought into the rise of neoliberalism, or whatever we want to call the transformation of our societies since the 1970s. Can this state of affairs continue? Likely not, especially without the significant and continuous personal indebtedness of future generations. Who can find a house at four times their salary nowadays? That means that new buyers have to go into far more debt than their parents, in order to get less in terms of quality and yet face more risk. Is this just a new, improved form of pacification staring down the barrel of a mortgage? What can we, or you, do about it? Now represents an ideal time to turn your back on it all. You can just not agree to pay all that interest on a house; you can stop financing previous generations through your ballooning debt; you can stop pushing up the value of their homes and their living standards; you can stop buying into the delusion of homeownership. We live in a strange society when one of the most radical decisions you can make is not to buy a house, but maybe that’s precisely what’s needed. At least it’s a good start.
Q: wxPython GUI killed by enthought I am writing a wxPython GUI. For certain functionality this requires that I use the Enthought distribution of python, but when I upgraded to Canopy it completely breaks my GUI. When I call up a certain window, everything freezes and I have to force quit. I don't get any kind of error message or traceback, just a freeze. I am using the 64 bit Canopy, version 1.4.0.1938, and wxPython 2.9.2.4. I am looking for either of two kinds of advice. 1. What is a good debugging protocol in this kind of situation? 2. How to get wxPython and Canopy to play nice? I greatly appreciate any suggestions. I am happy to include any code that might be helpful, but I suspect that this is not particularly specific to my code. edit: I need the Enthought distribution specifically because my GUI builds on older code that uses some of the data analysis and plotting that EPD provides. This GUI actually incorporates and streamlines several older GUIs for analyzing paleomagnetic data. A: (Edited 5/6/2014) My temporary solution was installing the 32 bit version of Canopy and then using the Package Manager to install an older version of wxPython (2.8.10.1). This worked temporarily, until I was able to get to the actual cause of the error. https://support.enthought.com/entries/22601196-wxPython?page=1#post_22146884 The real problem did indeed turn out to be the version of wxPython. The specific issue was calling ShowModal() instead of simply Show() for the main window of the GUI, which apparently worked in wxPython 2.8 but caused a freeze in 2.9. The code now thankfully works with 64 bit Canopy. Thanks to Jonathan March for pointing me in the right direction.
[Search of novel bioactive natural products from plant sources--novel structures and biological activities--]. For over 30 years, our laboratory has been involved in the search for bioactive natural products from plant sources in several plant families, including Rutaceae, Guttiferae, and Avicenniaceae. In this review, novel structures of acridone alkaloids, carbazole alkaloids, coumarins, depsidones, and other compounds isolated in our laboratory will be shown. In addition, some results of assays of the biological activities of the isolated compounds will also be described.
Join the Lindy's color revolution! Mixed Media Tags using new Magicals | Video tutorial with Svetlana Hi, my dear friends! Today is a very good day because I could play with my new Magicals from Alexandra's Artists set. I decided to make a couple of tags and present them to my crafty friends. I made very simple but effective backgrounds and added some embellishments. After shooting the video, I had another look at my tags and glued on some small extra flowers and crystals. I usually do the same – make my project at night and add the last decorations only in the morning with fresh eyes. I began by covering the cardstock with white gesso and used pages from an old book to make my background more complex. Then I added some crackle paste to let the Magicals flow into the crackles. I created the background for the butterfly in the same way: glued an old book page onto the cardstock and covered it with a thin layer of white gesso. To make my tags more interesting I stamped several branches on the watercolor paper with gray archival inks. I did it in advance because I love making fussy cuts while watching different series; it's a kind of evening relaxation for me!