Cell-based protein vaccines for influenza. An overview of influenza vaccines in development is provided, with an emphasis on new cell-based protein vaccine candidates. The currently licensed vaccine is a cost-effective means of reducing the impact of influenza and has a known mechanism of action. Most vaccine companies are focusing on the production of whole influenza viruses in various cell lines to replace egg-based manufacturing technology. Only a limited number of targets have been identified for the development of cell-based protein influenza vaccines; they include hemagglutinin, neuraminidase, M2 and nucleocapsid protein. These protein-based vaccine candidates are discussed, along with their progress in clinical development and the advantages and disadvantages of each vaccine approach.
{ "pile_set_name": "PubMed Abstracts" }
[Laparoscopic versus conventional appendectomy--a comparison with reference to early postoperative complications]. To compare the complications of laparoscopic appendectomy (LA) and conventional appendectomy (CA), 930 consecutive patients treated between 1989 and 1997 were analysed retrospectively. Conventional appendectomy was performed in 330 patients, laparoscopic appendectomy in 554 patients, and another 46 patients required conversion after laparoscopy. The groups were similar in sex ratio, age and degree of inflammation. Postoperative complications occurred in 8.78%. There were fewer complications in the LA group (4.69%) than in the CA group (13.33%) (p < 0.01); in particular, wound infections were less frequent in the LA group (1.8% vs. 11.21%, p < 0.01). The incidence of intraabdominal abscesses was similar in the LA and CA groups (1.44% vs. 1.52%). The differences between the groups were not influenced by complicated appendicitis (perforation or abscess). Systemic complications were similar for LA and CA (0.72% and 0.61%) but were seen more often after conversion (6.52%, p < 0.01). This retrospective analysis shows that laparoscopic appendectomy significantly reduces postoperative complications, especially wound infections. The authors consider laparoscopic appendectomy to be the procedure of choice in patients with acute appendicitis.
{ "pile_set_name": "PubMed Abstracts" }
Q: Django and simple Ajax w/ jQuery

I've run into a huge problem with my Django app. I just want to call a view when the document is ready and pass a value back to JS for its alert function. I've read a lot of Stack Overflow solutions and articles, read the jQuery docs, and implemented a lot of samples - nothing worked.

    <script type="text/javascript">
    $(document).ready(function() {
        $.get("/resultlive/", function(response) {
            //$( ".loading-progress-6" ).css( 'width: ' + response );
            alert(response);
        });
    });
    </script>

and the resultlive view is:

    def resultlive(request):
        task = Task.objects.get(id=1)
        data = task.done
        json_data = json.dumps(data)
        return HttpResponse(json_data, mimetype='application/json')

where work-progress is simply (and only) {{percentageDone}}. On localhost/resultlive I get the proper result, but on the whole template page the console says error 500 (internal server error) with jquery.extend. What do I need to do? I need to automatically refresh the task's done value and change the width of the progress bar without reloading the page. I have spent 5 hours and nothing worked; any suggestions for a jQuery newbie would be appreciated.

EDIT: OK, I've made a new sample project specifically to test responses. I've got a view:

    def ajax(request):
        pcs = Workstation.objects.get(id=1)
        response_data = {}
        try:
            response_data['result'] = 'Success'
            response_data['message'] = list(pcs)
        except:
            response_data['result'] = 'Fail'
            response_data['message'] = 'fail'
        return HttpResponse(json.dumps(response_data), content_type="application/json")

and a JavaScript call fired on an href click:

    <script type='text/javascript'>
    function FireScript() {
        $.ajax({
            type: 'GET',
            url: '/ajax/',
            datatype: 'json',
            async: true,
            data: {},
            success: function(json) { alert(json.message); }
        });
    }
    </script>

It works well, but when I change in the view

    response_data['message'] = list(pcs)

to

    response_data['message'] = pcs.processes_done

the console throws error 500. Why? In plain Python it works fine.

EDIT2: This one retrieves only one row (as expected) and returns two vars to the page via AJAX, gosh... finally:

    def ajax(request):
        pcs = Workstation.objects.get(id=1)
        response_data = {}
        try:
            response_data['result'] = 'Success'
            response_data['message'] = str(pcs.processes_made)
            # serializers.serialize("json", pcs) for more objects
        except:
            response_data['result'] = 'Fail'
            response_data['message'] = 'fail'
        return HttpResponse(json.dumps(response_data), content_type="application/json")

A: Try this in your AJAX view:

    from django.views.decorators.csrf import csrf_exempt

    @csrf_exempt
    def ajax(request):
        pcs = Workstation.objects.get(id=1)
        response_data = {}
        try:
            response_data['result'] = 'Success'
            response_data['message'] = list(pcs)
        except:
            response_data['result'] = 'Fail'
            response_data['message'] = 'fail'
        return HttpResponse(json.dumps(response_data), content_type="application/json")

The only change I added is the @csrf_exempt decorator and its import:

    from django.views.decorators.csrf import csrf_exempt

You could check the Django documentation about cross-site request forgery protection and CSRF tokens.
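The 500 error in the question comes from handing `json.dumps` something it cannot serialize: a Django model instance is neither iterable (so `list(pcs)` raises `TypeError`) nor JSON-serializable on its own. The failure mode can be reproduced in plain Python with no Django at all; `Workstation` below is a hypothetical stand-in for the model in the question:

```python
import json

# Hypothetical stand-in for the Django model instance in the question.
# Arbitrary objects are not JSON-serializable, which is what surfaces
# server-side as the error 500.
class Workstation:
    def __init__(self, processes_done):
        self.processes_done = processes_done

pcs = Workstation(processes_done=7)

try:
    json.dumps({"message": pcs})  # raises TypeError: not JSON serializable
    raised = False
except TypeError:
    raised = True

# Serializing only plain-typed fields works, mirroring the asker's EDIT2 fix:
payload = json.dumps({"result": "Success", "message": pcs.processes_done})
```

In a real view the same principle applies: build the response dict from primitive values (or use Django's serializers for querysets) before calling `json.dumps`.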
{ "pile_set_name": "StackExchange" }
---
author:
- 'Gilbert Moultaka [^1]'
title: The dark matter as a light gravitino
---

Introduction
============

In some instances, the requirement that supersymmetric particle dark matter scenarios ought to be the simplest to handle cosmologically and the least model-dependent seems occasionally to take precedence over the more fundamental question for supersymmetry, namely the origin of supersymmetry breaking. Since there is to date no particularly compelling susy breaking mechanism/model to be preferred to all the others, one should also consider the dark matter issue from a particle physics standpoint which offers different classes of susy breaking mechanisms, irrespective of whether the ensuing cosmological context is “simple" or not. Recent developments [@ISS], [@murayama] stressing the existence of metastable susy breaking vacua have renewed the interest in gauge-mediated susy breaking (GMSB) scenarios, opening new possibilities for model-building [@GR99], and appear to be very interesting from the early-Universe point of view as well [@AK]. On the other hand, the gravitational interactions which play a minor role for susy breaking in GMSB models remain physically relevant through the coupling to supergravity, at least in order to absorb the unphysical goldstino component, to adjust the cosmological constant to a small value and to avoid a massless R-axion. In such scenarios, where supersymmetry breaking is triggered by non-perturbative dynamics of some (secluded) gauge sector and communicated to the MSSM by a messenger sector through perturbative gauge interactions, the susy breaking scale $\sqrt{F}$ and the mass scale $\Lambda$ of the secluded gauge sector can be well below the Planck scale.
Moreover, if these two scales combine to trigger the electroweak symmetry breaking yielding $G_F^{-1/2} \sim (\alpha/4 \pi)\, k\, {F / \Lambda}$, where $G_F$ is Fermi’s constant (and $0< k \le 1$ measures the secludedness of the secluded sector), then the gravitino mass $m_{3/2} \simeq F/(\sqrt{3} m_{\rm Pl}) \sim (4 \pi/ \alpha)\left(\Lambda /(\sqrt{3}\, k\, m_{\rm Pl})\right) G_F^{-1/2}$, where $m_{\rm Pl}$ is the reduced Planck mass, is expected to be very small ($\lesssim {\cal O}(1)$ GeV), and the gravitino is the lightest supersymmetric particle (LSP). The question then arises: which particle can be a good candidate for the cold dark matter (CDM) in this case? To answer this question requires an unconventional treatment as compared to the Neutralino “vanilla" candidate, or even to the heavy gravitino candidate in the context of gravity-mediated susy breaking models. Indeed, in contrast with the latter, where the hidden sector is typically too heavy to be produced at the end of inflation, the secluded and messenger sectors of GMSB provide stable particles that may be present in the early Universe for a sufficiently high reheat temperature $T_{RH}$. We consider hereafter such configurations, assuming that only the messenger (including the spurion) sector can be produced, and illustrate its relevance to the issue of the CDM.

Curing a Messenger Problem
==========================

The mass degeneracy within a supermultiplet of messenger fields is lifted by susy breaking, leading to a lighter and a heavier scalar messenger with masses $M_{A \pm} =M_X(1 \pm {k F/M_X^2})^{1/2}$ and a fermionic partner with mass $M_X$ (where $F$ and $M_X$ are related to the dynamical scale $\Lambda$). Thus ${k F/M_X^2} < 1$. Moreover, one has to require ${k F/M_X} \lesssim 10^5\, \mbox{GeV}$ to ensure an MSSM spectrum $\lesssim {\cal O}(1)\, \mbox{TeV}$.
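The order of magnitude of the gravitino-mass relation quoted above can be checked numerically. The following sketch plugs illustrative values into $m_{3/2} \sim (4\pi/\alpha)\,\Lambda\, G_F^{-1/2}/(\sqrt{3}\,k\,m_{\rm Pl})$; the choices of $\alpha$, $k$ and $\Lambda$ are assumptions for illustration only, not values fixed by the text:

```python
import math

# Illustrative inputs (assumptions, not values fixed by the text):
G_F_inv_half = 293.0   # G_F^{-1/2} in GeV
m_pl = 2.435e18        # reduced Planck mass in GeV
alpha = 0.03           # gauge coupling entering the relation (assumed)
k = 1.0                # secludedness parameter, 0 < k <= 1
Lam = 1.0e7            # secluded-sector scale Lambda in GeV (assumed)

# m_{3/2} ~ (4 pi / alpha) * Lambda * G_F^{-1/2} / (sqrt(3) * k * m_Pl)
m_32 = (4 * math.pi / alpha) * Lam * G_F_inv_half / (math.sqrt(3) * k * m_pl)
# For these inputs m_32 comes out far below 1 GeV (sub-keV range),
# consistent with a very light gravitino LSP.
```

Raising $\Lambda$ toward the Planck scale drives $m_{3/2}$ toward the GeV range, in line with the $\lesssim {\cal O}(1)$ GeV estimate in the text.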
One then expects typically $M_X \gtrsim 10^5\, \mbox{GeV}$. In GMSB models the lightest messenger particle (LMP), with mass $M_{-}$, is stable due to the conservation of a messenger quantum number. If present in the early Universe, the messenger particles are thermalized through their gauge interactions with the thermal bath. The corresponding LMP relic density is calculable similarly to the case of the Neutralino LSP, albeit with an extended particle content and couplings. However, it turns out to be typically too large to account for the CDM, even in the most favorable case of the electrically neutral component of a $\mathbf{5}+\mathbf{\overline{5}}$ representation of $SU(5)$, where it is found to scale with the LMP mass as $\Omega_M h^2 \simeq 10^5 \left({M_{-}/10^3\, \mbox{TeV}}\right)^2$ [@DGP]. The situation is even worse in the case of $SO(10)$, where the LMP is an MSSM singlet with a suppressed annihilation cross-section, leading to a very large relic density. One possible cure to this messenger overclosure problem is to allow the LMP to decay to MSSM particles. This can easily be achieved by adding renormalizable but messenger-number violating operators to the superpotential; however, such low-scale operators would tend to ruin the nice FCNC suppression of GMSB models. A more appealing approach is to insist on messenger number conservation as a consequence of a discrete accidental symmetry at low energy, and to invoke the typical violation of such a (non-gauge) symmetry by gravitational interactions [@ross] once the GMSB model is coupled to supergravity. The LMP decay would then occur via Planck-mass suppressed operators in the Lagrangian, which can originate either directly from effective non-renormalizable operators in the Kähler potential or the superpotential, or indirectly after susy breaking through (holomorphic) renormalizable operators in the Kähler potential. In the latter case the suppression is controlled by the gravitino mass.
In the case of $SU(5)$, an exhaustive study of these operators was carried out in [@JLM04] for messengers transforming as $\mathbf{5}+\mathbf{\overline{5}}$ or $\mathbf{10}+\mathbf{\overline{10}}$. In $SO(10)$ with one set of messengers transforming as $\mathbf{16} + \overline{\mathbf{16}}$, the LMP decay can be induced by non-renormalizable operators in the Kähler potential, e.g. $K \supset \mathbf{16}_F \overline{\mathbf{16}}_M^\dag \mathbf{10}_H/m_{\rm Pl}$, or in the superpotential, e.g. $W \supset \overline{\mathbf{16}}_M {\mathbf{16}}_F {\mathbf{16}}_F \mathbf{10}_H \times m_{\rm Pl}^{-1}$, leading respectively to $2$- and $3$-body decays, where $ \mathbf{16}_M (\overline{\mathbf{16}}_M), \mathbf{16}_F$ and $\mathbf{10}_H$ denote respectively the messenger, the standard matter and the electroweak Higgs supermultiplets. We assume a typical decay width $\Gamma_{\rm LMP} = (1/16\pi) f' M_X^3/m_{\rm Pl}^2$, where $f'$ parameterizes our ignorance of the couplings and possible further phase-space suppression. For couplings of ${\cal O}(1)$, $f'\simeq 1$ ($3 \times 10^{-3}$) for $2$- ($3$-body) decays into essentially massless particles. On the one hand such suppressed decays do not upset the FCNC suppression, and on the other they will actually turn out to be a blessing as regards a solution to gravitino overproduction in the early Universe, eventually allowing the gravitino to account for the [*cold*]{} dark matter in the context of GMSB models.

Gravitino abundance
===================

the cosmological set-up
-----------------------

In favorable parts of the parameter space, the late LMP decay into MSSM particles can release enough entropy to dilute the initial gravitino relic density down to a level which can account for the CDM in the Universe even for very high $T_{RH}$ [@FY02], [@JLM04], [@JLM05].
For this to work, though, the LMP should dominate the energy density of the Universe before it decays, and should decay after the gravitino has frozen out from the thermal bath. The necessary conditions $T_d < T_{MD}, T^f_{3/2}$ \[where $T_d, T_{MD}, T^f_{3/2}$ denote respectively the LMP decay and messenger matter domination temperatures, and the gravitino freeze-out temperature\] are then determined by the particle properties and the annihilation cross-section and decay width of the LMP, delineating the favorable parts of the parameter space. We have studied this scenario in detail for the case of $SU(5)$ [@JLM04] and $SO(10)$ [@JLM05], [@CKLM]. Here we concentrate on the latter case with one set of messengers transforming as $\mathbf{16} + \overline{\mathbf{16}}$. The entropy release $\Delta S \equiv S_{\rm after}/S_{\rm before}$, diluting the initial gravitino density, is determined by the temperatures before and after the LMP decay and can be approximated by $T_{MD}/T_d$. $T_{MD}$ is given by the LMP yield and mass ($T_{MD}\simeq (4/3)M_{-} \times Y_{\rm LMP}$) and $T_d$ is determined by the LMP width ($\Gamma_{\rm LMP} \simeq H(T_d)$). The LMP yield $Y_{\rm LMP}$ is controlled by the LMP annihilation into MSSM particles, which enters the corresponding Boltzmann equation. Since in our case the LMP is an $SU(5)$ singlet [@DGP], [@JLM05], this annihilation proceeds via loop effects of virtual messengers ($A_M, \psi_M$) and spurion ($S$) exchange. We consider here the leading annihilation cross-section into $2$ gluons, fig.1, and parameterize its thermal average as $\langle \sigma_{1{\rm loop}}v\rangle \sim f \times (\alpha_s/4\pi)^2 \kappa^4/s$, where $\kappa$ is the spurion-messenger coupling ($W \supset \kappa \hat{S} \mathbf{16}_M\overline{\mathbf{16}}_M$), $\alpha_s$ the strong coupling constant, $\sqrt{s}$ the C.M. energy and $f$ a form factor depending on the internal masses and couplings.
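The condition $\Gamma_{\rm LMP} \simeq H(T_d)$ fixing the decay temperature can be illustrated numerically. The sketch below combines the decay width $\Gamma_{\rm LMP} = (1/16\pi) f' M_X^3/m_{\rm Pl}^2$ quoted earlier with the standard radiation-era Hubble rate $H(T) = (\pi/\sqrt{90})\sqrt{g_*}\,T^2/m_{\rm Pl}$; the inputs $M_X$, $f'$ and $g_*$ are illustrative assumptions:

```python
import math

m_pl = 2.435e18    # reduced Planck mass, GeV
M_X = 1.0e6        # messenger mass scale in GeV (assumed)
f_prime = 1.0      # 2-body decay into light particles (assumed)
g_star = 10.75     # relativistic degrees of freedom at decay (assumed)

# Planck-suppressed LMP decay width
gamma_lmp = f_prime * M_X**3 / (16 * math.pi * m_pl**2)  # GeV

# Solve Gamma_LMP = H(T_d) with H(T) = (pi/sqrt(90)) sqrt(g_*) T^2 / m_pl
c_h = (math.pi / math.sqrt(90.0)) * math.sqrt(g_star)
T_d = math.sqrt(gamma_lmp * m_pl / c_h)  # GeV
# For these inputs T_d lands in the 10-100 MeV range, i.e. the LMP
# decays well before big-bang nucleosynthesis for this choice of M_X.
```

Heavier messengers decay earlier ($T_d \propto M_X^{3/2}$), which is what makes the BBN constraints in the parameter-space discussion below nontrivial only for light, long-lived LMPs.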
Neglecting the very heavy GUT sector contributions which typically decouple, one finds $$\begin{aligned} && f = \frac{32}{\pi} |(3/8) \bar{g}^2 + M_{A_{-}}^2 C_{-} + ( (3/4) \bar{g}^2 - 1) M_{A_{+}}^2 C_{+} \nonumber \\ && + \; 2 M_X^2( M_{A_{-}}^2 C_{-} + M_{A_{+}}^2 C_{+} + ( s - 4 M_X^2) C_X -1 ) \; D[s]|^2 \nonumber \end{aligned}$$ where $D[s]$ denotes the spurion propagator $(s - M_S^2 + i \Gamma_S M_S)^{-1}$, $C_{\pm, X}$ are $C_0$ functions scaling as $s^{-1}$, and $\bar{g}^2 \equiv 4 \pi \alpha_s/\kappa^2$. Since the messengers in the loops carry color charges, a substantial mass splitting occurs between the contributing $A_{-}$ states and the LMP, due to RGE running from the GUT scale down to $Q^2=M_{-}^2$ as well as to genuine loop corrections [@DGP96], [@JLM05], leading typically to $M_{A_{-}} \simeq 3 M_{-}$. Such effects, as well as the running of $\alpha_s$, should be taken into account for a precise determination of $Y_{LMP}$. On the other hand, the contribution of the scalar spurion depends on its mass and width. The spurion can be either heavier or lighter than the messenger sector. Here we consider only the former case, where the spurion decays at tree level into pairs of messengers or at one loop into MSSM particles.[^2] We find that the decay into MSSM particles dominates irrespective of the value of the coupling $\kappa$, the total width $\Gamma_S$ nevertheless remaining small ($\Gamma_S/M_S \simeq (1 - 4) \times 10^{-3}$). A careful treatment (beyond the relative-velocity expansion) of the thermally averaged annihilation cross-section is thus required close to this narrow resonance, i.e. typically when $M_{-} \simeq M_S/2$, assuming a non-relativistic decoupling of the LMP from the thermal bath.
relic gravitinos
----------------

When the necessary temperature conditions discussed in the previous section are met, the final gravitino relic density is given by $\Omega_{{}_{grav}} = \Omega_{{}_{grav}}^{th}/\Delta S + \Omega_{{}_{grav}}^{{}^{Mess}} + \Omega_{{}_{grav}}^{{}^{NLSP}}$, where the last two contributions denote non-thermal production through late decays or scattering. One should also consider various cosmological constraints (hotness/warmness, BBN, species dilution, etc.). Let us first illustrate the case with some effective fixed values for $f$ and $f'$. This is shown in fig.2, taking $T_{RH} \simeq 10^{12}$ GeV, see also [@JLM04]. The red horizontal shading shows the theoretically excluded region where $k > 1$; the other red shading indicates the region excluded by BBN constraints. If the spurion is heavier than the LMP, gravitino cold DM (green region) occurs for relatively light LMPs and $m_{3/2} \sim 1\, \mbox{keV} - 10\, \mbox{MeV}$. Note that without the LMP-induced entropy dilution, the reheat temperature would have been constrained to be several orders of magnitude lower than $10^{12}$ GeV in order to avoid overclosure for the range of gravitino masses found here. More generally, in the models of ref. [@GMSB] one finds [@JLM05] $\Omega_{grav}h^2 \,\simeq\, 10^3 f^{0.8} \kappa^{3.2} f'^{1/2}\left({M_{-} \over 10^6\,{\rm GeV}}\right)^{-0.3} \times \left({m_{3/2} \over 1\,{\rm MeV}}\right)$ for non-relativistic LMP freeze-out, putting the gravitino relic abundance in the ballpark of the WMAP results for $\kappa \sim {\cal O}(10^{-1})$ and typical ranges of $f$ and $f'$. Considering now the specific one-loop form factor $f$ given in the previous section, and assuming for the sake of illustration a spurion much heavier than the messenger sector, we still find regions consistent with the WMAP results for $\Omega h^2$ at the 99% confidence level, e.g.
for $M_X = 10^6 - 10^8$ GeV, one has $1.1\, \mbox{MeV} < m_{3/2} < 4$ MeV for a three-body-decay LMP, and $65\, \mbox{keV} < m_{3/2} < 230$ keV for a two-body-decay LMP. For heavier spurions the annihilation into 2 gravitinos through tree-level gravitational interactions (see fig.1) becomes significant, as its cross-section scales like $\langle \sigma v\rangle\,\simeq\, (1/ 24\pi) k^2 M_-^2/(m_{3/2} m_{\rm Pl})^2$. It can dominate the 1-loop annihilation, eventually saturating the unitarity limit (the black dashed line in fig.2), thus disfavouring gravitino CDM solutions for $M_{-} \gtrsim 10^8$ GeV.

Conclusion
==========

A light gravitino can account for the CDM irrespective of $T_{RH}$, making it a good DM candidate in GMSB: typically, if $T_{RH} \lesssim 10^5\, \mbox{GeV}$ the messengers are not produced and thermal gravitinos with $m_{3/2} \lesssim 1\, \mbox{MeV}$ provide the right CDM density, while for $T_{RH} \gtrsim 10^5\, \mbox{GeV}$ the messengers can be present and should be unstable, thus providing a source of entropy production that can dilute a thermally overproduced gravitino abundance to a cosmologically acceptable level. Moreover, various constraints (e.g. on $T_{RH}$ [@SP], or on the gravitino mass [@VLHMR]) simply do not apply in the scenarios we have illustrated, thus escaping possible tension with thermal leptogenesis. Finally, let us mention that such scenarios can potentially make it possible to avoid the recently studied cosmological gravitino problem due to inflaton decay [@ETY].

\[fig1\]

\[fig2\] ![image](JLM05_f5){height=".3\textheight"}

[999]{}

K. Intriligator, N. Seiberg and D. Shih, *JHEP 0604:021, 2006*.

see also H. Murayama, these proceedings.

for a review of the earlier GMSB model-building and phenomenology see for instance G.F. Giudice, R. Rattazzi, *Phys. Rept. 322:419, 1999*, and references therein.

S.A. Abel, C.-S. Chu, J. Jaeckel, V.V. Khoze, *JHEP 0701:089, 2007*, *JHEP 0701:015, 2007*; N.J. Craig, P.J. Fox, J.G. Wacker, *PRD 75:085006, 2007*; W. Fischler, V. Kaplunovsky, C. Krishnan, L. Mannelli, M.A.C. Torres, *JHEP 0703:107, 2007*.

S. Dimopoulos, G.F. Giudice, A. Pomarol, *PLB 389:37, 1996*.

L.E. Ibanez, G.G. Ross, *NPB 368:3-37, 1992*.

K. Jedamzik, M. Lemoine, G. Moultaka, *PRD 73:043514, 2006*; see also G. Moultaka, *Acta Phys. Polon. B38:645, 2007*.

M. Fujii, T. Yanagida, *PLB 549:273, 2002*; see also E.A. Baltz, H. Murayama, *JHEP 0305:067, 2003*.

M. Lemoine, G. Moultaka, K. Jedamzik, *PLB 645:222, 2007*.

M. Capdequi-Peyranère, M. Kuroda, M. Lemoine, G. Moultaka, *to appear*.

S. Dimopoulos, G.F. Giudice, A. Pomarol, *PLB 389:37, 1996*; T. Hahn, R. Hempfling, *PLB 415:161, 1997*.

M. Dine, A.E. Nelson, *PRD 48:1277, 1993*; M. Dine, A.E. Nelson, Y. Shirman, *PRD 51:1362, 1995*; M. Dine [*et al.*]{}, *PRD 53:2658, 1996*.

J. Pradler, F.D. Steffen, *PLB 648:224, 2007*; see also F.D. Steffen, these proceedings.

M. Viel, J. Lesgourgues, M.G. Haehnelt, S. Matarrese, A. Riotto, *PRD 71:063534, 2005*.

M. Endo, F. Takahashi, T.T. Yanagida, *arXiv:hep-ph/0701042*, and *PRD 76:083509, 2007* \[arXiv:0706.0986v2\].

[^1]: based on work in collaboration with K. Jedamzik ([*LPTA-Montpellier*]{}), M. Lemoine ([*IAP-Paris*]{}) [@JLM04], [@JLM05], and work in progress with M. Kuroda (Meiji-Gakuin), M. Lemoine (Paris), M. Capdequi-Peyranère (Montpellier).

[^2]: If the spurion is lighter than the LMP, an efficient tree-level annihilation of the latter into a pair of spurions would lead to a too small $Y_{LMP}$.
{ "pile_set_name": "ArXiv" }
Mathias Hafele Mathias Hafele (born 23 December 1983) is an Austrian former ski jumper who competed from 2002 to 2007. His best finish at World Cup level was second place in Engelberg on 21 December 2002, which was his only top 10 result. He also finished third overall in the 2005/06 Continental Cup season. External links Category:1983 births Category:Living people Category:Austrian male ski jumpers Category:People from Lienz District
{ "pile_set_name": "Wikipedia (en)" }
Q: how to display a bitmap sprite in AndEngine?

I am new to AndEngine game development. I am trying to load and display a single sprite, but I get a blank black screen without the sprite. Here is the code:

    public class MainActivity extends SimpleBaseGameActivity {
        static final int CAMERA_WIDTH = 800;
        static final int CAMERA_HEIGHT = 480;
        private static final String TAG = "AndEngineTest";
        private BitmapTextureAtlas mBitmapTextureAtlas;
        private TextureRegion mPlayerTextureRegion;

        @Override
        public EngineOptions onCreateEngineOptions() {
            Camera mCamera = new Camera(0, 0, CAMERA_WIDTH, CAMERA_HEIGHT);
            return new EngineOptions(true, ScreenOrientation.LANDSCAPE_SENSOR,
                    new RatioResolutionPolicy(CAMERA_WIDTH, CAMERA_HEIGHT), mCamera);
        }

        @Override
        protected void onCreateResources() {
            mBitmapTextureAtlas = new BitmapTextureAtlas(this.getTextureManager(), 32, 32,
                    TextureOptions.BILINEAR_PREMULTIPLYALPHA);
            mPlayerTextureRegion = BitmapTextureAtlasTextureRegionFactory.createFromAsset(
                    this.mBitmapTextureAtlas, this, "gfx/face_box.png", 0, 0);
            mBitmapTextureAtlas.load();
        }

        @Override
        protected Scene onCreateScene() {
            this.mEngine.unregisterUpdateHandler(new FPSLogger());
            Scene scene = new Scene();
            scene.setBackground(new Background(3f, 6f, 2f));
            Sprite Player = new Sprite(32, 32, mPlayerTextureRegion, getVertexBufferObjectManager());
            Camera mCamera = new Camera(0, 0, CAMERA_WIDTH, CAMERA_HEIGHT);
            Player.setPosition(mCamera.getWidth() / 2 - Player.getWidth() / 2,
                    mCamera.getHeight() - Player.getHeight() - 10);
            scene.attachChild(Player);
            return scene;
        }
    }

Can anyone tell me what the mistake is here? Any help will be appreciated.

A: You have a few issues with your code:

1. Change unregisterUpdateHandler to registerUpdateHandler.
2. Your Camera and Sprite could/should be global variables referenced from the methods they are used in.
3. Change Player to player (this is just a convention but is useful to follow).
4. When you initialise player, the first 2 parameters are the position, not the size - this will save you having to make the extra method call to setPosition().
5. Write BitmapTextureAtlasTextureRegionFactory.setAssetBasePath("gfx/"); at the beginning of onCreateResources() and change "gfx/face_box.png" to "face_box.png".

1, 2 and 4 are pretty much essential; 3 and 5 are optional, but I would recommend them to simplify things. Does this solve the problem?
{ "pile_set_name": "StackExchange" }
Chinese Archery is a broad view of traditional archery in China as seen through the eyes of historians, philosophers, poets, artists, novelists and strategists from 1500 BC until the present century. The book is written around parallel-text translations of classical Chinese sources, some famous and some little known, in which Chinese writers give vivid...
{ "pile_set_name": "Pile-CC" }
Q: Download file as csv when running script from command line

When I run my PHP script from a browser, I get a CSV file that automatically downloads to the client using $fp = fopen('php://output', "w");, fputcsv and the appropriate header tags. It works very nicely. When I run my script from the command line using php index.php 2014 ("2014" being the argv I am passing in), the contents of what would normally go into the CSV appear in the command-line window instead. How can I get my code to still produce a CSV file when using the command line? If it matters, I am running Ubuntu using Vagrant and VirtualBox.

A: On the command line there is no browser to receive a download, so write the CSV to a file instead of php://output. You can update the line as follows, where $filepath is the output file path:

    $fp = fopen($filepath, "w");
{ "pile_set_name": "StackExchange" }
While it's easy to pinpoint key women of style, the jury's still out with men. So you'd be forgiven for expecting this to be a compendium of three-piece suits, pocket squares and loafers. But Men of Style steps beyond the obvious characterisation of masculine style as a well-coordinated suit. Instead it adopts a more accurate definition: a stylish man dresses in his own way, not the way of everyone else. Those profiled here are a little unexpected. There are, of course, inclusions like James Dean and Miles Davis, but these sit alongside the likes of Pablo Picasso and Bob Marley. You can expect a newfound knowledge of these men's style, right down to the finest quirks. For example, did you know Fred Astaire wore his belt buckle to the side? This review was originally published in Fashion Journal 163. You can read it here.
{ "pile_set_name": "Pile-CC" }
Here are some major results obtained in this research project during the past year: Microarray gene expression studies over ordered categories are routinely conducted to gain insights into the biological functions of genes and the underlying biological processes. Common examples are time-course/dose-response experiments, where a tissue or cell line is exposed to a chemical at different doses and/or for different durations. A goal of such studies is to identify gene expression patterns/profiles over the ordered categories. In some instances data across ordered groups are correlated, for example when repeated measurements are taken on the same subject over time. In this research program we developed methodology that accounts for such correlations. Researchers routinely use historical control data (HCD) when analyzing rodent carcinogenicity data obtained in a particular study. Although the concurrent control group is considered to be the most relevant group to compare with the dose groups, the HCD provides a broader perspective to assist in understanding the significance of the current study. The HCD is used to provide information about the incidences of spontaneous tumors and malignant systemic disorders such as lymphoma and leukemia. In this research program we developed a simple statistical methodology that can be used for comparing the tumor response in the current control group with that of the historical controls. We demonstrate that the commonly used historical-range-based methodology can result in an unacceptably high false positive rate, whereas the proposed method controls the false positive rate at the desired nominal level.
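The false-positive-rate problem with historical-range methodology can be illustrated with a toy Monte Carlo simulation (a sketch under simple binomial assumptions, not the authors' method): flag the current control group whenever its tumor incidence falls outside the min-max range of the historical control groups, and count how often that happens when all groups in fact share the same spontaneous tumor rate.

```python
import random

random.seed(0)

def trial(m_hist=20, n=50, p=0.10):
    """One simulated study: m_hist historical control groups and one
    current control group, all with the same true tumor incidence p."""
    hist = [sum(random.random() < p for _ in range(n)) / n for _ in range(m_hist)]
    current = sum(random.random() < p for _ in range(n)) / n
    # "Historical range" rule: flag the study if the current control
    # incidence falls outside the historical min-max range.
    return current < min(hist) or current > max(hist)

reps = 5000
fp_rate = sum(trial() for _ in range(reps)) / reps
# fp_rate estimates the false positive rate of the range rule; for
# exchangeable groups it lands well above a 1% nominal level.
```

The group sizes, incidence rate, and number of historical studies above are arbitrary illustrative choices; the qualitative point, an inflated false positive rate relative to the nominal level, holds across reasonable settings.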
{ "pile_set_name": "NIH ExPorter" }
Allogeneic hematopoietic stem cell transplantation in patients with diffuse large B cell lymphoma relapsed after autologous stem cell transplantation: a GITMO study. Patients who relapse after an autologous hematopoietic stem cell transplantation (SCT) have a very poor prognosis. We have retrospectively analyzed diffuse large B cell lymphoma patients who underwent an allo-SCT after an auto-SCT relapse reported in the Gruppo Italiano Trapianto di Midollo Osseo (GITMO) database. From 1995 to 2008, 3449 autologous transplants were reported in the GITMO database. Eight hundred eighty-four patients relapsed or progressed after transplant; 165 patients, 19% of the relapsed patients, were treated with allo-transplant. The stem cell donor was related to the patient in 108 cases. A reduced intensity conditioning regimen was used in 116. After allo-SCT, 72 patients (43%) obtained a complete response and 9 obtained a partial response with an overall response rate of 49%; 84 patients (51%) experienced rapid progression of disease. Ninety-one patients died, 45 due to disease and 46 due to treatment-related mortality. Acute graft-versus-host disease was recorded in 57 patients and a chronic GvHD in 38 patients. With a median follow-up of 24 months (2-144) after allo, overall survival (OS) was 39%, and after a median of 21 months (2-138) after allo, progression-free survival (PFS) was 32%. Multivariate analysis indicated that the only factors affecting OS were status at allo-SCT, and those affecting PFS were status at allo-SCT and stem cell donor. This retrospective analysis shows that about one-fifth of patients with diffuse large B cell lymphoma who experience relapse after autologous transplantation may be treated with allogeneic transplantation. Moreover, the only parameter affecting either OS or PFS was the response status at the time of allo-SCT.
{ "pile_set_name": "PubMed Abstracts" }
1. Introduction {#sec1}
===============

Chronic kidney disease is a worldwide health problem that carries a significant risk of cardiovascular morbidity and mortality. Endogenous filtration markers have been used as tests of kidney function, with serum creatinine the most widely applied. Estimated glomerular filtration rate (eGFR) based on serum creatinine (eGFR~creat~) does not fully account for non-GFR determinants of creatinine (muscle mass, race, age, and gender). An alternative endogenous serum biomarker, cystatin C, has been proposed for estimating renal function and can replace or supplement serum creatinine. In multiple studies it has been shown to be more sensitive than serum creatinine or eGFR~creat~ for predicting adverse events. This marker has also shown greater sensitivity in detecting mild reductions in renal function and has improved the identification of patients at higher cardiovascular risk in epidemiological studies \[[@B1]--[@B3]\]. The association of cystatin C with metabolic syndrome and classic cardiovascular risk factors is also well documented \[[@B4]--[@B6]\]. This association may reflect the inflammatory components of the syndrome. A positive correlation between cystatin C and inflammation parameters, including interleukin-6, resistin, tumor necrosis factor, and C-reactive protein, has been reported \[[@B6]\]. Cystatin C is emerging as a new biomarker in cardiovascular disease \[[@B7]\]. All of the above suggests that cystatin C could be more useful for predicting adverse clinical events and could become a clinical tool to optimize the estimation of glomerular filtration rate \[[@B3]\].
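Estimating equations of the kind referred to here as eGFR~cyst~ combine serum cystatin C with age and sex. As a concrete illustration, the sketch below implements the CKD-EPI 2012 cystatin C equation with coefficients as commonly published; they should be verified against the original publication before any real use:

```python
def egfr_cystatin(scys, age, female):
    """CKD-EPI 2012 cystatin C equation (sketch; coefficients as
    commonly published, verify before real use).

    scys: serum cystatin C in mg/L; returns eGFR in mL/min/1.73 m^2.
    """
    ratio = scys / 0.8
    egfr = (133.0
            * min(ratio, 1.0) ** -0.499
            * max(ratio, 1.0) ** -1.328
            * 0.996 ** age)
    if female:
        egfr *= 0.932
    return egfr

# Example: a 60-year-old man with serum cystatin C of 1.0 mg/L
value = egfr_cystatin(1.0, 60, female=False)
```

Unlike creatinine-based equations, this formula needs no race or body-size term, which is part of cystatin C's appeal as a complementary filtration marker.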
The purpose of our study was to analyze whether cystatin C and eGFR formulas based on cystatin C (eGFR~cyst~) identify a subgroup of patients with an increased risk of progression of renal failure, cardiovascular events, and overall mortality among a group of selected patients, improving the standard method of creatinine (eGFR~creat~) for the diagnosis and followup of renal failure. 2. Subjects and Methods {#sec2} ======================= We conducted a longitudinal, observational, and retrospective cohort study of a sample extracted from 589 patients referred to the Nephrology clinic between June 2005 and May 2011, derived from Primary Care with the diagnosis of renal failure, defined by an estimated glomerular filtration rate (eGFR~creat~) \< 90 mL/min/1.73 m² confirmed in a second determination in a 3-month period. Those who had at least one cystatin C determination in this period were selected. Those with thyroid dysfunction or inflammatory pathology or receiving steroid treatment, all factors known to alter the concentration of serum cystatin C, were excluded, and only those with a minimum followup of 3 months were included. In the end, 180 patients were selected. The Nephrology Clinic covers the Health Care Area of the town of Leganes, a suburb near Madrid, with a population of 187,227 inhabitants registered during the study period. Cardiovascular events (heart failure, acute myocardial infarction, and stroke) and mortality for both cardiovascular events and other causes were registered during followup. An acute myocardial infarction was diagnosed when there was evidence of myocardial necrosis in association with clinical signs of myocardial ischemia. Necrosis was diagnosed on the basis of a rising or falling pattern of the cardiac troponin level. Stroke (ischemic or hemorrhagic) was defined as an acute reduction of cerebral blood flow causing transient or permanent loss of neurologic function. 
Acute decompensated heart failure was diagnosed on the basis of the presence of at least one symptom (dyspnea, orthopnea, or edema) and one sign (rales, peripheral edema, ascites, or pulmonary vascular congestion on chest radiography) of heart failure. Death was documented in the medical report released to the Admission Service. Therefore, patients who may have died at home or in another center were not registered. Cardiovascular events were documented from the medical reports used during hospitalization and/or the emergency services. Events occurring in other centers were included only when a medical report of the corresponding center recorded the fact. A renal event was defined as the development of eGFR~creat~ ≤ 20 mL/min/1.73 m^2^ during the follow-up. 2.1. Analytical Methods {#sec2.1} ----------------------- The serum concentration of cystatin C was measured using nephelometry (BN II, Siemens). Albuminuria determination was conducted in first-morning voided urine using the albumin/creatinine ratio. In cases of albuminuria values \> 400 mg/g creatinine, proteinuria determination was performed in a 24 h urine collection. 
Glomerular filtration rate (GFR) at baseline and during followup was estimated by the following equations: $$\begin{matrix} {\text{eGFR-EPI-cyst} = 127.7 \times \text{cyst}^{- 1.17} \times \text{age}^{- 0.13}} \\ {\quad \times \left( {0.91{\,\,}\text{if}{\,\,}\text{female}} \right) \times \left( {1.06{\,\,}\text{if}{\,\,}\text{black}} \right)} \\ \end{matrix}$$ (see \[[@B8]\]), $$\begin{matrix} {\text{eGFR-EPI-creat} = 141 \times \min\left( {\text{SCr}/\kappa,1} \right)^{\alpha}} \\ {\quad \times \max\left( \text{SCr}/\kappa,1 \right)^{- 1.209} \times 0.993^{\text{age}}} \\ {\quad \times 1.018\left( \text{if}{\,\,}\text{female} \right) \times 1.159\left( \text{if}{\,\,}\text{black} \right),} \\ \end{matrix}$$ where SCr is serum creatinine in mg/dL, *κ* is 0.7 for females and 0.9 for males, *α* is −0.329 for females and −0.411 for males, min indicates the minimum of SCr/*κ* or 1, and max indicates the maximum of SCr/*κ* or 1 \[[@B9]\]. 2.2. Statistical Analysis {#sec2.2} ------------------------- In the descriptive study, results are expressed as mean and standard deviation or median and interquartile range for continuous variables, depending on whether or not they followed a normal distribution. Qualitative variables were expressed as absolute and relative frequencies. The variables cystatin C and creatinine were grouped into tertiles. We calculated the incidence rate of cardiovascular events and mortality for these tertiles. Cox regression analysis was performed to calculate the risk of events, adjusted for age, sex, BMI, previous cardiovascular events and tobacco consumption. For the contrasts, univariate analysis of variance or Kruskal-Wallis test or logistic regression was performed. For multivariate analysis we used binary logistic regression, adjusted for age, sex, BMI, previous cardiovascular events and tobacco consumption. The selection of variables was performed by the Wald method. Confidence intervals were calculated at 95%. 
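For readers who want to sanity-check values, the two estimating equations above translate directly into a few lines of Python. This is an illustrative sketch of the published CKD-EPI formulas, not the study's actual analysis code; the function and variable names are ours:

```python
# Illustrative transcription of the CKD-EPI estimating equations quoted
# above (eGFR in mL/min/1.73 m^2). The coefficients follow the published
# CKD-EPI equations; this is not the study's analysis code.

def egfr_epi_cyst(cyst, age, female=False, black=False):
    """CKD-EPI cystatin C equation (cyst in mg/L)."""
    egfr = 127.7 * cyst ** -1.17 * age ** -0.13
    if female:
        egfr *= 0.91
    if black:
        egfr *= 1.06
    return egfr

def egfr_epi_creat(scr, age, female=False, black=False):
    """CKD-EPI creatinine equation (scr in mg/dL)."""
    k = 0.7 if female else 0.9            # kappa
    alpha = -0.329 if female else -0.411  # published exponents
    egfr = (141
            * min(scr / k, 1) ** alpha
            * max(scr / k, 1) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr
```

For a 50-year-old non-black man with a serum creatinine of 0.9 mg/dL, for instance, this yields an eGFR of roughly 99 mL/min/1.73 m².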
The level of statistical significance was *α* \< 0.05. All analyses were performed using SPSS version 15.0 (SPSS Inc., Chicago, IL). The primary end point was to analyze the risk of overall mortality in relation to renal function. As secondary end points we analyzed the development of cardiovascular events (fatal and nonfatal) or entry on dialysis for ESRD. 3. Results {#sec3} ========== 3.1. Baseline Characteristics of the Study Population {#sec3.1} ----------------------------------------------------- The total number of patients was 180 (52% females). Median age was 75 (69--82) years and mean followup was 36.55 ± 15.98 months. [Table 1](#tab1){ref-type="table"} describes the baseline characteristics of the study sample by tertile category of cystatin C and serum creatinine at baseline. Patients with higher levels of cystatin C (third tertile) were older and had higher levels of creatinine and lower eGFR~creat~ and eGFR~cyst~. Baseline cholesterol was lower in patients with higher cystatin C levels. Patients with higher levels of creatinine (third tertile) were predominantly male and had higher cystatin C, while eGFR~creat~ and eGFR~cyst~ were lower. The rest of the variables analyzed showed no differences in either of the two categories. 3.2. Incidence of Cardiovascular Events and Overall Mortality {#sec3.2} ------------------------------------------------------------- Total followup of the study was 525 person-years. The incidence of global cardiovascular events was 306/1000 person-years ([Table 2](#tab2){ref-type="table"}) without differences between the patients with higher levels of cystatin C or creatinine. Nonfatal cardiovascular events (235/1000 person-years) showed no difference by category of either cystatin C or creatinine. Compared with the third tertile, patients in the second tertile of cystatin C had a lower risk of fatal cardiovascular events (HR = 0.198, 95% CI: 0.040--0.987). 
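The bookkeeping behind these rates is simple enough to sketch. The helpers below are our own illustration (not the SPSS analysis used in the study) of how tertile assignment and incidence per 1000 person-years are computed:

```python
# Illustrative sketch of the descriptive statistics used above:
# tertile assignment and crude incidence per 1000 person-years.
# These helpers and their names are ours, not the study's SPSS code.

def tertile(all_values, x):
    """Return 1, 2 or 3 depending on which tertile of all_values x falls in."""
    s = sorted(all_values)
    cut1 = s[len(s) // 3]
    cut2 = s[2 * len(s) // 3]
    if x <= cut1:
        return 1
    if x <= cut2:
        return 2
    return 3

def incidence_per_1000_person_years(events, person_years):
    """Crude incidence rate per 1000 person-years of follow-up."""
    return 1000.0 * events / person_years
```

For example, 18 deaths over 176 person-years (the third cystatin C tertile of Table 2) gives an incidence of about 102 per 1000 person-years.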
Global mortality was also lower for the first cystatin C tertile (HR = 0.060, 95% CI: 0.008--0.447) and for the second tertile (HR = 0.094, 95% CI: 0.022--0.406) with respect to the third tertile. For the tertiles of creatinine we also found a lower risk in the first and the second than in the third (HR = 0.178, 95% CI: 0.039--0.805 and HR = 0.329, 95% CI: 0.115--0.442, resp.). Causes of death were cardiovascular events in 12 patients, infectious causes in 7 patients, and tumor in 1 patient. One patient died of end-stage renal disease (ESRD) after refusing dialysis. There were 11 renal events, which were less frequent in patients with lower levels of creatinine (HR = 0.142, 95% CI: 0.035--0.577). A lower level of cystatin C was associated with a reduced incidence of renal events, although without showing differences among tertiles. Entry on dialysis only occurred in two patients during the follow-up. 3.3. Renal Function at Baseline {#sec3.3} ------------------------------- Median eGFR~creat~ at the beginning of the study was 38 (33--49) mL/min/1.73 m^2^. The distribution was 19 patients (11%) in stage 2, 137 patients (75.7%) in stage 3, and 24 (13.2%) in stage 4. The median eGFR~cyst~ at the beginning of the study was 41 (32--52) mL/min/1.73 m^2^; 27 patients (15.5%) were in stage 2, 118 patients (65.2%) in stage 3, and 35 patients (19.3%) in stage 4 according to the NKF KDOQI classification \[[@B8]\]. When comparing the stage of renal failure between eGFR~creat~ and eGFR~cyst~, we found concordance in the estimated stage of kidney failure in 63.88% of patients (*n* = 115). Discordance occurred in 17.7% (*n* = 32 patients) with eGFR~creat~ \< eGFR~cyst~ and 18.3% (*n* = 33 patients) with eGFR~cyst~ \< eGFR~creat~ ([Figure 1](#fig1){ref-type="fig"}). 3.4. 
Multivariate Analysis {#sec3.4} -------------------------- Cystatin C categorized into tertiles and baseline uric acid levels were the only independent predictors of overall mortality, adjusted for age, sex, BMI, tobacco consumption, and a history of a previous cardiovascular event ([Table 3](#tab3){ref-type="table"}, Figures [2](#fig2){ref-type="fig"} and [3](#fig3){ref-type="fig"}). 4. Discussion {#sec4} ============= In our study, death was more frequent than the progression to ESRD. Unlike creatinine, basal serum cystatin C was a predictor of overall mortality and of the development of fatal cardiovascular events. We also found that basal serum uric acid was an independent predictor of overall mortality along with cystatin C. The higher incidence of death versus initiation of dialysis has also been described in at least two other previous studies. In one of them, analyzing the natural history of CKD in a population of 27,998 patients in the USA, with an estimated GFR \< 90 mL/min/1.73 m², Keith et al. \[[@B10]\] found that the incidence of dialysis treatment during a 5-year follow-up was 1.1%, 1.3%, and 19.9% for stages 2, 3, and 4, respectively, of the K/DOQI renal classification. However, mortality was 19.5%, 24.3%, and 45.7%. The authors concluded that death was more frequent than entering dialysis at all stages of kidney failure. In the second observation, Go et al. \[[@B11]\] found an independent association between eGFR~creat~ \< 60 mL/min/1.73 m² and the risk of death or hospitalization for cardiovascular events in a cohort of 1,120,295 adults in a community followed for an average of 2.8 years. The patients in our series were selected for their renal insufficiency, with 88% of them in stage ≥3 as classified by K/DOQI \[[@B12]\]. They also had a higher mean age, and 39% had a diagnosis of type 2 diabetes mellitus. 
As we only considered deaths occurring at our center (either at the emergency room or during hospitalization), it is likely that we underestimated total mortality, since we cannot rule out that some deaths occurred at home or in another hospital. These results suggest that the population finally developing ESRD could be considered as a specific group of patients surviving other causes of death and thus reaching the point of chronic dialysis. The second finding of our study was that cystatin C levels were a strong independent predictor of overall mortality and cardiovascular mortality. Patients with lower levels of cystatin C had a significantly lower incidence of fatal cardiovascular events and overall mortality compared with those in the highest tertile, something that was not observed with creatinine levels. These results are consistent with previous studies. Shlipak et al. \[[@B13]\], using a cohort of 4663 participants in the Cardiovascular Health Study (CHS) \[[@B14], [@B15]\] recruited from four U.S. communities, studied cystatin C as a prognostic biomarker of death, cardiovascular disease, and incident chronic kidney disease in people \> 65 years without previous renal disease. They concluded that serum cystatin C was a better predictor than creatinine for the development of the mentioned events and that cystatin C identifies a state of "preclinical" renal dysfunction which would not be detected by eGFR~creat~ alone. Using data from the MESA and CHS study, Peralta et al. \[[@B16]\] found that eGFR~cyst~ predicts death and cardiovascular complications better than eGFR~creat~ and identifies nondiabetic patients with CKD stage 3 not detected by eGFR~creat~, with an increased risk of complications. In our population, hyperuricemia, along with cystatin C, was also an independent predictor of overall mortality. 
A large number of epidemiological studies have suggested the independent role of hyperuricemia in overall mortality, cardiovascular disease, and kidney disease in the general population \[[@B17]--[@B21]\]. However, in patients with this condition, it is less clear whether uric acid is just a marker that reflects a set of comorbidities and kidney damage or a true causal risk factor. Three previous works \[[@B22]--[@B24]\] studied the association between uric acid and mortality in patients with chronic renal failure. Concerning the follow-up of renal function, we only analyzed the deterioration of renal function, defined as a renal event; the low number of patients who progressed to ESRD and dialysis prevented us from performing a statistical analysis of this variable. The incidence of renal events was only related to higher levels of basal serum creatinine and not to basal cystatin C levels. When comparing renal function estimated by the two markers (creatinine and cystatin C), concordance and discordance in the stage of renal failure were similar to those found by other authors. Krolewski et al. \[[@B25]\], in a study of two cohorts of diabetic patients, concluded that renal function estimated with cystatin C significantly improves the prediction of the risk of progression to ESRD compared to the estimate achieved with creatinine. Our results allow us to venture that, with a greater number of patients and more renal events, our findings would probably be similar to theirs. Being retrospective, our study has several limitations. Only patients with a determination of cystatin C, 36.16% of referred patients, were included. Of these, 84.5% had renal failure stage ≥ 2. We do not know the criteria for this first cystatin C determination in each case, although it is logical to assume that it was performed as a parameter of renal function in addition to serum creatinine. 
The insufficient number of patients with progression to ESRD prevented us from performing a statistical study of one of our goals: to check whether cystatin C was a more sensitive marker than creatinine for predicting the development of ESRD. However, our findings about the predictive value of hyperuricemia and cystatin C for the development of fatal cardiovascular events and global mortality in a referral population at a high risk for progression of renal disease have, in our opinion, clinical relevance. Our study, like those of other authors \[[@B26]\], supports the use of a combination of markers to improve the detection and risk stratification of patients with high cardiovascular risk and chronic renal failure. The follow-up time was relatively short (average 3 years), but the high mean age of our patients could make up for this drawback. Another weakness also stems from the retrospective nature of the study: some of the analyzed events probably occurred at other centers and were not included. Thus, the death rate may be underestimated (only those occurring at our center were considered). Despite this, the number of total events seems to us sufficient to draw conclusions with enough statistical power. In conclusion, our study found that during a follow-up period of 3 years, cystatin C and hyperuricemia were the only independent predictors of total mortality. Unlike creatinine, cystatin C was predictive of fatal cardiovascular events. These findings support the usefulness of including uric acid and cystatin C as markers for the assessment of cardiovascular morbidity and mortality risk in patients with chronic renal failure. The authors thank Dr. Javier Marco for his critical reading and contribution to the writing of this paper. Conflict of Interests ===================== There is no conflict of interests to declare. ![Distribution of patients according to estimated GFR defined by cystatin C and creatinine tertiles. 
CKD: chronic kidney disease; eGFR~creat~ and eGFR~cyst~: estimated glomerular filtration rate according to creatinine and cystatin C; T1: tertile 1; T2: tertile 2; T3: tertile 3.](IJN2014-127943.001){#fig1}

![Survival and tertiles of cystatin C.](IJN2014-127943.002){#fig2}

![Survival and baseline uric acid levels.](IJN2014-127943.003){#fig3}

###### Baseline characteristics of the study sample by tertile category of cystatin C and serum creatinine.

| | Total (*N* = 180) | Cystatin C T1 | Cystatin C T2 | Cystatin C T3 | *P* value | Creatinine T1 | Creatinine T2 | Creatinine T3 | *P* value |
|---|---|---|---|---|---|---|---|---|---|
| Age (years)\* | 75 (69--82) | 70 (65--78) | 74 (70--80) | 81 (74--83) | \<0.001 | 75 (67--80) | 76 (71--82) | 74 (69--82) | 0.362 |
| Gender (female %) | 95 (52) | 36 (61) | 32 (53) | 27 (44) | 0.744 | 32 (54) | 36 (64) | 17 (26) | \<0.001 |
| SBP (mm Hg)\* | 140 (130--150) | 140 (130--150) | 140 (130--150) | 140 (125--150) | 0.006 | 140 (130--150) | 140 (130--150) | 140 (126--160) | 0.770 |
| DBP (mm Hg)\* | 80 (70--80) | 80 (70--80) | 80 (71--89) | 70 (62--80) | 0.004 | 80 (75--80) | 80 (70--80) | 80 (68--80) | 0.150 |
| BMI (kg/m^2^)\* | 29 (26--33) | 28.7 (26.8--32.6) | 31.4 (27.2--33.6) | 27.1 (23.9--32.2) | 0.946 | 29.6 (26.5--32.9) | 30.5 (26.7--33.9) | 27.9 (24.6--32.0) | 0.041 |
| Proteinuria (gr/24 h)\* | 0.14 (0.08--0.27) | 0.14 (0.08--0.27) | 0.13 (0.08--0.32) | 0.15 (0.08--0.29) | 0.092 | 0.13 (0.08--0.22) | 0.13 (0.05--0.28) | 0.15 (0.08--0.32) | 0.318 |
| Glucose (mg/dL)\* | 102 (93--128) | 106 (94--129) | 104 (94--140) | 98 (90--120) | 0.006 | 103 (93--122) | 111 (94--143) | 100 (91--115) | 0.058 |
| Serum uric acid (mg/dL)\* | 6.9 (5.8--8.1) | 6.5 (5.7--7.4) | 6.9 (5.9--7.9) | 7.5 (6.1--9.8) | 0.087 | 6.4 (5.5--7.4) | 7.0 (5.9--8.4) | 7.4 (6.2--9.4) | 0.004 |
| Total cholesterol (mg/dL)\* | 185 (154--212) | 188 (165--219) | 182 (158--215) | 175 (148--201) | \<0.001 | 197 (168--225) | 177 (151--201) | 181 (151--202) | 0.026 |
| Serum creatinine (mg/dL)\* | 1.5 (1.3--1.8) | 1.3 (1.2--1.5) | 1.6 (1.4--1.8) | 1.7 (1.4--2.1) | \<0.001 | 1.2 (1.1--1.3) | 1.5 (1.4--1.6) | 1.8 (1.7--2.0) | \<0.001 |
| eGFR~creat~ (mL/min/1.73 m^2^)\* | 38 (33--49) | 52 (42--66) | 38 (34--44) | 32 (28--36) | \<0.001 | 56 (44--66) | 36 (32--43) | 33 (28--37) | \<0.001 |
| eGFR~cyst~ (mL/min/1.73 m^2^)\* | 41 (32--52) | 59 (52--67) | 41 (39--44) | 28 (24--32) | \<0.001 | 54 (42--66) | 39 (29--48) | 36 (26--42) | \<0.001 |
| Serum cystatin C (mg/dL)\* | 1.6 (1.3--1.9) | 1.2 (1.1--1.3) | 1.6 (1.5--1.7) | 2.1 (1.9--2.5) | \<0.001 | 1.3 (1.1--1.5) | 1.6 (1.4--2.0) | 1.8 (1.6--2.4) | \<0.001 |
| Diabetes mellitus, *n* (%) | 70 (39) | 19 (32) | 25 (42) | 26 (43) | 0.436 | 19 (32) | 28 (50) | 23 (35) | 0.113 |
| Tobacco, *n* (%) | 74 (41) | 30 (51) | 21 (35) | 23 (38) | 0.171 | 21 (36) | 15 (27) | 38 (58) | 0.001 |
| ACEI, *n* (%) | 88 (49) | 30 (51) | 28 (47) | 30 (49) | 0.900 | 27 (46) | 31 (55) | 30 (46) | 0.506 |
| ARB, *n* (%) | 47 (26) | 19 (33) | 15 (25) | 30 (49) | 0.386 | 16 (27) | 17 (30) | 14 (21) | 0.533 |
| Hypolipidemics treatment, *n* (%) | 80 (44) | 19 (33) | 25 (42) | 25 (41) | 0.481 | 26 (44) | 24 (43) | 30 (46) | 0.934 |
| Previous CV event (%) | 68 (38) | 17 (29) | 21 (35) | 30 (49) | 0.070 | 13 (22) | 27 (48) | 28 (44) | 0.008 |

SBP: systolic blood pressure; DBP: diastolic blood pressure; BMI: body mass index; eGFR-EPI: estimated glomerular filtration rate according to creatinine (eGFR~creat~) and according to cystatin C (eGFR~cyst~); *n*: number; CV: cardiovascular; ACEI: angiotensin converting enzyme inhibitor; ARB: angiotensin receptor blocker. \*Data expressed as median and interquartile range.

###### Incidence of cardiovascular events and overall mortality by cystatin C and serum creatinine, categorized by tertiles.

| | Cystatin C T1 | Cystatin C T2 | Cystatin C T3 | Creatinine T1 | Creatinine T2 | Creatinine T3 |
|---|---|---|---|---|---|---|
| Participants number | 59 | 60 | 61 | 59 | 56 | 65 |
| Person-years | 166 | 196 | 163 | 198 | 151 | 177 |
| Total cardiovascular events | | | | | | |
| Participants number | 13 | 17 | 23 | 16 | 15 | 22 |
| Incidence/1000 person-years | 78 | 87 | 141 | 81 | 99 | 124 |
| HR | 0.782 (0.363--1.688) | 0.743 (0.381--1.449) | --- | 0.802 (0.401--1.602) | 0.715 (0.345--1.478) | --- |
| Fatal cardiovascular events | | | | | | |
| Participants number | 0 | 2 | 10 | 1 | 4 | 7 |
| Incidence/1000 person-years | 0 | 10 | 61 | 5 | 26 | 39 |
| HR | 0 | **0.198 (0.040--0.987)** | --- | 0.126 (0.013--1.265) | 0.403 (0.093--1.740) | --- |
| Non-fatal cardiovascular events | | | | | | |
| Participants number | 13 | 15 | 13 | 15 | 11 | 15 |
| Incidence/1000 person-years | 78 | 77 | 80 | 76 | 73 | 85 |
| Total mortality | | | | | | |
| Person-years | 166 | 199 | 176 | 198 | 151 | 192 |
| Participants number | 1 | 2 | 18 | 2 | 5 | 14 |
| Incidence/1000 person-years | 6 | 10 | 102 | 10 | 33 | 73 |
| HR | **0.060 (0.008--0.447)** | **0.094 (0.022--0.406)** | --- | **0.178 (0.039--0.805)** | **0.329 (0.115--0.442)** | --- |
| Renal events | | | | | | |
| Person-years | 166 | 197 | 166 | 198 | 151 | 177 |
| Participants number | 0 | 5 | 6 | 0 | 2 | 9 |
| Incidence/1000 person-years | 0 | 25 | 36 | 0 | 13 | 51 |
| HR | 0 | 0.463 (0.095--2.254) | --- | 0 | **0.142 (0.035--0.577)** | --- |

Event risks were evaluated in a Cox proportional model, adjusted for age, gender, BMI, previous cardiovascular event, and tobacco consumption. Values in bold mean *P* \< 0.05; HR: hazard ratio.

###### Estimation of total mortality and renal event risk.

| Event | Parameter | | O.R. | 95% CI | *P* value |
|---|---|---|---|---|---|
| Total mortality | Uric acid levels | | 1.377 | (1.070--1.773) | 0.013 |
| | Cystatin C | Tertile 1 | 0.062 | (0.008--0.497) | 0.009 |
| | | Tertile 2 | 0.100 | (0.021--0.463) | 0.003 |
| Renal events | Creatinine | Tertile 1 | 0 | --- | --- |
| | | Tertile 2 | 0.156 | (0.043--0.568) | 0.005 |

Logistic regression analysis of total mortality and renal events, adjusted by age, gender, uric acid, and tobacco consumption.

[^1]: Academic Editor: Greg Tesch
{ "pile_set_name": "PubMed Central" }
a downloadable network visualization application representing the connections between Flickr or LastFM users & tags. users can explore tags, as people, represented as bubbles, will be "pulled" towards them. a small demo movie is available after the break.
{ "pile_set_name": "Pile-CC" }
Arthroscopic treatment of triangular fibrocartilage tears. The triangular fibrocartilage complex is an intricate anatomic structure located at the ulnar aspect of the wrist. The triangular fibrocartilage is important to the stability and biomechanical function of the ulnar carpus and distal radioulnar joint. This article reviews the anatomy and biomechanical function of the triangular fibrocartilage. Diagnosis and treatment of traumatic injuries to the triangular fibrocartilage are also discussed.
{ "pile_set_name": "PubMed Abstracts" }
Q: How to convert an ArrayBuffer to a Uint8Array, in Deno?

I'm using the fetch API to download an image in Deno. On the Response object, I'm calling the arrayBuffer() method to get the final data of the response:

    const response = await fetch('https://www.example.com/someimage.jpg')
    const data = await response.arrayBuffer(); // This returns an ArrayBuffer.

Then I want to write this data into a file, just like you would do in Node:

    await Deno.writeFile('./someimage.jpg', data)

The image turns out empty. The docs say Deno.writeFile expects a Uint8Array, but I have no clue how to construct this from the ArrayBuffer that is received from the fetch response. How can this be done?

A: You have to pass the ArrayBuffer to the Uint8Array constructor.

> You cannot directly manipulate the contents of an ArrayBuffer; instead, you create one of the typed array objects or a DataView object which represents the buffer in a specific format, and use that to read and write the contents of the buffer.

    new Uint8Array(arrayBuffer);

    const response = await fetch('https://www.example.com/someimage.jpg')
    const data = await response.arrayBuffer(); // This returns an ArrayBuffer.
    await Deno.writeFile('./someimage.jpg', new Uint8Array(data))
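As a small plain-JavaScript aside (not part of the original answer; runnable in Node or Deno): a Uint8Array built from an ArrayBuffer is a zero-copy view over the same bytes, which is why the constructor call above is all that Deno.writeFile needs.

```javascript
// A Uint8Array constructed from an ArrayBuffer is a view, not a copy:
// both views below read and write the same underlying bytes.
const buf = new ArrayBuffer(4);
const a = new Uint8Array(buf);
a[0] = 255;

const b = new Uint8Array(buf); // a second view over the same buffer
console.log(b[0]); // 255
console.log(a.buffer === buf); // true
```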
{ "pile_set_name": "StackExchange" }
Zawieja Zawieja is a Polish surname. Notable people with the surname include: (born 1940), Polish Olympic sailor Martin Zawieja (born 1963), West German weightlifter Philippe Zawieja (fl. 1991–present), French psychologist Category:Polish-language surnames
{ "pile_set_name": "Wikipedia (en)" }
Q: How to call Perl 6 from Java?

Perl 6 regexes/grammars are much better structured, more powerful and readable than Perl 5 or related Perl-compatible regexes everywhere, including regexes in Java. I am looking for a way to execute Perl 6 code with that regex/grammar code from Java. Here is a common example similar to what I want to do:

    grammar Calculator {
        token TOP { [ <add> | <sub> ] }
        rule add { <num> '+' <num> }
        rule sub { <num> '-' <num> }
        token num { \d+ }
    }

    class Calculations {
        method TOP ($/) { make $<add> ?? $<add>.made !! $<sub>.made; }
        method add ($/) { make [+] $<num>; }
        method sub ($/) { make [-] $<num>; }
    }

    say Calculator.parse('2 + 3', actions => Calculations).made; # OUTPUT: «5␤»

Maybe I have to write a class in Perl 6, compile it to JVM bytecode, and then call that. Is that a solution or not? Or is that not possible? Maybe it is too hard to call Perl 6 from Java.

There is also another direction. In Perl 6 there are lots of Inline modules like Inline::Python, Inline::Perl5 and so on. There is also a way to run Java code in Perl 6. Here is an example I found:

    use java::util::zip::CRC32:from<java>;

    my $crc = CRC32.new();
    for 'Hello, Java'.encode('utf-8') {
        $crc.'method/update/(B)V'($_);
    }
    say $crc.getValue();

Is this a possible way to start with Perl 6 and then bind the mass of Java code to one project? But how to go back from Java to my Perl 6 code? For Perl 5 I can find the module Inline::Java::Callback but not for Perl 6. How should I do this in a professional way?

A: I share the results of my own experimentations and observations, in the hope it will be useful, even if my conclusion is not very positive at the moment. My short answer to the O.P.'s question is: as of May 2019, it is not yet possible.
Now the long answer: the JVM backend support of Perl6 is not yet in a stable, ready-to-use state in the latest releases of Rakudo Star: https://rakudo.org/post/announce-rakudo-star-release-2019-03

Anyway, if you want to try your luck, here is an example derived from rakudo-star/nqp/examples (with a small patch, for the original code from rakudo-star-2019.03 won't compile out of the box). Improvements to the original example also include the documentation and basic control of the command line arguments:

    package examples;

    import org.perl6.nqp.runtime.*;
    import static org.perl6.nqp.runtime.CallSiteDescriptor.*;
    import org.perl6.nqp.sixmodel.*;

    public class CallFromJava {
        private GlobalContext g;
        private ThreadContext t;
        private SixModelObject nqpComp;

        private CallFromJava(String bytecode, String hll) {
            g = new GlobalContext();
            t = g.getCurrentThreadContext();
            Ops.loadbytecode(bytecode, t);
            nqpComp = Ops.getcomp(hll, t);
        }

        private SixModelObject eval(String nqp) {
            Ops.invokeDirect(t, Ops.findmethod(nqpComp, "compile", t),
                new CallSiteDescriptor(new byte[] { ARG_OBJ, ARG_STR }, null),
                new Object[] { nqpComp, nqp });
            Ops.invokeDirect(t, Ops.result_o(t.resultFrame()),
                Ops.emptyCallSite, Ops.emptyArgList);
            return Ops.result_o(t.resultFrame());
        }

        public static void main(String[] args) {
            if (args.length != 3) {
                System.err.printf("usage: java CallFromJava <jarfile> <dialect> <expression>\n");
                System.err.println("<jarfile>: path to nqp.jar or perl6.jar");
                System.err.println("<dialect>: nqp or perl6");
                System.err.println("<expression>: a nqp or perl6 expression");
                System.exit(1);
            }
            String jarFile = args[0];
            String dialect = args[1];
            String expression = args[2];
            CallFromJava nqp = new CallFromJava(jarFile, dialect);
            nqp.eval(expression);
        }
    }

If you take the original code from the Rakudo Star package (version 2019-03 at the time of writing), make sure to apply the following correction (already fixed in the above example):

    < Ops.invokeDirect(t, Ops.findmethod(t, nqpComp, "compile"),
    ---
    > Ops.invokeDirect(t, Ops.findmethod(nqpComp, "compile", t),

To build and test the example, with NQP (Not Quite Perl):

    cd rakudo-star-yyyy-mm/nqp
    javac -cp bin/ examples/CallFromJava.java
    java -cp nqp-runtime.jar:3rdparty/asm/asm-4.1.jar:3rdparty/asm/asm-tree-4.1.jar:. examples.CallFromJava nqp.jar nqp 'say(2+2)'
    4

The problem is that NQP is only a subset of Perl6, not intended for direct use by the Perl6 developer. With complete Perl6, presumably, one would do something like:

    export PERL6_PREFIX=/usr/local/perl6 # or whatever your perl6 installation prefix is
    cd rakudo-star-yyyy-mm/nqp
    javac -cp bin/ examples/CallFromJava.java
    java -cp $PERL6_PREFIX/share/nqp/runtime/asm-4.1.jar:$PERL6_PREFIX/share/nqp/runtime/asm-tree-4.1.jar:$PERL6_PREFIX/share/nqp/runtime/nqp-runtime.jar:$PERL6_PREFIX/share/perl6/runtime/rakudo-runtime.jar:$PERL6_PREFIX/share/perl6/runtime/perl6.jar:. examples.CallFromJava $PERL6_PREFIX/share/perl6/runtime/perl6.jar perl6 'say 2 + 2'

but I didn't manage to make it work so far:

    Unhandled exception: java.nio.file.NoSuchFileException: Perl6/Grammar
       in <anon> (src/vm/jvm/ModuleLoader.nqp:76)
       in load_module (src/vm/jvm/ModuleLoader.nqp:58)
       in <anon> (gen/jvm/main.nqp)

A: Compiling perl6 code to JVM bytecode won't immediately help you, I don't think, but there's an "Eval Server" that the test suite uses so that it doesn't have to start a JVM from scratch for each of the many test files in the spec test suite. You can find the source code to the eval server here, and probably steal a few things from it: https://github.com/perl6/nqp/blob/master/src/vm/jvm/runtime/org/perl6/nqp/tools/EvalServer.java
{ "pile_set_name": "StackExchange" }
Inhalers for delivering medicament to a patient by inhalation are known. Such devices include metered-dose inhalers (of both pressurised and dry-powder types). Metered-dose inhalers typically comprise a medicament-containing vessel and an actuator housing having a medicament delivery outlet in the form of a mouthpiece or nosepiece. The medicament-containing vessel may be a pressurized canister containing a mixture of active medicament and propellant. Such canisters are usually formed from a deep-drawn aluminium cup having a crimped lid which carries a metering valve assembly. The metering valve assembly is provided with a protruding valve stem which, in use, is inserted as a tight push fit into a so-called stem block in the actuator housing. Metered-dose inhalers may either be of the manually operable type or the breath-actuated type. For the manually operable type, the patient self-administers the medicament by manually pressing the closed end of the canister into the actuator housing to cause movement of the canister relative to its valve stem (which is fixed in the stem block of the actuator housing). This movement is sufficient to actuate the metering valve assembly of the canister, resulting in the pressurised contents of a metering chamber being vented through the stem, through the stem block and its exit jet and orifice, and causing the medicament to exit the mouthpiece or nosepiece as an aerosol mist. Simultaneously with this action, the patient inhales through the nosepiece or mouthpiece, entraining the aerosol mist in the inhaled stream of air. The patient then releases the depression force on the canister which, under the action of an internal valve spring, moves upward with respect to the valve stem, returning to its resting position. 
A more recent development is the so-called breath-actuated metered-dose inhaler, which serves to automatically displace the canister relative to its valve stem and release the contents of the canister's metering chamber in response to a patient's inspiration. The general purpose of such inhalers is to alleviate difficulties in coordinating actuation of the metering valve assembly with the patient's inspiration, and to provide for a maximal amount of medication to be drawn into the patient's lungs. A breath-actuated metered-dose inhaler is disclosed in WO 01/93933 A2. The actuator housing is generally regarded as an integral part of the medicament delivery system, since the design of the housing can greatly affect the form of the medicament generated for inhalation by the patient. The actuator housing of a metered-dose inhaler typically includes an air inlet means for producing an air flow through the actuator housing into which the medicament is released. Further, for breath-actuated inhalers, the air flow through the actuator housing typically operates or at least influences in some way the breath-actuated mechanism. Consequently, the actuator housing of such inhalers comprises air inlets designed to allow airflow through the housing. However, such air inlets exhibit the problem that they can be covered or occluded by the patient's hand or finger during use, thereby preventing or influencing the airflow through the actuator housing, with the result that the breath-actuated mechanism may malfunction. This problem is often exacerbated by the fact that the air inlets are provided on the actuator housing at positions which are convenient for handling the inhaler during use by the patient. Thus, there is a need in the art to provide improved airflow configurations for inhalers that are less susceptible to being occluded or blocked by the patient during use, while at the same time allowing for convenient and comfortable operation by the patient.
{ "pile_set_name": "USPTO Backgrounds" }
In his introduction to Superman: Secret Identity, Kurt Busiek writes that in a series, you can wander around, explore side-alleys, look at the same situation from different angles, indulge in slow development . . . until you've exhausted what you can do with [the characters] or the audience has abandoned you. With Frank Ironwine, Warren Ellis has boldly, if unusually, attempted to set down in one issue a whole series of character and situation explorations, to build a version of the archetypal detective by boiling him down to the essentials and shooting him through with eccentricity. His success is evidenced by the sensation one has on a second reading of this comic that surely, surely this isn't the first issue.
{ "pile_set_name": "Pile-CC" }
Lost on the Grand Banks Lost on the Grand Banks (1885) is one of several paintings by the American painter Winslow Homer (1836–1910) on marine subjects. Together with The Herring Net and The Fog Warning, painted the same year, it depicts the hard lives of North Atlantic fishermen in Prouts Neck, Maine. The painting was bought in 1998 by Bill Gates, the then chairman of Microsoft. Gates reportedly paid $30 million for the seascape, at the time a record price for an American painting.
{ "pile_set_name": "Wikipedia (en)" }
Coral Gables, Fla. — Dajuan Coleman, facing Tuesday surgery to repair his left knee, traveled with his Syracuse team to Miami this weekend. The sophomore forward/center said he decided to have surgery after numerous attempts to repair the damage by other means did not work. In a brief conversation in the bowels of the BankUnited Center after Syracuse defeated Miami, Coleman said his physical health going forward outweighed his other options. "I think it's a great decision on my part just to get healthy," Coleman said. "Even though I'm going to miss (the rest of the season), I'm going to be healthy afterward. It's a process." The surgery will be Coleman's second in his two seasons at Syracuse. Doctors repaired the meniscus in his left knee last season, though that procedure did not erase the rest of the year for the Jamesville-DeWitt product. Coleman acknowledged the two-year turn of events was disappointing. "But you can't really think negative about it," he said. "You just gotta keep it positive." Dr. Bradley Raphael, who will perform the surgery, said last week that he and the SU medical staff will have a better understanding of Coleman's prognosis after the procedure is completed.
{ "pile_set_name": "Pile-CC" }
Q: TFS command line to get all mapping information of a specific workspace

I am trying to get all mapping information for a specific workspace. When I try this command, it displays a dialog - which is not what I want.

    tf workspace myworkspace

Is there a command that will get all the working folder information and output it to the console?

A: The following command displays the working folder mappings for the workspace in the current directory:

    C:\projects>tf workfold

If you want to list the working folder mappings for a different workspace, you can specify the /workspace:workspacename parameter.

    C:\>tf workfold /workspace:My_Other_Workspace

You can also manipulate the workspace mappings using this command. The following example maps the folder C:\DifferentWorkfold to the Team Foundation version control server folder $/projects/project_one:

    c:\projects>tf workfold $/projects/project_one C:\DifferentWorkfold

See Tf Command-Line Utility - Workfold Command on MSDN for more information.
{ "pile_set_name": "StackExchange" }
Lively dance tracks from the ever-popular Glencraig Band, led by Nicol McLaren. A fine six-piece band with lots of ‘oomph’ and plenty of lift, featuring two accordions, fiddle, piano, bass and drums. This is the third album in a series of four, focussing this time on Scottish Country Dancing. Each of the different styles of Scottish Dance represented in this series (Ceilidh, Reeling, Scottish Country Dance and Old Time) requires a different nuance from the music to complement the dance. The Glencraig Band are masters of these styles and therefore popular with dance groups. One of the busiest Scottish Dance Bands on the scene, in recent times the Glencraig Band has been to Tbilisi, Munich, Brisbane and Melbourne. At home, one of their biggest gigs has been playing for a ceilidh at the British Grand Prix at Silverstone in 2009. They were winners of The National Association Of Accordion And Fiddle Clubs’ CD Of The Year award in 2007, and were nominated in the Best Scottish Dance Band category (for the second time) at the 2009 Scots Trad Music Awards. Instructions by June Templeman for all of the dances are given in the booklet.
{ "pile_set_name": "Pile-CC" }
[Effects of electrical stimulation of lateral hypothalamic area on gastric ischemia-reperfusion injury in rats]. The effects of electrical and chemical stimulation and electrolytic lesion of the lateral hypothalamic area (LHA) on gastric ischemia-reperfusion injury (GI-RI) were investigated in rats whose celiac arteries were clamped for 30 min and reperfused for 60 min by removal of the clamp. The results are as follows. (1) Electrical stimulation of LHA could aggravate GI-RI in an intensity-dependent manner using 0.2, 0.4 or 0.6 mA current, respectively. Microinjection of L-glutamic acid into LHA resulted in an effect similar to that of electrical stimulation of LHA on GI-RI. After electrolytic lesion of bilateral LHA, the area of gastric mucosal injury induced by gastric ischemia-reperfusion (GI-R) was smaller than that produced by electrical stimulation of LHA plus GI-R. (2) Dorsal vagal complex (DVC) lesion or vagotomy could eliminate the effect of electrical stimulation of LHA on GI-RI. (3) Electrical stimulation of LHA increased the content of malondialdehyde (MDA) but decreased the activity of superoxide dismutase (SOD) in ischemia-reperfusion (I-R) gastric mucosa. (4) Electrical stimulation of LHA plus gastric I-R increased gastric juice volume and total acid output, but there were no significant changes in acidity, pepsin activity or gastric barrier mucus. These results indicate that the LHA is an area in the CNS exerting aggravating effects on GI-RI. The DVC and vagus may be involved in the regulatory effects of LHA on GI-RI. These effects are associated with increases in gastric mucosal MDA content, gastric juice volume, and total acid output, and a decrease in SOD activity. Acidity, pepsin activity and gastric barrier mucus do not seem to play an important role.
{ "pile_set_name": "PubMed Abstracts" }
Court Essay Slow Motion Video Makes People Look Guiltier March 2nd, 2017 Video recording, including footage from hidden cameras, is an admissible tool that helps provide evidence in court. Audio and video materials help the court establish the circumstances under which a crime was committed. Jurors watch video footage of crimes to decide whether a person is guilty or not, and such videos help to analyze and define the events that took place and their seriousness. Video and audio evidence can be produced in many different ways, using household tape recorders, cell phones or surveillance systems. But such evidence will only be accepted by a court on condition that the authenticity of the recording itself has been proven. Along with the advantages of slow-motion video, there are also disadvantages that cause bias in court. The Proceedings of the National Academy of Sciences of the United States of America published research according to which viewers watching a slow-motion video of a crime considered the crime to be thought through and calculated, not impulsively committed. The viewers were asked to watch two versions of a video: a slow-motion version and a regular-speed version. The participants in the experiment who watched the slow-motion version were more likely to believe the crime was premeditated. Other experiments were also carried out to demonstrate the biasing nature of slow-motion video. In one, 489 participants were shown a video of an armed robbery in which a clerk was shot. Those watching the slow-motion video of the event, but not the regular-speed version, believed that the wrongdoer intended to kill the clerk. In another experiment, participants were shown a video of a football tackle involving the forbidden helmet-to-helmet hit. Viewers of the slow-motion video stated that the player who made the helmet-to-helmet hit acted intentionally towards the other player.
The question now arises whether juries should use slow-motion videos during court proceedings or whether, due to their biasing nature, videos of this type must be banned for good. Such bias does serious damage to the accused party and influences the whole court process in general. Premeditated crimes receive more serious sentences and punishments accordingly, and it matters a great deal whether you are charged with reflexive second-degree murder or with first-degree murder. Slow-motion video can give the false impression that a person's actions were planned. This does not necessarily mean that slow-motion videos must not be used or accepted in court, but it does mean that the benefits of this video type may come with serious consequences for the accused party. Slow motion can make boring moments look funny and unseen things visible. It can also make an innocent person look guilty. That may seem a small thing, but when it comes to returning a verdict and punishment, it is more than serious; it is of vital importance how the footage evidence is treated. Of course, this does not refute the fact that a person who committed a crime will have to answer for his actions, but at least the verdict he receives will be fair.
{ "pile_set_name": "Pile-CC" }
Q: Table Cell Height - Is there some minimum size one must adhere to?

The following is a 3 x 3 table:

    <html>
    <body style="overflow:hidden;">
    <style>
    div.table_div {overflow:scroll;width:378px;height:117px;position:relative;}
    table.TheTable {width:361px;height:100px;table-layout:fixed;border-collapse:collapse;border:1px solid #000000;}
    td.TheTableColumnCell {text-overflow:ellipsis;overflow:hidden;white-space:nowrap;border:1px solid #000000;}
    </style>
    <div class="table_div" id="table_div">
    <table class="TheTable">
    <tr id='firstTr'>
    <td class="TheTableColumnCell">&nbsp;</td>
    <td class="TheTableColumnCell">&nbsp;</td>
    <td class="TheTableColumnCell">&nbsp;</td>
    </tr>
    <tr>
    <td class="TheTableColumnCell">&nbsp;</td>
    <td class="TheTableColumnCell">&nbsp;</td>
    <td class="TheTableColumnCell">&nbsp;</td>
    </tr>
    <tr>
    <td class="TheTableColumnCell">&nbsp;</td>
    <td class="TheTableColumnCell">&nbsp;</td>
    <td class="TheTableColumnCell">&nbsp;</td>
    </tr>
    </table>
    </div>
    </body>
    </html>

Of note:

a) The table width is 361px because: 119px for each cell (119 x 3 = 357). Then add 4px for the four borders.

b) The table height is 100px because: 32px for each cell (32 x 3 = 96). Then add 4px for the four borders.

c) The div tag. For my windows settings, the scrollbar width/height is 17px. This is why you see the div width and height 378px (361 + 17) and 117px (100 + 17). The purpose for the div is in case I actually have to scroll (in my 3x3 example, I don't really have to).

In summary: This is a 3x3 fixed-width/height table. If you cut-and-paste that code into an html file and open it in your browser, you'll see the scrollbars, but neither will actually allow you to scroll (because I have the widths/heights set to show everything without scrolling).

The problem: Do the following: Instead of making the height of each cell 32px, make them 16px. So, the height for table.TheTable should be changed to 52px (instead of 100px).
The height for div.table_div should be changed to 69px (instead of 117px) - again, my scrollbar height is 17px. The vertical scrollbar, while always visible, now scrolls. I can't fit everything now? But I thought my calculations were correct? The magic number seems to be 22px. Less than 22px (height) I get a scrollbar that scrolls. Greater than 22px, I get a scrollbar that doesn't scroll. If the div tag is correctly calculated, the scrollbars shouldn't scroll. How do I avoid that for heights less than 22px?

EDIT: The overflow:scroll is necessary. I pulled this HTML out of a larger set of code. In the real code the table is much larger. The div tag allows me to show a certain number of cells, and I can scroll to see the rest. It's when I started playing with heights less than 22px that I discovered the issue, as it messes with some other things I'm trying to do.

A: Your problem has now become the font size/line height: http://jsfiddle.net/EZ6q2/1/ You will need to reduce the font size: http://jsfiddle.net/EZ6q2/2/ or the line height: http://jsfiddle.net/EZ6q2/3/

In Response To Your Comment: White space is still content and will count as text on a line. Completely emptying the cell also fixes the problem in Firefox. Be wary of this as some browsers may completely collapse empty cells (I've experienced this in some older browsers, IE 6 perhaps). http://jsfiddle.net/EZ6q2/5/

A Little More Follow Up: The "padding" you refer to is the line height. From what I understand from the specs, this is determined by metrics contained within the font itself and by default is a proportion of the font size, which is why changing the font size works. Changing the line height gives you an absolute result. See this fiddle
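To illustrate the answer, here is a minimal sketch of the kind of rule that keeps a 16px cell from overflowing. The 12px/14px values are assumptions chosen to fit under the 16px cell height, not taken from the linked fiddles:

```css
/* The line box inside the cell is line-height tall (by default a
   proportion of font-size). Capping it below the 16px cell height keeps
   the row from growing, so the outer div no longer needs to scroll. */
td.TheTableColumnCell {
    font-size: 12px;    /* assumed value: small enough to fit a 14px line box */
    line-height: 14px;  /* assumed value: absolute height, below the 16px cell */
}
```

Setting line-height directly gives an absolute limit regardless of the font's internal metrics, which is why it is the more predictable of the two knobs here.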
{ "pile_set_name": "StackExchange" }
Tucker (Ga.) linebacker James Vaughters, a Stanford commitment, had an opportunity to watch Monday's Orange Bowl, in which the Cardinal destroyed Virginia Tech, 40-12. He's also heard plenty about his future coach at Stanford bolting town. Vaughters weighs in on the Stanford situation.
{ "pile_set_name": "Pile-CC" }
Protein kinases (PKs) regulate diverse biological processes including cell growth, survival, differentiation, organ formation, morphogenesis, neovascularization, tissue repair, and regeneration, among others. Protein kinases also play specialized roles in a host of human diseases including cancer. Cytokines, low-molecular weight polypeptides or glycoproteins, regulate many pathways involved in the host inflammatory response to sepsis. Cytokines influence cell differentiation, proliferation and activation, and can modulate both pro-inflammatory and anti-inflammatory responses to allow the host to react appropriately to pathogens. Signaling of a wide range of cytokines involves the Janus kinase family (JAKs) of protein tyrosine kinases and Signal Transducers and Activators of Transcription (STATS). There are four known mammalian JAKs: JAK1 (Janus kinase-1), JAK2, JAK3 (also known as Janus kinase, leukocyte; JAKL; and L-JAK), and TYK2 (protein-tyrosine kinase 2). Cytokine-stimulated immune and inflammatory responses contribute to pathogenesis of diseases: pathologies such as severe combined immunodeficiency (SCID) arise from suppression of the immune system, while a hyperactive or inappropriate immune/inflammatory response contributes to the pathology of autoimmune diseases (e.g., asthma, systemic lupus erythematosus, thyroiditis, myocarditis), and illnesses such as scleroderma and osteoarthritis (Ortmann, R. A., T. Cheng, et al. (2000) Arthritis Res 2(1): 16-32). Deficiencies in expression of JAKs are associated with many disease states. For example, JAK1−/− mice are runted at birth, fail to nurse, and die perinatally (Rodig, S. J., M. A. Meraz, et al. (1998) Cell 93(3): 373-83). JAK2−/− mouse embryos are anemic and die around day 12.5 postcoitum due to the absence of definitive erythropoiesis. 
The JAK/STAT pathway, and in particular all four JAKs, are believed to play a role in the pathogenesis of asthmatic response, chronic obstructive pulmonary disease, bronchitis, and other related inflammatory diseases of the lower respiratory tract. Multiple cytokines that signal through JAKs have been linked to inflammatory diseases/conditions of the upper respiratory tract, such as those affecting the nose and sinuses (e.g., rhinitis and sinusitis), whether classically allergic reactions or not. The JAK/STAT pathway has also been implicated in inflammatory diseases/conditions of the eye and chronic allergic responses. Activation of JAK/STAT in cancers may occur by cytokine stimulation (e.g. IL-6 or GM-CSF) or by a reduction in the endogenous suppressors of JAK signaling such as SOCS (suppressor of cytokine signaling) or PIAS (protein inhibitor of activated STAT) (Boudny, V., and Kovarik, J., Neoplasm. 49:349-355, 2002). Activation of STAT signaling, as well as other pathways downstream of JAKs (e.g., Akt), has been correlated with poor prognosis in many cancer types (Bowman, T., et al. Oncogene 19:2474-2488, 2000). Elevated levels of circulating cytokines that signal through JAK/STAT play a causal role in cachexia and/or chronic fatigue. As such, JAK inhibition may be beneficial to cancer patients for reasons that extend beyond potential anti-tumor activity. Inhibition of JAK2 tyrosine kinase can be beneficial for patients with myeloproliferative disorders, e.g., polycythemia vera (PV), essential thrombocythemia (ET), myeloid metaplasia with myelofibrosis (MMM) (Levin, et al., Cancer Cell, vol. 7, 2005: 387-397). Inhibition of the JAK2V617F kinase decreases proliferation of hematopoietic cells, suggesting JAK2 as a potential target for pharmacologic inhibition in patients with PV, ET, and MMM. Inhibition of the JAKs may benefit patients suffering from skin immune disorders such as psoriasis, and skin sensitization.
The maintenance of psoriasis is believed to depend on a number of inflammatory cytokines in addition to various chemokines and growth factors (JCI, 113:1664-1675), many of which signal through JAKs (Adv Pharmacol. 2000; 47:113-74). Thus, new or improved agents which inhibit kinases such as JAKs are continually needed for developing new and more effective pharmaceuticals that are aimed at augmentation or suppression of the immune and inflammatory pathways (such as immunosuppressive agents for organ transplants), as well as agents for the prevention and treatment of autoimmune diseases, diseases involving a hyperactive inflammatory response (e.g., eczema), allergies, cancer (e.g., prostate, leukemia, multiple myeloma), and some immune reactions (e.g., skin rash or contact dermatitis or diarrhea) caused by other therapeutics. The compounds of the invention, as well as the compositions and methods described herein, are directed toward these needs and other ends.
{ "pile_set_name": "USPTO Backgrounds" }
Pediatrician, Delhi Personal Statement Hello and thank you for visiting my Lybrate profile! I want to let you know that here at my office my staff and I will do our best to make you comfortable. I strongly believe in ethics; as a health provider, being ethical is not just a remembered value, but a strongly observed one. More about Dr. Surinder Jeet Arora Dr. Surinder Jeet Arora is a popular Pediatrician in Saket, Delhi. You can visit him at Saket City Hospital in Saket, Delhi. Book an appointment online with Dr. Surinder Jeet Arora and consult privately on Lybrate.com. Lybrate.com has an excellent community of Pediatricians in India. You will find Pediatricians with more than 27 years of experience on Lybrate.com. You can find Pediatricians online in Delhi and from across India. View the profile of medical specialists and their reviews from other patients to make an informed decision. How many times she passes stools and what type of stools they are are the questions to ask. Still, let me tell you that 10 to 15 times is also sometimes considered normal. The most important thing is the baby's general condition and mood. Are there any other symptoms, e.g., a rash around the anus or continuous crying, etc.?
{ "pile_set_name": "Pile-CC" }
Press Room Two members of the U.S. Senate Intelligence Committee have asked President Barack Obama to declassify the full version of the committee’s nearly 7,000-page torture report. Physicians for Human Rights (PHR) today joins that call and says that full disclosure of the CIA torture program’s details are necessary to ensure that such illegal and harmful practices are never employed again. Physicians for Human Rights (PHR) today condemned the Egyptian government’s decision to freeze the bank account of the El-Nadeem Center for Rehabilitation of Victims of Violence, a prominent group that treats and provides assistance to victims of torture. Physicians for Human Rights (PHR) today confirmed airstrikes against four separate hospitals in Syria on Sunday and Monday of this week. All within 20 miles of one another, the four facilities sustained significant damage after coming under air fire by either Syrian government forces or their Russian allies. PHR continues to condemn the ongoing assault on Syria’s medical personnel and facilities and calls for renewed vigilance on the part of the entire international community to end such crimes. In the latest story regarding the United States’ torture program, The New York Times today revealed new details about the inadequate mental health care provided to Guantánamo detainees tortured by the CIA and Defense Department. Press Release A Turkish court today postponed criminal proceedings against Dr. Şebnem Korur Fincancı, president of the Human Rights Foundation of Turkey and a longtime partner and colleague of Physicians for Human Rights (PHR). Earlier this year, Dr. Fincancı and her co-defendants, Erol Önderoğlu and Ahmet Nesin, were accused of disseminating “terrorist propaganda” for taking part in a solidarity campaign with a newspaper critical of Turkey’s government. Her case will resume January 11, 2017.
{ "pile_set_name": "Pile-CC" }
Adeno-associated virus (AAV) is a replication-deficient parvovirus, the genome of which is about 4.6 kb in length, including 145 bp inverted terminal repeats (ITRs). Two open reading frames encode a series of rep and cap polypeptides. Rep polypeptides (rep78, rep68, rep52 and rep40) are involved in replication, rescue and integration of the AAV genome. The cap proteins (VP1, VP2 and VP3) form the virion capsid. Flanking the rep and cap open reading frames at the 5′ and 3′ ends are the 145 bp ITRs, the first 125 bp of which are capable of forming Y- or T-shaped duplex structures. Of importance for the development of AAV vectors, the entire rep and cap domains can be excised and replaced with a therapeutic or reporter transgene [B. J. Carter, in "Handbook of Parvoviruses", ed., P. Tijsser, CRC Press, pp. 155-168 (1990)]. It has been shown that the ITRs represent the minimal sequence required for replication, rescue, packaging, and integration of the AAV genome. When this nonpathogenic human virus infects a human cell, the viral genome integrates into chromosome 19, resulting in latent infection of the cell. Production of infectious virus and replication of the virus does not occur unless the cell is coinfected with a lytic helper virus, such as adenovirus or herpesvirus. Upon infection with a helper virus, the AAV provirus is rescued and amplified, and both AAV and helper virus are produced. The infecting parental ssDNA is expanded to duplex replicating form (RF) DNAs in a rep-dependent manner. The rescued AAV genomes are packaged into preformed protein capsids (icosahedral symmetry, approximately 20 nm in diameter) and released as infectious virions that have packaged either + or − ssDNA genomes following cell lysis. AAV possesses unique features that make it attractive as a vector for delivering foreign DNA to cells.
Various groups have studied the potential use of AAV in the treatment of disease states; however, progress towards establishing AAV as a transducing vector for gene therapy has been slow for a variety of reasons. One obstacle to the use of AAV for delivery of DNA is the lack of highly efficient schemes for encapsidation of recombinant genomes and production of infectious virions [See, R. Kotin, Hum. Gene Ther., 5:793-801 (1994)]. One proposed solution involves transfecting the recombinant adeno-associated virus (rAAV) containing the transgene into host cells followed by co-infection with wild-type AAV and adenovirus. However, this method leads to unacceptably high levels of wild-type AAV. Incubation of cells with rAAV in the absence of contaminating wild-type AAV or helper adenovirus is associated with little recombinant gene expression. In the absence of rep, integration is inefficient and not directed to chromosome 19. A widely recognized means for manufacturing transducing AAV virions entails co-transfection with two different, yet complementing plasmids. One of these contains the therapeutic or reporter transgene sandwiched between the two cis-acting AAV ITRs. The AAV components that are needed for rescue and subsequent packaging of progeny recombinant genomes are provided in trans by a second plasmid encoding the viral open reading frames for rep and cap proteins. However, both rep and cap are toxic to the host cells. This toxicity has been the major source of difficulty in providing these genes in trans for the construction of a useful rAAV gene therapy vector. Other methods have been proposed to enable high titer production of rAAV. For example, U.S. Pat. No. 5,658,776 refers to packaging systems and processes for packaging AAV vectors that replace the AAV P5 promoter with a heterologous promoter. Alternatively, U.S. Pat. No. 5,622,856 refers to constructs and methods for AAV vector production, which provide constructs formed by moving the homologous P5 promoter to a position 3′ to the rep genes, and optionally flanking the rep-cap and repositioned P5 promoter with FRT sequences. There remains a need in the art for additional methods permitting the efficient production of AAV and recombinant AAV viruses for use in research and therapy. The present invention provides novel methods, host cells, and vector constructs which permit efficient production of rAAV by decreasing the expression of the rep78/rep68 gene products, while leaving the expression of rep52, rep40 and AAV structural proteins at a normal level. In one aspect, the invention provides a host cell containing (a) a first nucleic acid molecule comprising, from 5′ to 3′, a parvovirus P5 promoter, a spacer, an AAV rep sequence and an AAV cap sequence, wherein the spacer is of sufficient size to reduce expression of the rep78 and rep68 gene products; (b) a second nucleic acid molecule comprising a minigene comprising a transgene flanked by AAV inverted terminal repeats (ITRs) and under the control of regulatory sequences directing expression thereof in a host cell; and (c) helper functions essential to the replication and packaging of rAAV. In another aspect, the invention provides a nucleic acid molecule useful in the production of recombinant AAV comprising, from 5′ to 3′, a homologous P5 promoter, a spacer, an AAV rep sequence and an AAV cap sequence, wherein the spacer is of sufficient size to reduce, but not eliminate, expression of the rep78 and rep68 gene products.
In yet a further aspect, the invention provides a method for increasing the production of recombinant adeno-associated virus (rAAV) by culturing a host cell as described above, by which the rep78/rep68 gene products are reduced in expression, and isolating from the cell lysate or cell culture, high levels of recombinant AAV capable of expressing said transgene. Other aspects and advantages of the present invention are described further in the following detailed description of the preferred embodiments thereof.
{ "pile_set_name": "USPTO Backgrounds" }
Information, discussion & community! Monday Night Forum!! Occupy Forum is an opportunity for open and respectful dialogue on all sides of these critically important issues! OccupyForum presents... Brutal and Unequal: Disruption, Precarity and the New Tech Boom with Darwin Bond-Graham and Ryan Smith "The tech sector," said sociologist and writer Darwin Bond-Graham, "has obtained a strategic power over the rest of the economy ... Flows of income and distributions of wealth have been equally transformed by the rise of the tech-centric economy, as by the rise of finance." The ideology of the new tech boom is disruption, "a code word for forms of sabotage that benefit a few monopolizing corporations," Bond-Graham said. A key to fighting back against disruption is to understand what it is and how it functions: Bond-Graham will use the ridesharing phenomenon as a focal point for a discussion of how the industry uses disruption to " ... extract wealth from billions of workers and consumers across the planet." What happens when the whiz kids of high tech concentrate on inventing ever more airtight forms of exploitation? With all of the brainpower they have to throw at the engineering puzzles of exploiting others, how can ordinary workers hope to resist? Ryan Smith, longtime Occupy activist, former tech employee and a member of the Industrial Workers of the World (IWW) will discuss his experiences in workers' struggles and in the tech industry. The Wobblies are famous for "unionizing hundreds of thousands of workers previously regarded as 'unorganizable.'" Can precarious, isolated workers benefit from IWW techniques in the coming century? Time will be allotted for Q&A, discussion and announcements. Donations to Occupy Forum to cover costs are encouraged; no one turned away!
{ "pile_set_name": "Pile-CC" }
Flawed Algorithms Are Grading Millions of Students’ Essays - elorant https://www.vice.com/en_us/article/pa7dj9/flawed-algorithms-are-grading-millions-of-students-essays ====== dahart > Utah has been using AI as the primary scorer on its standardized tests for > several years. “It was a major cost to our state to hand score, in addition > to very time consuming,” said Cydnee Carter, the state’s assessment > development coordinator. The automated process also allowed the state to > give immediate feedback to students and teachers, she said. Yes, education takes time and costs money. Yes, not educating is both cheaper and faster. Note how the rationalizing ignores the needs of the students and the quality of the education. I live in Utah and my children have been subjected to this automated essay scoring here. One night I came home from work and my son and wife were both in tears, frustrated with each other and frustrated with the essay scoring which refused to give a high enough score to meet what the teacher said was required, no matter how good the essay was. My wife wrote versions herself from scratch and couldn’t get the required score. When I got involved, I did the same with the same results. Turns out the instructions said the essay would be scored on verbal efficiency; getting the point across clearly with the fewest words. I started playing around and realized that the more words I added, the higher the score, whether they were relevant or grammatical or not. Random unrelated sentences pasted in the middle would increase the score. We found a letter of petition online for banning automated scoring for the purposes of grades or student evaluation of any kind. It was very long, so it got a perfect score. I encouraged my son to submit it, and he did. Later I visited his teacher to explain and to urge her to not use automated scoring. She listened and then told me about how much time it saves and how fast students get feedback. 
:/ ~~~ piokoch Frankly, I can't believe what I am reading. The idea that some "AI" grades essays automatically is idiotic and has nothing to do with education. Where is the place for discussion? Where is the place for the confrontation of ideas? Where is the place for developing a writing style? How is this AI supposed to grade things like repetition (which can be either a good rhetorical tool or a mistake, depending on context), etc.? Who the hell came up with such an idea? I would even hesitate to use "AI" for automatic spell checking, as it is sufficient to give some character an unusual name and it will be marked as an error. My guess is that sooner or later people will learn how to game that AI. I wouldn't be surprised if there were some software that will generate essays that Utah's "AI" likes. ~~~ dagw _My guess is that sooner or later people will learn how to game that AI._ Already been done. [http://lesperelman.com/writing-assessment-robo-grading/babel...](http://lesperelman.com/writing-assessment-robo-grading/babel-generator/) Here's a sample essay that is complete nonsense and got a perfect score on the GRE. [http://lesperelman.com/wp-content/uploads/2015/12/6-6_ScoreI...](http://lesperelman.com/wp-content/uploads/2015/12/6-6_ScoreItNow_2015_Feb20.pdf) ~~~ thombat The final paragraph from that example is steaming gibberish that nobody could mistake for English: "Calling has not, and undoubtedly never will be aggravating in the way we encounter mortification but delineate the reprimand that should be inclination. Nonetheless, armed with the knowledge that the analysis augurs stealth with propagandists, almost all of the utterances on my authorization journey. Since sanctions are performed at knowledge, a quantity of vocation can be more gaudily inspected. Knowledge will always be a part of society. Vocation is the most presumptuously perilous assassination of mankind."
Yet the robo-scoring acclaims it as:

* articulates a clear and insightful position on the issue in accordance with the assigned task

* develops the position fully with compelling reasons and/or persuasive examples

* sustains a well-focused, well-organized analysis, connecting ideas logically

* conveys ideas fluently and precisely, using effective vocabulary and sentence variety

* demonstrates superior facility with the conventions of standard written English (i.e., grammar, usage, and mechanics) but may have minor errors

Any teacher faced with the requirement to use such tools would be better placed instructing their class on civil disobedience. ~~~ crankylinuxuser Then let me posit another idea... There are two ways of finding out these artifacts of AI essay grading: pure luck, and being able to afford extensive test-prep (rich). The luck one can't be accounted for. So I'm led to believe that the purpose of these essays and their AI grading is to find and escalate rich people. ~~~ JustSomeNobody > So I'm led to believe that the purpose of these essays and their AI grading is to find and escalate rich people. Well, of course. How many poor people are allowed to decide what is good for children's education? ~~~ crankylinuxuser The standard US response is: "There's a reason why they're poor. Better pull themselves up by the bootstraps." Meanwhile, poverty-stricken neighborhoods are the primary source of their own school funding, resulting in poor school systems. And those students obviously won't have the money or the access to get the test-prep needed to "succeed". It's all too laid out to be accidental. ------ VikingCoder My mother worked grading standardized tests. It was a hellish job for many reasons (limited breaks, etc.) One question she had to grade was essentially, "What's something you want your teacher to know about you?" It was an essay answer, and she was supposed to grade it on grammar, etc. Just the mechanical aspects of writing.
(The real question explained the details more, but that was the core of the question.) She saw answers that would make you weep. "My daddy touches me." "I haven't eaten today. I don't know when I'm going to eat again." Stuff like that. And my mother was going to be the only human who ever saw their responses. Their teacher had no chance to see their responses, just my mom. So she goes to her supervisor and asks, "What can we do to help these kids?" The supervisor said there was nothing you can do. Just grade the answers. ~~~ harry8 Some of these will be 100% true as well. But don't make the mistake that there are no kids who go for shock value or are wantonly manipulative when they know it can't come back to them. So how many are true and how many false? I have no clue. Literally none. And no, it doesn't make me feel any better about the screams of existential agony even if that were a low percentage. Could be high too. ~~~ dmoy For the not eating, it's pretty easy to get data. It's like 1 in 5 children live in food-insecure households in the US and maybe 1 in 20 of those very insecure, so not eating before school-provided lunch is common enough that if you're grading tons of papers you'll run into kids like that. ~~~ stochastic_monk It could also be a student suffering from anorexia nervosa, which the confessional aspects of the essay would fit well with. ~~~ JustSomeNobody I'm confident that your example would account for a smaller percentage than those mentioned in dmoy's comment.
Like this kind of thing should be _cool_, not insane. I mean, wasn't it cool in your AI class when you learned that DFS could play Mario if you structured the search space right? ~~~ cmroanirgo I came first in English for my school, many moons ago. Leading up to the finals, I regularly finished ahead of the hard-core English essay people, generally to my amusement. My exam essay responses were generally half the length (sometimes even shorter) of the prodigious writers'. Although I've an OK vocabulary, I always made sure I made the right choice of word to hit a specific meaning, rather than choosing words with a high syllable count. I'd find it highly interesting to see what kind of result I'd get using an automated system. Why? Because I once asked a teacher (also an examiner) why I got good grades above the others, and the answer surprised me: my answers were generally unique /refreshingly different, to the point/ not too long and easy to read. I suspect with this new system, I'd be an average student. It'd also be interesting to find out, several years down the road, if the automated system could be gamed at all -- I suspect it could, and teachers would help students 'maximise' their scores as a result of that. ~~~ rocqua It seems plausible that, under this system, you would eventually have learned to write longer essays. To my mind, that would be a school teaching you to be worse. In fact, throughout the article I kept being surprised by the idea that long is good. When writing, I tend to prefer being brief. ------ RcouF1uZ4gsC Unlike a multiple choice test, where the primary audience is automated graders, the primary audience for an essay is other humans. If even Google and Facebook, with their billions of dollars and billions of posts worth of data, still cannot always understand the intent and purpose of written content, what hope do these algorithms have?
If it is cost-prohibitive for every essay to be graded by humans, then they should be dropped from the tests. Otherwise, we are missing the whole point of essays, which is to communicate effectively with another human, not just match certain text patterns. ~~~ anigbrowl If it is cost-prohibitive, then maybe we should adjust the economic model, not abandon the measurement. ~~~ rocqua Sure, have fewer essay test questions, and start grading them for content, not form. If you want to grade on form to test the ability to write correct rather than coherent sentences, make those separate questions, and mark them so. ------ jakear “In most machine scoring states, any of the randomly selected essays with wide discrepancies between human and machine scores are referred to another human for review”. And “between 5 to 20 percent” of essays are randomly selected for human review. So the takeaway is that if you’re one of the 80-95% of (typically black or female) people whom the machine scored dramatically lower, but who are not selected for human review, your educational future is systematically fucked and you have no knowledge of why or how to change it. Absolutely reprehensible. Anyone involved in the creation or adoption of these systems should be ashamed. ~~~ kazinator The thing is, you could be similarly screwed by a biased human whose grading is not checked by a less biased human. At least the machines offer the following hope: even if unbiased humans are rare among paper-grading teachers, those humans can be used to train the machines, so that bias-free or lower-bias grading becomes more ubiquitous. Basically, the system has the potential for systematically identifying and reducing systematic bias. A computer program can be retrained much more readily than a nation-wide army of humans. Humans can be given a lecture on bias, and then they will just return to their ways. ~~~ gibolt AI has a lot more potential for bias than humans.
It depends on the input data, which is likely heavily biased, based on other data set results like face detection. It will only amplify any small bias present in the data. ~~~ Spivak It's amazing to see how the general opinion of CS people has _completely shifted_ in the last few years from "algorithmic scoring is important in removing the bias from human graders" to the exact opposite. ~~~ kazinator If we can quantify the bias in the machine, that gives us an opportunity to close the feedback loop and control the bias. The bias comes from the human-generated training data in the first place; the machine isn't introducing its own. For instance, the machine has no inherent concept of disparaging someone's language because it's from an identifiable inner-city dialect. If it picks up that bias, at least it will apply it consistently. When we investigate the machine, the machine will not know that it's being investigated and will not try to conceal its bias from us. On the other hand, eliminating bias from humans basically means this: producing a new litter of small humans and teaching them better than their predecessors. ~~~ gibolt If... ------ rynomad Personal anecdote: I remember taking a standardized test; can't remember if it was the SAT or CSAT (Colorado pre-SAT test). This was at a time when I'm confident that humans were the graders. I started with an intro that would be appropriate for a standard 5-paragraph essay, i.e. the thing you write when you don't know what you're talking about and you're just following a format. In the third paragraph I took a leaf from Family Guy and just interjected "WAFFLES, NICE CRISPY WAFFLES, WITH LOTS OF SYRUP." For the next page and a half, I berated the very foundation of the essay prompt, insulting it the way only an angst-ridden early teen can. ... I got a 98% on the essay. Fast forward several years. I write an essay for an introductory college course final.
My paper is returned to me with a coffee stain and a "94% - good work!" note scribbled on the top. That note was scribbled by a TA that would turn out to be my girlfriend for 2 years. One night in bed, she tilts her laptop to me, showing an article that I used as the central theme to the above essay; "can you believe this?" "Are you joking? Of course I can believe this, it was the subject of the essay you gave me an A on 2 years ago" She admits she didn't read past the first paragraph of anything she grades, and just bases grades on intuition based on how articulate the essays are at the outset. ... The point I'm making: Does AI suck at judging the amount of informative content in a student essay? YES Do humans suck at judging the amount of informative content in a student essay? ALSO YES ------ dlkf This is a great example of why it's grossly irresponsible for members of the ML community to talk about how AGI is just around the corner. In addition to the fact that we have no idea whether this is true, it primes a naive public for believing that technologies like this are worth the tradeoff. "People worry that computers will get too smart and take over the world, but the real problem is that they're too stupid and they've already taken over the world." ------ empath75 I imagine that any student that experimented with the form of the essay or wrote an exceptionally well argued piece in simple language would not have their test graded appropriately either. Any essay writing test which could be adequately graded by a machine is not testing anything of value. Edit: I’ll further add that as soon as people’s careers depend on a metric, the metric becomes useless as a metric, because it will be gamed and manipulated by everyone involved. Almost nobody involved is incentivized to accurately measure student’s writing ability. 
~~~ HarryHirsch _Almost nobody involved is incentivized to accurately measure student’s writing ability_ It's the same reason you see keyword posters in math education. "Together" means "plus", that kind of thing. It's completely worthless, except for one-step problems, and even then it doesn't always work. What is happening is collusion between teachers and testmakers. You can't teach understanding, but you can teach test-passing techniques because the way the test is set permits this. You see the same thing here: in English, you can get away with not teaching quality writing if you teach techniques to score well. ~~~ Spivak I feel like the mistake is assuming that essay writing is about the content. It's just a thing to give the student something barely non-trivial to write about. When your essays are graded they're marked down for mechanical and wording problems. There's really no point in trying to grade 'good ideas' on a subject piece you had maybe 10 minutes to skim. ~~~ HarryHirsch _a subject piece you had maybe 10 minutes to skim_ That's a travesty, and you know it, because when the kids are in college and they have as much time as they like to write their assignments, they all use the wrong words and then misapply them. ------ anm89 To me this brings up the absurdity of having essays on standardized tests. What about an essay is standardized? It's a totally nonsensical premise. This always gets made into some kind of techluminati conspiracy for the machines to ingrain structural racism, whereas it's pretty clear the algorithms simply fail to improve an already bad situation stemming from a flawed premise. ~~~ nitwit005 A number of states found out their schools were graduating students who genuinely could not read or write effectively. If you want to quantify that, you're forced to test it somehow. How would you test writing ability without asking them to write something? ~~~ anm89 Reading comprehension with simple factual questions.
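The length-gaming dahart describes upthread, and the surface metrics the article attributes to these engines (sentence length, vocabulary, spelling), can be illustrated with a minimal sketch. The feature set and weights below are purely hypothetical, not any vendor's actual model:

```python
import re

# Illustrative surface features of the kind these engines are said to use.
def features(essay):
    words = re.findall(r"[a-zA-Z']+", essay.lower())
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    return {
        "word_count": len(words),
        "unique_words": len(set(words)),
        "avg_word_len": sum(map(len, words)) / max(len(words), 1),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
    }

# Hypothetical weights standing in for a regression fit to human grades.
# Every weight is positive, so "more" always scores higher -- which is
# exactly why padding an essay with irrelevant verbiage raises the score.
WEIGHTS = {"word_count": 0.02, "unique_words": 0.03,
           "avg_word_len": 0.5, "avg_sentence_len": 0.1}

def score(essay):
    f = features(essay)
    return sum(WEIGHTS[k] * f[k] for k in WEIGHTS)

short = "Brevity is the soul of wit."
padding = (" Furthermore, notwithstanding ancillary considerations, the "
           "aforementioned proposition remains salient.") * 3
padded = short + padding

assert score(padded) > score(short)  # irrelevant padding wins
```

Nothing in the feature vector looks at meaning, so random but polysyllabic padding dominates a concise answer, matching the behavior reported in the Utah anecdote.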
------ jedberg Any state that relies on the AI as the primary grader does not understand the current state of AI. It would make sense to use the AI as a first pass, and then not _randomly_ grade the essays with a human, but specifically choose all the essays that are on the cusp of the pass fail line. Then use all those human generated scores to update the model, especially if someone moves from pass to fail or fail to pass. Then maybe throw in a few of the really high and really low outliers to make sure _those_ are right, and throw away your entire model if the human scores are drastically different (and obviously don't tell the humans what the computer score was so they have no idea if they're reading a "cusp" essay or an outlier essay). But putting the educational fate (and therefore future earnings) in the hands of an AI is unconscionable. ~~~ C1sc0cat But I bet the company took the decision makers to a really nice restaurant _nudge_ _nudge_ ------ bendbro I think machine learned grading of papers is insane, but at the same time I don't think we should be training or encouraging students to speak in AAVE (as the article suggests). I think the right approach for machine learned systems is to automatically "whitelist" essays rather than "blacklisting" them. Students in the middle of the distribution of essays aren't really interesting, so whitelist them, give them a pass. Those at the extremes can be either exceptional or terrible, but usually terrible. The judgement of those at the extremes should be decided by a human, not a machine. You wouldn't want to blacklist the Einstein of essays because he did something genius that is indistinguishable from insanity. However, I think there are some essays that can automatically be blacklisted. For example, those with: 1\. Plagiarism (perhaps human moderated) 2\. Extremely low word count 3\. 
Extremely high count of fake words And at the end of the day, these essay assignments aren't there to judge whether a student is the next writing sensation; they are given to judge whether the student can write legible sentences and words, to ensure they are prepared for the future. So perhaps it is at least possible to automatically blacklist on sentence structure and spelling (you should just lose points for invalid structure or invalid words, you shouldn't gain points for big words or complicated sentences). To make this fair, the student should be informed of this requirement. If they are informed and still fail, then they need to be remediated. If we discover that a disproportionate number of minorities are getting blacklisted, then we should investigate why the school is failing to teach them proper sentence structure and spelling, not pretend we can change the world to make AAVE an acceptable dialect of english in the workplace. ------ ironSkillet The underlying problem is that reading essays with a careful critical eye is _not_ scalable. But another issue this highlights is the complete misalignment of incentives of the people who greenlit the adoption of this technology. Because educational outcomes are much harder to evaluate over the course of a bureaucrat's tenure than budget sizes (longer time horizon and many exogenous variables), there is a natural inclination to make decisions that reduce costs as long as they don't have any _obvious_ (to them or their superiors) adverse outcome for students. This is a pretty low bar, especially so given that most bureaucrats do not have the background necessary to evaluate technical solutions. ------ userbinator I've heard stories from others in the industry of companies using tools like this on their _human-facing_ documentation and requiring a certain score from them. 
Imagine using Microsoft Word's spelling and grammar checker, not being able to add or override its decisions (without following an extremely lengthy and bureaucratic process), and being required to have fewer than X "defects" per 100 words. Naturally, this results in documentation that is perfectly grammatical and free of spelling errors, but verbose, full of unusual phrasing, and next to useless for its actual purpose of informing a human. Grading students' code using a machine is not such a bad idea in contrast, because in that case there are [1] no exceptions possible in a programming language, [2] the machine (compiler) has to understand it anyway, and [3] it does save time verifying correctness. But communication in a human language really needs to be assessed by humans. Anyone who thinks "AI" can accurately assess human language is either severely delusional, or trying to make $$$ from it. ------ robinwassen I am working on reducing the time teachers spend on exams and assessments. I have access to a cleaned and manually scored dataset of 550k essays that is growing exponentially. I looked at creating a model based on this dataset to automatically score essays with NLP parameters such as grammar, structure, spelling, word complexity, sentiment, relative text length, etc. The problem that I encountered was actually how to apply it in a useful way, since the problems mentioned in the article are quite obvious when you design the model. Options that I saw: 1\. Use it as autonomous grading with optional review by the teacher; see the linked article for the problems with this. 2\. Use it as a sanity check on the teacher's manual scoring, but it would not reduce the workload and would probably just undermine the teacher. Do you have any suggestions for how such a model could be applied in a practical and ethical way?
Had some thoughts on how to measure actual knowledge about a subject, but that would require a massive knowledge graph, which would introduce a huge amount of complexity just to see if it would be a feasible approach. ~~~ na_ka_na Here are some thoughts: 1\. Instead of grading, maybe you can use it for training, tutoring. If a student is learning to write essays, I'm assuming it's hard for them to get any feedback. 2\. But then there's probably not enough money to be earned there. One trick might be to write an independent AI to summarize the essay back and see how closely it matches the essay title. This might weed out gibberish essays with sound English sentences. ------ choeger Such a stupid application of technology. It looks as if learning is completely out of fashion nowadays. First of all, complaining about minorities getting lower grades because their English is not as sophisticated as that of others is the inversion of the idea of teaching. That feedback is actually great. We have machines that can give that feedback (e.g., Grammarly)? Then use it to make everyone's writing better. Grades are just a measure of the success of learning, after all. I never got why one would not allow a student to repeat a particular test as often as they like, tbh. Second, grading essays this way is a clear violation of the idea of teaching. _What_ do you want the students to learn? Structure? Knowledge transfer? Grammar? Writing an essay is such a complex task that it is really too broad a goal. And then naturally grading becomes quite difficult. ------ amirmasoudabdol While this is already terrible, I'm aware of a few projects that are trying to do the same with scientific literature. Basically they are trying to train models for scoring literature based on its quality, novelty, and whatnot. At the current rate and state of AI, I cannot ever imagine this is going to work. It was a few weeks ago that someone shared "The Dark Age of AI" on HN [1].
I think we are promising far beyond what Drew McDermott warned we should not promise. This is to the extent that we are applying AI to assessing Art, Creativity, and even the quality and novelty of Science, something that in a way we don’t even understand (or even try to understand) ourselves at the time that we are publishing it. [1]: [https://news.ycombinator.com/item?id=20546503](https://news.ycombinator.com/item?id=20546503) ------ amatecha Grading... algorithms... for essays? How/why is that even a thing? That's absolutely insane. You can't grade someone's writing skills using algorithms. That is totally counter to providing a proper education. My mind is officially boggled. ------ colechristensen Quality of education is proportional to quality of evaluation. Evaluation of how well someone follows arbitrary language conventions is worse than useless. I only got to university English 101 outside of some technical writing in the engineering department, but I have to say none of my education in writing was worth anything past elementary school. It is perhaps one of the most difficult things to teach and evaluate, to be fair, but I feel like I am missing a huge chunk of my education and general ability because of it. I can't write or form an argument particularly well; rambling on HN and the like is the closest thing to education I have had. Prescriptive language rules are not entirely useless. That is the best you can say about them. ------ waynecochran I would like to see how it scored on essays by great writers. “Sorry Mr Tolkien, I’m afraid you have to go to community college first.” ~~~ analog31 In my state, it's going to be "Sorry Mr Tolkien, but we eliminated all of the departments that are not STEM enough." ------ rkagerer I'm normally pretty open-minded, but this is just stupid. AI is nowhere near literate enough for this task. What kind of world is it when humans create merely for the consumption of machines.
The product of our creativity deserves better. I would support any student who refuses to consent to their work being used in this fashion. ------ lopmotr I wish this machine bias wasn't always presented in such divisive terms as race and "disadvantaged groups". It can affect anybody. If you happened to develop a writing style that looks like typical bad essay writers' style, then you could be hurt by bias in the grading. ~~~ fzeroracer If an image processing algorithm fails to recognize black people or worse, profiles them, how else should this be described but in terms of race? If you don't talk about the actual problem, how can you possibly expect to solve it? ~~~ lopmotr There are many classes of people who have problems of discrimination. Short, ugly, ginger, etc. The intersections of all those classes are so numerous that everybody will have some disadvantage. But it won't be apparent unless you define their class and measure it. ~~~ crooked-v That's just substituting in smaller or harder-to-define minority groups, though. ------ Meekro From the article: " _All essays scored by E-rater are also graded by a human_ and discrepancies are sent to a second human for a final grade. Because of that system, ETS does not believe any students have been adversely affected by the bias detected in E-rater." ~~~ shkkmo Also from the article: > Of those 21 states, three said every essay is also graded by a human. But in > the remaining 18 states, only a small percentage of students’ essays—it > varies between 5 to 20 percent—will be randomly selected for a human grader > to double check the machine’s work. So that applies only in a minority of cases. ~~~ Meekro Oops, my mistake! That's worse than I thought! ------ inlined > the engines also focus heavily on metrics like sentence length, vocabulary, > spelling, and subject-verb agreement... The systems are also unable to judge > more nuanced aspects of writing, like creativity. 
This reminds me of a wonderful essay/speech by Stephen Fry on the harm done by pedantry. I also feel that schools focus so much on a single structure of essay writing and similarly take the joy out of language. [https://youtu.be/J7E-aoXLZGY](https://youtu.be/J7E-aoXLZGY) ------ danharaj This is a natural development of industrialized education. Treating children as individual thinkers would require far more resources and manpower than our system would like to provide. ------ rdtwo Bringing SEO mentality to standardized testing; what could go wrong? ------ readme Absolute garbage. Kids would be better educated by reading and posting on HN than they would by attending English classes in one of the states that uses these tools. ------ MisterBastahrd Here's a thought: if classwork and homework is getting so overwhelming that teachers can't possibly grade all of it, then it's overwhelming for the STUDENTS too, and they shouldn't freaking be assigning so much busywork. You don't need a 5-page essay to determine whether a kid has read a book. You can figure that out really quickly in a classroom discussion without anyone having to lift a pencil. ~~~ dragonwriter > Here's a thought: if classwork and homework is getting so overwhelming that teachers can't possibly grade all of it, then it's overwhelming for the STUDENTS too There's no necessary connection there, especially if one of the reasons that teachers are being overwhelmed is that the teacher/student ratio is increasing. > You don't need a 5-page essay to determine whether a kid has read a book. No, you need it to determine whether a student has (1) read and understood a book well enough to apply structured thought to the contents and (2) developed the writing skills to write a 5-page essay. Determining whether a student read a book is rarely, on its own, of significant interest in school.
------ mrarjen Reminds me of the plagiarism checker they had at my partner's university: they would check identical words on specific subjects... meaning every word in any order, so naturally there is a high % of overlap, not only with quotes but also with the words used regarding the subject. The teacher would take this literally as "you did not write this yourself" if 10% of the words were similar. Don't think anyone passed that class. ------ auggierose I can't believe that anyone would try to automatically grade essays. This is either deeply cynical or astonishingly dumb. ------ nyxtom Good lord, what a terrible design. Rather than determine if the writer has a coherent understanding of a complex prompt, the system grades based on writing patterns. This is actually my biggest fear of AI: deploying wide-scale systems like this that have very clear flaws. ------ lmilcin I live in Poland and this is the first time I have heard about it. I am absolutely appalled. Not even at the idea of grading by algorithm, but by the fact that many, many people had to cooperate to make this happen. ------ wedn3sday You say "flawed algorithm," I say "easily exploitable by intelligent students." ------ Smithalicious I don't even think _I_ would be qualified to grade essays, let alone an algorithm! ------ pauljurczak Teachers talk back and may even unionize! Crappy AI is cheap and can't unionize. ------ nostrademons It seems like these accumulated errors in the educational system and the filters needed to get through it would create a market inefficiency that could be exploited by a firm willing to ignore degrees, grades, and test scores and judge for themselves whether a candidate can do the job they're being hired for. ------ gerbilly Why are we even bothering to discuss this on this site? Wouldn't it be better and less biased if we each wrote our own AI systems and had them discuss with each other instead?
(And we should publish our training data as well, of course) ------ kwhitefoot Why are algorithms grading essays in the first place? ------ 40acres The sooner we get it out of our heads that this education system of ours is a meritocracy, the closer we’ll get to actually creating a quality universal system. ------ bayesian_horse What are teachers but flawed algorithms? ------ nyxtom It is becoming increasingly evident that the hubris of implementing AI is what is going to ruin everything. ------ crispcarb Because a (likely unsophisticated) algorithm is grading the essays, there's probably a deterministic method to score well. This seems like a terrible idea. It's not a stretch to imagine the opportunity for nefarious behavior this allows - think of the recent college admission scandals, and how happy they'd be to have a guise of algorithmic indifference. If used long-term, it could offer a big advantage to the wealthy in other avenues. Another hypothetical, probably not far from reality: the algorithm becomes solved (almost or completely) by some premier 'tutoring' company. Said company can charge a pretty penny given its stellar track record, offering yet another hidden advantage to the wealthy/elite. ~~~ aidenn0 Surely there's a deterministic method to score well on the math questions? ~~~ crooked-v An essay is to a math problem as a proof is to a grammar problem. ~~~ aidenn0 There's definitely a deterministic way to score well on HS-level proofs. Also, I think you are overestimating the requirements for an essay on a standardized test.
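jedberg's triage idea upthread — machine as first pass, humans re-scoring anything on the pass/fail cusp plus the outliers — reduces to a simple routing rule. A minimal sketch; the pass mark and thresholds below are hypothetical, not taken from any actual testing program:

```python
# Route essays near the pass/fail line, or far from the population mean,
# to a human grader; let the machine handle only the unambiguous middle.
PASS_MARK = 4.0   # machine score needed to pass (hypothetical)
CUSP = 0.5        # band around the pass mark always sent to humans
OUTLIER = 1.5     # distance from the mean treated as an outlier

def needs_human_review(machine_score, mean_score):
    on_cusp = abs(machine_score - PASS_MARK) <= CUSP
    is_outlier = abs(machine_score - mean_score) >= OUTLIER
    return on_cusp or is_outlier

assert not needs_human_review(5.2, 4.5)  # comfortably passing, typical
assert needs_human_review(3.8, 4.5)      # on the cusp
assert needs_human_review(0.5, 4.5)      # extreme outlier
```

Compared with the random 5-20% sampling the article describes, this targets review effort at exactly the essays where a machine error changes the outcome.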
{ "pile_set_name": "HackerNews" }
We improve banking. From the heart of Europe.

Who we are.
BSC has provided solutions to financial institutions since its launch in 1990. We are among the leaders in the development and delivery of software solutions for financial institutions. In the area of FrontOffice systems we have celebrated success in Europe and Asia with the multi-channel bank solution my|GEMINI; in the area of regulatory reporting we have built a dominant position on the domestic market with the solutions of the my|BI product line; and in the area of BackOffice systems we have been a long-term and reliable partner of banks with the my|BOS product line. Mainly, these are products and interfaces for the MIDAS banking system, connections to payment systems and systems for the management of payment cards.

How do we work.

Inspiration
First of all, we are happy to listen to you and discuss all your needs, plans and goals. We will inspire you with examples of various ways of implementation. We will show you our products running live, and options for implementation at reference customers. We will present interesting possibilities and trends!

Proposal
Based on your needs and goals, and drawing on our long-term experience and a team of professionals, we propose a solution built on the outstanding products of BSC. The solution is unique, taking into full account the specific customer requirements and conditions.

Deliverable
Implementation of the deliverable is an iterative process based on an agile approach. We take small steps, so-called sprints.
Visible outputs are at your disposal at regular intervals. You will be able to participate in the definition, regular testing and optimization of outputs. In combination with the product approach, we try to minimise the risks connected with the quality, delivery dates and final price of the deliverable.

Results.
Thanks to the shared inspiration at the start of the collaboration, a unique solution is designed, built on the foundations of our products with an agile approach to the deliverable, which combines these two important benefits. The product approach ensures that you quickly receive what has already been created. The inspiration combined with an agile approach gives you and your clients added value, providing exactly what you need, with no compromise.

The results speak for themselves. Check out the selected reference stories.
{ "pile_set_name": "Pile-CC" }
Transistors, such as metal oxide semiconductor field-effect transistors (MOSFETs), are the core building block of the vast majority of semiconductor devices. Some semiconductor integrated circuits, such as high performance processors or processing units, can include billions of transistors. For such devices, decreasing transistor size, and thus increasing transistor density, has traditionally been a high priority in the semiconductor manufacturing industry.

A FinFET is a type of transistor that can be fabricated using very small scale processes. FIG. 1 is a simplified perspective view of a FinFET 10, which is formed on a semiconductor wafer substrate 12. A FinFET is named for its use of one or more fins 14, which are formed from the semiconductor material of the substrate 12. As shown in FIG. 1, each fin 14 extends between a source region 16 and a drain region 18 of the FinFET 10. The FinFET 10 also includes a gate structure 20 that is formed over and across the fins 14. The surface area of the fins 14 in contact with the gate structure 20 determines the effective channel of the FinFET 10. Semiconductor materials suitable for creating FinFETs include, but are not limited to, silicon, germanium, silicon-germanium alloys, and III-V materials such as GaAs, InGaAs, and InP.
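The point that the fin surface area under the gate sets the effective channel is often captured with a first-order width formula. The sketch below uses the common approximation that the gate wraps the top and both sidewalls of each fin; the dimensions are made up for illustration and nothing here is taken from the patent itself:

```python
def effective_channel_width_nm(fin_height_nm: float, fin_width_nm: float,
                               num_fins: int) -> float:
    """First-order estimate of a FinFET's effective channel width when
    the gate covers the top and both sidewalls of each fin:
    W_eff = n * (2*H_fin + W_fin).
    Ignores corner effects and other second-order contributions."""
    return num_fins * (2 * fin_height_nm + fin_width_nm)

# Hypothetical fin geometry, illustration only: 40 nm tall, 8 nm wide,
# three fins under one gate structure.
print(effective_channel_width_nm(40, 8, 3))  # 264 (nm)
```

The formula makes the design trade-off visible: taller fins raise effective width (and drive current) without consuming more planar area.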
{ "pile_set_name": "USPTO Backgrounds" }
In 1978 the membership of the Newcomers Club decided to create an award to honor an individual who had contributed service “above and beyond the call of duty”. The first recipient of this award, and the inspiration for it, was Carol Fields. Thus, the award became known as “The Carol Fields Award”. This honor is not intended to be given every year, but rather when an individual is deemed to have earned it through several years of extraordinary service to the club.

The Carol Fields Award

Past Recipients of The Carol Fields Award
2005 Jeanne Engle
2006 Cynthia Bottomly
2008 Barbara Rappaport
2009 Brenda Bedi
2011 Sharon Kornberg
2012 Kathy Christy
2013 Nancy Grimm
2014 Jan Webster
2015 Barbara Meighan

This year the committee voted to award one of our members who has been very active in the club for the last several years. She attends Socializers, Bridge, and almost all General Meetings. Her smiling face and friendliness to others are apparent whenever you see her. She has served on the board for 5 years. She has always been willing to host meetings or activities at her home, has offered to carpool, or to pick members up at their homes. On the board, she has held various positions including Website Administrator and Directory Chair, Treasurer and Publisher. As treasurer she spent countless hours ensuring that the monthly reports were accurate, was always prepared to answer any questions the board or membership had, and provided important information by tracking numbers from previous years. She also shared any of her concerns about the budget when necessary. But maybe her biggest accomplishment and contribution to Newcomers was the initiative and skill with which she first created the website for our club. The website has been a wonderful tool for the current members and has also brought many new members into the club. This was an important milestone in our club's history. I'm sure that you all know who I am speaking about.
I would like to present Barbara Meighan with the Carol Fields Award this year. Congratulations, Barbara!
{ "pile_set_name": "Pile-CC" }
Q: What happened to the close votes on this question?

Recommendations for Photo Editing/Organization Software?

This question had 4 close votes (as noted by me in the comments) on 03/07/12. Now it has none. Did a moderator close/unclose it to clear the votes? Or how else can they be removed? Why was this done? I don't see any note from a moderator or others if this was an action taken by someone. Maybe after a certain time they get cancelled if the required 5 are not reached?

A: Close votes age away and expire as part of a regular process. A close vote will expire after four days. However, if the question has fewer than 100 views, the close votes will not age until it crosses that 100-view threshold. For more details, see this.

You can only cast a single close vote and a single reopen vote per question.
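The aging rule in the answer can be modelled in a few lines. This is a deliberate simplification of the real Stack Exchange logic (which tracks when the view threshold was crossed, not just whether it was), with the four-day and 100-view numbers taken from the answer above:

```python
def close_vote_active(vote_age_days: float, question_views: int,
                      expiry_days: int = 4, view_threshold: int = 100) -> bool:
    """Simplified model: a close vote expires after `expiry_days`,
    but the aging clock only runs once the question has crossed
    `view_threshold` views."""
    if question_views < view_threshold:
        return True   # aging has not started yet, so the vote persists
    return vote_age_days < expiry_days

print(close_vote_active(10, 50))   # True: low-traffic question, vote persists
print(close_vote_active(10, 500))  # False: threshold crossed, vote expired
```

This explains the observed behaviour: four votes can sit on a quiet question indefinitely, then vanish once the question picks up views and the clock runs out.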
{ "pile_set_name": "StackExchange" }
Q: invalid command name ""

I have trouble with getting this script to accept e.g. https://youtu.be/HPP0yB-_blA; https://www.youtube.com/watch?v=HPP0yB-_blA works, though. The first example just leads to invalid command name "".

# URL title parse script for Eggdrop.
#
# Based on https://github.com/teeli/urltitle by teel.
#
# Version log:
# 0.11  Minor site specific tweaks.
# 0.1   First version.
#
# Usage:
# .chanset #channelname +urltitle   ;# Enable script.

namespace eval urltitle {
  # Configuration variables.
  set delay 1       ;# Minimum number of seconds to wait between uses.
  set length 5      ;# Minimum character length of URL to trigger usage.
  set timeout 5000  ;# Geturl timeout in milliseconds (1/1000ths of a second).

  # Internal variables.
  set ignoredSites {apina.biz}  ;# Sites to ignore when parsing URLs.
  set last 1                    ;# Stores time of last usage.
  set scriptVersion 0.11        ;# Script version number.

  # Binds/Hooks.
  bind pubm - "*://*" urltitle::handler
  setudef flag urltitle  ;# Channel flag to enable script.

  # Required packages.
  package require http
  package require tdom
  package require tls

  proc socket {args} {
    set opts [lrange $args 0 end-2]
    set host [lindex $args end-1]
    set port [lindex $args end]
    ::tls::socket -autoservername true {*}$opts $host $port
  }

  proc handler {nick host user chan text} {
    set time [clock seconds]
    variable delay
    variable ignoredSites
    variable last
    variable length
    if {[channel get $chan urltitle] && ($time - $delay) > $last} {
      foreach word [split $text] {
        if {[string length $word] >= $length && [regexp {^(f|ht)tp(s|)://} $word] && \
            ![regexp {://([^/:]*:([^/]*@|\d+(/|$))|.*/\.)} $word]} {
          foreach site $ignoredSites {
            if {![string match *$site* $word]} {
              set last $time
              # Enable HTTPS support.
              ::http::register https 443 [list urltitle::socket]
              set title [urltitle::parse $word]
              # Disable HTTPS support.
              ::http::unregister https
              # Sends text to the server, like 'putserv', but it uses a different
              # queue intended for sending messages to channels or people.
              puthelp "PRIVMSG $chan :$title"
              break
            }
          }
        }
      }
    }
    return 1
  }

  proc parse {url} {
    set title ""
    variable timeout
    if {[info exists url] && [string length $url]} {
      if {[catch {set http [::http::geturl $url -timeout $timeout]} results]} {
        putlog "Connection to $url failed"
      } else {
        if {[::http::status $http] == "ok"} {
          set data [::http::data $http]
          if {[catch {set doc [dom parse -html -simple $data]} results]} {
            # Remove HTML comments.
            regsub -all {<!--.*?-->} $data {} data
            # Remove everything except <head></head> content.
            regexp -nocase {<head>.*?</head>} $data match
            #regsub -nocase {.*?<head>} $data {} data
            #regsub -nocase {</head>.*?} $data {} data
            regexp -nocase {<title>(.*?)</title>} $data match title
            #set title [regsub -all -nocase {\s+} $title " "]
            set title [string trim $title]
          } else {
            set root [$doc documentElement]
            set title [string trim [[$root selectNodes {//head/title[1]/text()}] data]]
            $doc delete
          }
        } else {
          putlog "Connection to $url failed"
        }
        http::cleanup $http
      }
    }
    return $title
  }

  putlog "URL title parser v$scriptVersion"
}

Does anyone know why this happens? I think the problem is set title [urltitle::parse $word] but I can't get it right.

A: The problem is formally in code you've not shown, urltitle::parse, since your patterns correctly match both URLs. A good way to determine if that is actually true is to just try running little bits of code in an interactive shell.

I'm guessing that the actual problem is that the youtu.be URL generates an HTTP redirect to the other URL (or one very much like it); Tcl's http library doesn't process redirects for you — it'd be a higher-level layer on top (and if this is the source to the urltitle code then I can see that it isn't doing it) — and the result causes something to choke in a nasty way.

If you're just wanting to support these youtu.be urls, you can do the rewrite yourself with regsub immediately before passing the URL into urltitle::parse:

...
regsub {^https?://youtu\.be/([^?/]*)$} $word {https://www.youtube.com/watch?v=\1} word
set title [urltitle::parse $word]
...

That regsub is carefully guarded so it won't transform anything it shouldn't, but this approach isn't scalable: you can't introduce your own rewrite rule for every website out there! Instead, the code needs to handle the various redirects correctly for you. That's an actual bug in the urltitle code.
{ "pile_set_name": "StackExchange" }
Value Filldown RouterID (\d+(\.\d+){3})
Value Filldown LocalAS (\d+)
Value RemoteAS (\d+)
Value Required RemoteIP (\d+(\.\d+){3})
Value Uptime (\S+)
Value Received_V4 (\d+)
Value Received_V6 ()
Value Status (\D.*)

Start
  ^BGP router identifier ${RouterID}, local AS number ${LocalAS}
  ^${RemoteIP}\s+${RemoteAS}(\s+\S+){5}\s+${Uptime}\s+${Received_V4} -> Next.Record
  ^${RemoteIP}\s+${RemoteAS}(\s+\S+){5}\s+${Uptime}\s+${Status} -> Next.Record

EOF
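For readers without the textfsm library handy, the template's Filldown and Start-rule semantics can be imitated with plain regular expressions. The sample router output below is invented for illustration, and this sketch only covers the first (numeric prefix-count) neighbor rule:

```python
import re

# Hypothetical 'show ip bgp summary'-style output, invented for this demo.
sample = """BGP router identifier 10.0.0.1, local AS number 65001
10.0.0.2    65002  100  120  0  0  0  1d02h  42
"""

header_re = re.compile(
    r"^BGP router identifier (?P<RouterID>\d+(?:\.\d+){3}), "
    r"local AS number (?P<LocalAS>\d+)")
# Mirrors the template's first Start rule: RemoteIP, RemoteAS, five
# skipped columns, Uptime, then a numeric prefixes-received count.
neighbor_re = re.compile(
    r"^(?P<RemoteIP>\d+(?:\.\d+){3})\s+(?P<RemoteAS>\d+)(?:\s+\S+){5}"
    r"\s+(?P<Uptime>\S+)\s+(?P<Received_V4>\d+)$")

records = []
router_id = local_as = None
for line in sample.splitlines():
    if m := header_re.match(line):
        router_id, local_as = m["RouterID"], m["LocalAS"]
    elif m := neighbor_re.match(line):
        # Filldown semantics: every record repeats the header values.
        records.append((router_id, local_as, m["RemoteAS"],
                        m["RemoteIP"], m["Uptime"], m["Received_V4"]))

print(records)
```

The tuple per matched line corresponds to one `Record` action in the template, with the `Filldown` values (RouterID, LocalAS) repeated on every row.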
{ "pile_set_name": "Github" }
Network Oncology (NO)--a clinical cancer register for health services research and the evaluation of integrative therapeutic interventions in anthroposophic medicine. Concepts of integrative oncology (IO), as have been offered by anthroposophic medicine (AM) for decades, are gaining increasing interest and acceptance. Central aspects are multimodal therapeutic interventions, health-related quality of life, and patients' preferences, as well as the therapeutic relationship and clinical outcome. Despite its broad application, IO lacks evaluation in clinical practice, and complementary therapies are not monitored by any cancer registries. To close this gap we established 'Network Oncology' (NO), a conjoint registry of German outpatient AM practitioners and AM hospitals. In this paper we present the project and a first data overview and compare it to epidemiological registers and current literature. NO has collected 10,405 cancer patients' records in 6 years. Compared to epidemiological registers our data show minor differences in disease entity distribution, age, and gender. There is an overproportional number of young breast cancer patients in NO institutions, indicating a demand for integrative therapies in this group. There is no difference between the UICC (Union for International Cancer Control) stages at first diagnosis and at admission to a NO facility. According to our data, conventional therapies were less frequently administered after admission to a NO facility. Nevertheless, one third of the patients received their first conventional therapy in a NO facility. 80% of the patients received mistletoe preparations and 63% had nonpharmacotherapeutic, complementary interventions. Integrative oncological approaches attract a great number of patients visiting AM institutions. The NO provides an infrastructure to evaluate integrative interventions in AM, allows comparison to other clinical registers, and thus can contribute to health service research in this field.
{ "pile_set_name": "PubMed Abstracts" }
This invention relates to a ventilation hood with a safety system for use with a cooktop. More particularly, the invention relates to a ventilation hood with a safety system designed to substantially reduce the possibility of a fire occurring in the ventilation hood and the ductwork thereof, as well as to reduce humidity resulting from steam generated by operation of the cooktop. The invention further relates to a combination of a ventilation hood and cooktop system, as well as a method of operation of a ventilation hood and a cooktop.

A number of ventilation hood control units are known for reducing the spread of smoke resulting from cooking operations on cooktops, as well as for removing humidity caused by steam resulting from cooking on the cooktop. One known system provides a control or regulating device for a stove which activates, deactivates, controls and regulates the heat energy of the cooking zones of the stove in dependence upon the resulting cooking steam. The control device and corresponding sensor of such a system are installed in the ventilation hood associated with the stove. Such a system primarily uses the level of steam detected to control operation of the cooking zones, not the ventilation fan. The makers of the system list as one of its advantages the achievement of substantial energy savings.

Another prior art system proposes a smart circuit device for a smoke exhauster for cooking. The circuit device includes a sensing circuit for sensing temperature and smoke. The motor of the exhauster fan is controlled to operate at a rotation speed conducive to reducing noise and saving energy. The fan speed is varied in response to the quantity of smoke and is controlled by a fuzzy logic controller.
Yet another system, for a commercial or institutional kitchen, provides that the volume rate of a cooking exhaust may be increased to improve the general comfort, health and safety conditions in the kitchen and the rest of the facility. More particularly, such a system senses a parameter in the ambient air environment, such as temperature and/or gas level. Depending on the activity of the cooking units, the air control system causes the exhaust system to increase to a higher volume rate to exhaust more air from the ambient air environment, thereby reducing the temperature in the facility to improve comfort and reduce the load on a heating, ventilation and air conditioning (HVAC) system.

While all of these systems provide advantages in reducing ambient smoke and/or steam for the purpose of providing a comfortable environment for persons using a cooktop, these conventional systems still fall short of providing an optimized arrangement designed to minimize fires occurring in ventilation hoods and cooktops. More particularly, the use of cooktops in an incorrect manner, contrary to a manufacturer's instructions, can cause a fire. Many current gas cooktops have burners which can operate at energy levels of greater than 15,000 BTUs. Such cooktops include four to six burners, and the simultaneous operation of multiple ones of these burners for a long period of time can overheat ventilation elements and exhaust ducts. The overheating of ventilation elements and exhaust ducts is of particular concern in circumstances in which the ventilation hoods and ducts have accumulated oils and fat in the duct tubes thereof, as such oils and fats are entrained with gases and/or vapors being drawn through the ventilation hood duct during cooking operations.
If the heat conditions above the cooktop exceed certain parameters, such as may occur, for example, as a result of a flame or through use of many of the high-BTU burners at one time, a substantial portion of the heat generated may be drawn into the duct system and cause a fire as a result of, among other reasons, the ignition of the oils or fat accumulated in the duct tubes.

In accordance with the invention, there is provided a ventilation hood with a safety system, a combination of a ventilation hood with a cooktop, and a method of controlling operation of a ventilation hood and cooktop, which avoids the problems of the previously discussed conventional systems and which substantially reduces or eliminates the danger of fire occurring in the ductwork of the ventilation hood as a result of operation of the cooktop.
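The kind of sensing-and-response behaviour these systems describe ultimately reduces to threshold rules. The following sketch is purely illustrative; the temperature thresholds and the actions are invented for this example and are not claimed by the patent:

```python
def safety_action(duct_temp_c: float, warn_temp_c: float = 90.0,
                  shutoff_temp_c: float = 120.0) -> str:
    """Illustrative threshold logic for a hood safety controller.
    All numbers are hypothetical; a real system would be tuned to the
    ignition characteristics of accumulated grease in the ductwork."""
    if duct_temp_c >= shutoff_temp_c:
        return "cut burner power"   # prevent ignition of duct residue
    if duct_temp_c >= warn_temp_c:
        return "fan to maximum"     # draw heat out before it builds up
    return "normal operation"

print(safety_action(60))    # normal operation
print(safety_action(100))   # fan to maximum
print(safety_action(130))   # cut burner power
```

The ordering of the checks matters: the most severe condition is tested first, so overlapping thresholds always resolve to the safer action.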
{ "pile_set_name": "USPTO Backgrounds" }
Stephen backs local business

In a letter to local businesses, Stephen Morgan, Labour's candidate for Charles Dickens, values the contribution of local businesses to the city and calls for their support in improving life chances for residents across Portsmouth.

In the letter, being delivered to local shop owners and business leaders in the ward this week, Stephen says:

“As a local person – and with experience of working alongside businesses to encourage economic wellbeing – I have always wanted to get involved in politics to make Portsmouth even better. This year I'm delighted to get that chance. I don't believe the Tory council nor the current Lib Dem ward councillor are doing enough to help our local area. They've remained too quiet on how they could help the local economy or our shopping areas. As a local business you play a huge part in the success and vibrancy of communities across our great city: providing local services to our neighbours; encouraging local growth; and adding to community employment. Communities across Charles Dickens ward face huge challenges. I want to have a constructive and regular dialogue with local businesses so we can work together to make life better for residents. It's time we had strong representation in the ward and a councillor who will put local residents and businesses first.”

For more information on Stephen's campaign visit: www.stephenjmorgan.org
{ "pile_set_name": "Pile-CC" }
Q: CSS Transition on newly created pseudo element

I want to create a fade effect on a pseudo element but am having difficulty, as I cannot use JavaScript on this element. I have an example of something similar to what I am trying to do here, but I cannot get the element to fade in, as transitions do not seem to work when the element is created. https://codepen.io/anon/pen/wrxPXJ

.hoverhere.dim::before {
  content: '\A';
  position: absolute;
  width: 100%;
  height: 100%;
  top: 0;
  left: 0;
  background: rgba(0,0,0,0.8);
  opacity: 1;
  transition: opacity 1s;
}

I am adding a class to a div so that the pseudo element is created after matching with the above CSS; however, I cannot work out how to animate this. I can probably get it to work without pseudo elements, like below: https://codepen.io/anon/pen/Oxwzvv However, I was wondering if there is a way without changing my markup to include an empty div.

A: I guess you're saying you want this?

$('.hoverhere')
  .mouseenter(function() {
    $(this).addClass('dim');
  })
  .mouseleave(function() {
    $(this).removeClass('dim');
  });

.hoverhere {
  width: 100%;
  height: 40px;
  background: red;
  color: white;
  text-align: center;
}

.hoverhere::before {
  content: '\A';
  position: absolute;
  width: 20%;
  height: 100%;
  top: 0;
  left: 0;
  background: rgba(0, 0, 0, 0.8);
  opacity: 0;
  visibility: hidden;
  transition: all 1s ease-out;
}

.hoverhere.dim::before {
  opacity: 1;
  visibility: visible;
  transition: opacity 1s ease-out;
}

<script src="https://code.jquery.com/jquery-2.2.4.js"></script>
<p>RANDOM TEXT HERE</p>
<div class="hoverhere">HOVER ON ME</div>
<p>MORE RANDOM TEXT HERE</p>

What it needed was to have a starting point established for the opacity. If this is just for hovering, you don't need the JS at all.
.hoverhere {
  width: 100%;
  height: 40px;
  background: red;
  color: white;
  text-align: center;
}

.hoverhere::before {
  content: '\A';
  position: absolute;
  width: 20%;
  height: 100%;
  top: 0;
  left: 0;
  background: rgba(0, 0, 0, 0.8);
  opacity: 0;
  visibility: hidden;
  transition: all 1s ease-out;
}

.hoverhere:hover::before {
  opacity: 1;
  visibility: visible;
  transition: opacity 1s ease-out;
}

<p>RANDOM TEXT HERE</p>
<div class="hoverhere">HOVER ON ME</div>
<p>MORE RANDOM TEXT HERE</p>
{ "pile_set_name": "StackExchange" }
Community-based coalitions are integral to the National Cancer Institute's strategy for disseminating evidence-based interventions to reduce the burden of cancer in the United States, especially in underserved areas. As collaborative organizations, coalitions offer great promise in marshalling the combined resources of member agencies, businesses, and private citizens toward sustainable health promotion in culturally and community specific ways. At the same time, the discretionary nature of their members' participation poses the constant threat of disengagement. Thus, an understanding of what supports member participation is vital to ensuring the sustained capacity of community-based coalitions to engage in cancer control activities. This study will test a conceptual model of the factors affecting member participation in community-based coalitions and will examine how participation leads to specific prevention activities. The investigators will pursue this agenda through previously collected data from cancer-prevention coalitions within the Appalachia Cancer Network. They will then compare both constructs and patterns of association to those in a sample of Communities That Care coalitions focusing on adolescent behavior problems, many of which also increase cancer risks. The result will be a model that has greater specificity and generality than has been achieved in previous studies. The new model will enable us to see (1) how to measure factors potentially related to participation within community-based health promotion coalitions; (2) what coalition-level attributes affect individual members' participation; and (3) how participation levels support prevention activities at the coalition level. This study will lay the groundwork for subsequent research establishing a mid-range theory of community-based health promotion coalition participation.
{ "pile_set_name": "NIH ExPorter" }
Q: What is the benefit of wrapping a method call in a new object?

I saw this code snippet in some company internal project, which really puzzles me, and nobody else knows who wrote it like that, or why.

private void setDomStorageEnabled(final boolean isEnalbe) {
    new Object() {
        public void setDomStorageEnabled() {
            mWebView.getSettings().setDomStorageEnabled(isEnalbe);
        }
    }.setDomStorageEnabled();
}

Does anybody have an idea on this?

A: This code creates an anonymous inner class and instantiates it, to call a method on that instance. A method which could just as well be called directly.

There is no point in doing this; it doesn't add any value. To the contrary, it obviously only creates confusion amongst readers. Thus the simple answer is: you should throw that thing out and go with

private void setDomStorageEnabled(boolean isEnalbe) {
    mWebView.getSettings().setDomStorageEnabled(isEnalbe);
}

instead.

(I also removed the pointless final, as that also doesn't add any value here. Beyond that, mWebView could be called webView, when sticking to Java naming conventions matters to you.)
{ "pile_set_name": "StackExchange" }
Q: How did the Antarctic Plateau lose the kings' names it originally had?

The Antarctic Plateau was originally named by its explorers: Shackleton named it the King Edward VII Plateau, after the king of England. Amundsen named it the King Haakon VII Plateau, after the king of Norway. Usually the original names for Antarctic features have stuck, but this feature now has a different name. How did this happen? Who renamed it? When did the new name take hold?

A: Originally, there was a lot of confusion about Antarctic nomenclature, with many different countries making different claims and names at different times. For example, this is one notice from 1912:

The plateau around the South Pole was named by Amundsen after King Haakon VII. Sir Ernest Shackleton points out, very, very politely, that Amundsen must have done this inadvertently. Sir Ernest says, in commenting on the trip: “Here I would like to point out that Amundsen, in taking possession and in planting the flag at the South Pole and naming the plateau after King Haakon VII., must, I presume, be unaware of the fact that we, on our expedition, named the same plateau after King Edward VII., an error on his part in nomenclature which he will, no doubt, remedy when he is aware of the facts.” Amundsen replies, also very politely, that Sir Ernest is mistaken in supposing that his plateau is the one that holds the South Pole. The Edward VII. plateau and the King Haakon VII. plateau are not one and the same. The controversy may possibly develop into a bitter one, since the boundaries of each plateau must necessarily be unknown at the present time. -- Current Literature, April, 1912

In 1928 Admiral Byrd started a series of explorations to remap the continent. He side-stepped the issue and simply labeled the area as the "Polar Plateau" on his maps. Later, by the time of the International Geophysical Year, it was realized that "Antarctic Plateau" was necessary, because "Polar Plateau" is ambiguous (which pole?).
{ "pile_set_name": "StackExchange" }
Thursday, October 15, 2009

An Interview with Richard Billingsley

Of the six computer ratings in the BCS formula, the Billingsley Report is unquestionably the most controversial. And of the seven programmers (Anderson & Hester count as two), Richard Billingsley is undoubtedly the most opinionated and colorful. The Billingsley Report has been part of the BCS since its second season, 1999, but the data goes all the way back to 1869, to Princeton vs. Rutgers, the first college football game ever played. With the 2009 BCS Standings set to debut on Sunday, the Guru decided to have a chat with Mr. Billingsley this week in a no-holds-barred, hour-long phone interview. This is what he had to say:

Guru: Why are you a college football fan?

Billingsley: I can honestly say it came naturally. I was born into a football-crazy family. My grandfather and parents were big college football fans, and I remember being a fan as a young child at 5-6 years old. I'm literally a fan from before the game was on TV. It really boggles my mind that college football has gone from being on the radio to black-and-white TV to color TV and now to every outlet you can think of. The first football game that left an impression on me was in 1957; it was Notre Dame beating Oklahoma that ended (the Sooners') 47-game winning streak. I was born in Oklahoma and still live here. I remember the entire state in such shock over that loss, literally like someone had died in the family. That made such an impact on me because I figured if it's that important to adults, there must be something to it, so I was hooked.

Guru: How did you get into the rankings business?

Billingsley: I was interested in college football rankings, such as the AP and the UPI - the coaches poll back in the old days - at a very young age. Back then, the polls came out on Tuesdays. The voting was done so late on Sundays it didn't meet press deadlines, so the rankings did not come out until Tuesdays in the papers.
I'd run home from school looking at those polls in the newspaper and most of the time, I'd go, 'Man, did these guys watch the same games as I did? This is not right. There has to be a better way of ranking teams.' Even when I was a teenager, I thought they were not paying attention to strength of schedule and were playing favorites with traditional powerhouse schools. I didn't think that should be tolerated. So I sat down one day and started tinkering with my own mathematical formula.

Guru: When did you first publish your rankings?

Billingsley: I first ran it for a couple of years, in 1968 and '69. In 1970 it was published for the first time, by a local neighborhood newspaper in Houston. I did that for years. It was just a hobby, something I printed out for friends, family and coworkers. It wasn't until 1994 that I started wondering who'd been No. 1 in my system back in the 1920s, and I thought I could find out and run my system through those years if I had all the information I needed. So I sat down and wrote letters to every Division I school's SID (Sports Information Director), asking for their football records as far back as they had them. And every SID responded and sent me their press guide. That was my starting place. What I did not realize I'd come up against was that there is a vast number of discrepancies between schools - not only did they not agree on the date, but also on the score, or in one instance, who won the game. But eventually I pieced everything together like a jigsaw puzzle. I started in 1869, everything from that point through the current year. I finally finished it in 1996 - it took me two years to do all that. One of my friends would tell me: 'You're either the most dedicated college football fan or you lead the most pathetic life of any man.'

Guru: And how did your rankings get into the BCS?
Billingsley: When I finished the project, I was pretty proud of what I had done, and I took the results and mailed them to Richard Campbell, the director of statistics for the NCAA. He said it was amazing and he'd like to publish this in the NCAA records book. It was in the 1996 records book, and it was the first time ever I had gotten any recognition. And then the real break came in 1999, when Roy Kramer (then BCS chair and commissioner of the SEC) wanted to expand the computer rankings from three to seven. His deputy Charles Bloom called me and said we got your name from the NCAA and would like to see your work. And for the first time, my rankings were included in 1999. The fact is, in the six computers, there are the most time-tested, brilliant mathematical minds - everybody but me, I'm not a mathematician - but one of the requirements was that the BCS needed to see 10 years of rankings on a week-by-week basis and see how far back you can go. (Bloom) was amused that I had such a vast amount of information. How I ended up getting in the BCS was probably because they were impressed with my research.

Guru: You had to alter your formula after the BCS required that margin of victory (MOV) be dropped as a component. How did you feel about that?

Billingsley: That happened after the 2001 season, but I had actually agreed to take (MOV) out before the BCS requested us to take it out. It had become so obvious to me during the 2001 season that the coaches were using the scores of the games to manipulate the computers. It was the most unsportsmanlike thing I've ever seen in my life, and I wanted no part of it. I agree that using MOV gives you a better predictor for future games, a more accurate predictor. But with it or without it, it has never changed my top two teams at the end of the season.

Guru: Your system tends to produce rankings quite distinct from the other five BCS computers. Why is that?

Billingsley: My system is probably the most different from the other computer systems.
The other five guys are looking at it from a purely mathematical standpoint - don't get me wrong, I applaud their systems and I have tremendous respect for what they do. But my system is not purely mathematically based. My rankings are based on rules that are put in place from a fan's perspective, things I think are important to rank college football teams. My rankings are closely related to human voters, an improved AP poll, if you will. It reacts to games more like a human voter but does it without biases like the name of the team, the conference they play in, etc. It's mainly concerned with wins, losses, strength of schedule (SOS) and head-to-head results. The core of my system is not something you see in most computers. It's not necessarily better - in purely mathematical terms, it's not as good - but the public relates very well to the system. Guru: So how do you respond to the accusation that your system is a 'one-man poll'? Billingsley: My response is that it's a 100 percent computer-generated formula, there's no personal input on a week-by-week basis. Anyone can duplicate the system if they have the program. Other than that I wrote the program, I have no influence. Guru: But you do have a preseason ranking. Isn't that inherently biased? Billingsley: In my system, you carry the rankings from one season to the next, exactly from last season's final rankings. You must have a starting position. Both Sagarin and Massey use that philosophy, the only difference is I found a way to do it without MOV and still have a pretty accurate system. Some people think before the season, everybody should start out equal, but my answer to that is it looks fair on the surface to start everyone equal, but it's not logical because we know for a fact that all teams are not equal, so how can we ignore that? Is it fair if Idaho starts out the same as Southern Cal? It's fair to Idaho, but is it fair to USC?
In my mind, (to start out teams equally) skews any hopes of an unbiased SOS. That's why my SOS is dramatically different from what other computers are showing you. Guru: Does it bother you that your rankings get tossed at a disproportionately high rate every week? Billingsley: I certainly know my rankings give a different perspective. I know there are those out there who like to say, 'look, Billingsley gets thrown out more than other computers, so it must not be any good.' My response is this: My computer program is more correlated to human votes than other computers, so of course it gets thrown out more often. But that's what the BCS was trying to accomplish. They don't want me to be like the other computers, they want a different perspective. It doesn't bother me that my rankings get thrown out more often, but it bothers me that people don't understand why. Guru: Do you get disgruntled fans writing you as a result? Billingsley: I do get hate mail from fans but you can't satisfy everybody. It's always from the fans of a few teams feeling that they're getting the shaft. If they love your ranking, they won't write, so 90 percent of the mail is bad. When this happened 10 years ago, it broke my heart. But over the years, it just rolls off my back. I have always tried my best to respond to every email I get, but I finally had to post a message that says that I won't respond to anything with vulgarity. I've gotten some disgustingly vulgar email - those people really are not fans, they're just disgruntled human beings. You may not agree with me but at least you can be respectful. Guru: What would you do to change the current BCS formula? Billingsley: A couple of things. First, the weight between computers and humans should be 50-50. That's the way it was and then it changed because of an overreaction to that particular season (2003). In this formula, we're ignoring SOS - 33 percent is not enough weight to accurately describe SOS.
The voters just don't have the capacity to gauge SOS the way they should, even if they had the inclination to do so. There is a difference, from a mathematical standpoint, between a 47th-ranked team and a 48th-ranked team, but (the voters) cannot give us that distinction. That's why computers should have more weight. Second, they need to stop throwing out the high number (among the computers). They can throw out the low number but keep the high number. The reason I'm saying that is the six formulas we have now are narrowed down from hundreds of computer systems, and they're the best we have, so we really shouldn't toss out two more in every ranking. For example, last season, I consistently had USC higher than other systems and at the end of the season, guess where they ended up? So my question is, why is it that the Trojans are not allowed to get the benefit of a system that gives them a higher rating? Guru: Aside from the formula, do you want to see the BCS changed in any way? Billingsley: I'm a fan of the BCS, but not because I'm a part of it. I think this system is the best way to bring together a No. 1 and No. 2 and keep the integrity of the season intact. I'm not a fan of a playoff. I prefer the way it is. In some seasons, the plus-one would make a lot of sense but it doesn't in every season. So whatever we do, it's got to fit every season, that's what people forget. If you make a change based on one season's result, you're making a mistake. Any change that takes place should be made over a long period of investigation and research. We don't need a plus-one. If I have one criticism (of the BCS), it's that in the early years they made too many changes based on a knee-jerk reaction. But in their defense, they didn't have a choice. It was something we were all experiencing for the first time.
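Billingsley's objection to discarding the high computer number is easier to see with the arithmetic in front of you. The sketch below assumes the commonly described BCS computer scheme of that era (25 points for a No. 1 ranking down to 1 point for No. 25, the best and worst of the six computers dropped, and the remaining four measured against a 100-point maximum); the exact point scale is an assumption, not something stated in the interview.

```python
def bcs_computer_score(ranks):
    """Approximate the BCS computer component for one team.

    `ranks` holds the team's position in each of the six computer polls.
    Each rank converts to points (25 for 1st down to 1 for 25th, 0 below
    that); the single best and single worst values are dropped and the
    remaining four are summed against the 100-point maximum.
    """
    points = sorted(max(0, 26 - r) for r in ranks)
    trimmed = points[1:-1]           # drop the lowest and the highest value
    return sum(trimmed) / 100.0      # fraction of the possible 100 points

# A team ranked No. 1 by five computers but 8th by the sixth:
print(bcs_computer_score([1, 1, 1, 1, 1, 8]))  # 1.0 -- the outlier is dropped
```

With the trim in place, a single dissenting computer, high or low, has no effect on the score, which is exactly the "benefit" Billingsley argues a team like USC should have been allowed to keep.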
The problem with Billingsley's ranking system is that two teams could play identical schedules, have identical records, but the team that had the better season the previous year will be ranked higher. He talks about trying to be fair, but this is about the most unfair thing I can think of. Billingsley's system still allows for teams to manipulate their schedule to game themselves to the top of the rankings. The best way to evaluate teams is to see what they did against good teams (teams with a winning record -- 7 out of 12, with FCS schools not counting). Good teams rarely lose to bad teams, so the best teams are those that win against the best competition. Looking back to 2002, while the Ohio State/Miami Fiesta bowl was good and all, USC and Georgia both played harder schedules in harder conferences and were at least as good, if not better, than either Ohio State or Miami, with each school winning more games against good teams in the regular season. Without a playoff, we are left with more mythical national champions. The only difference is we are told that Florida and OU were the best two teams in 2008 -- I disagree -- Texas, Utah, and USC were equally deserving and demonstrate why a playoff is necessary. "The core of my system is not something you see in most computers. It's not necessarily better - in purely mathematical terms, it's not as good - but the public relates very well to the system." I've always thought the entire purpose of the computer rankings was to be purely mathematical, unbiased, and who cares if the public relates well; as he even says elsewhere in the interview, people are happy when their team is ranked highly, and mad when they're not. The general public doesn't seem to be the best judge of these things. Personally I'd like to see everyone follow Colley's lead and actually explain what it is they're doing. This way we can judge for ourselves (or at least those of us who are mathematically inclined) whether the algorithms are worth their weight.
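The commenter's "wins against good teams" rule can be made concrete. Everything in this sketch is hypothetical — the winning-record threshold, the exclusion of FCS games, and the season data simply follow the rule as the comment states it.

```python
def is_good(fbs_wins, fbs_games):
    """A 'good' opponent has a winning record against FBS competition
    (e.g. 7-5 or better in a 12-game slate). FCS opponents, passed in
    with zero FBS games, never qualify."""
    return fbs_games > 0 and fbs_wins / fbs_games > 0.5

def quality_wins(schedule):
    """schedule: list of (won_game, opp_fbs_wins, opp_fbs_games) tuples.
    Counts only victories over 'good' opponents."""
    return sum(1 for won, w, g in schedule if won and is_good(w, g))

# Made-up three-win stretch: one win over an 8-4 team, one over a
# 5-7 team, and one over an FCS school (excluded entirely).
season = [(True, 8, 12), (True, 5, 12), (True, 0, 0)]
print(quality_wins(season))  # 1
```

Under this metric, padding a schedule with weak or FCS opponents adds nothing, which is the manipulation the commenter says Billingsley's system still permits.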
Samuel Chi. The Guru is a journalist who takes time from his busy schedule to provide this important public service. And of course, the Guru is so well-rounded that he has interests beyond the gridiron and crystal ball. Check out his other adventures -- after first buckling your seat belt.
Intestinal transport of irinotecan in Caco-2 cells and MDCK II cells overexpressing efflux transporters Pgp, cMOAT, and MRP1. Irinotecan (CPT-11) is a water-soluble camptothecin (CPT) derivative that has been recently approved in the United States for patients as a first-line therapy in advanced colorectal cancer. Phase I clinical trials using oral CPT-11 have shown poor and variable oral bioavailability. The present study was designed to investigate the intestinal absorption and efflux mechanisms of CPT-11 using in vitro cell culture models, Caco-2 cells, and engineered Madin-Darby canine kidney (MDCK) II cells overexpressing P-glycoprotein (Pgp), canalicular multispecific organic anion transporter (cMOAT), and multidrug resistance-associated protein (MRP1). The intestinal absorptive and secretory transport of CPT-11 was investigated using Caco-2 cell monolayers. Secretory transport was concentration-dependent and saturable. The secretory efflux permeability (P(eff)) of CPT-11 decreased with decreasing temperature, with an estimated activation energy of 19.6 +/- 2.9 kcal/mol, suggesting the involvement of active transporters. The involvement of potential secretory transporters was further characterized in MDCK II cells. The secretory efflux carrier permeability (P(c)) was approximately 4- and approximately 2-fold greater in MDCK II/Pgp and MDCK II/cMOAT cells than that in MDCK II/wild-type cells. Furthermore, the secretory efflux P(eff) of CPT-11 was significantly decreased by the Pgp inhibitors elacridar (GF120918) (IC50 = 0.38 +/- 0.06 microM) and verapamil (IC50 = 234 +/- 48 microM) in MDCK II/Pgp cells and by the cMOAT inhibitor 3-([(3-(2-[7-chloro-2-quinolinyl]ethyl)phenyl]-[(3-dimethylamino-3-oxopropyl)-thio)-methyl]-thio) propanoic acid (MK571) (IC50 = 469 +/- 60 microM) in MDCK II/cMOAT cells. Overall, the current study suggests that Pgp and cMOAT are capable of mediating the efflux of CPT-11 in vitro.
Since both Pgp and cMOAT are expressed in the intestine, liver, and kidney, it is likely that these efflux transporters play a significant role in limiting the oral absorption and disposition of this important anticancer drug.
Type-II Symmetry-Protected Topological Dirac Semimetals. The recent proposal of the type-II Weyl semimetal state has attracted significant interest. In this Letter, we propose the concept of the three-dimensional type-II Dirac fermion and theoretically identify this new symmetry-protected topological state in the large family of transition-metal icosagenides, MA_{3} (M=V, Nb, Ta; A=Al, Ga, In). We show that the VAl_{3} family features a pair of strongly Lorentz-violating type-II Dirac nodes and that each Dirac node can be split into four type-II Weyl nodes with chiral charge ±1 via symmetry breaking. Furthermore, we predict that the Landau level spectrum arising from the type-II Dirac fermions in VAl_{3} is distinct from that of known Dirac or Weyl semimetals. We also demonstrate a topological phase transition from a type-II Dirac semimetal to a quadratic Weyl semimetal or a topological crystalline insulator via crystalline distortions.
Introduction {#s1} ============ Hepatocellular carcinoma (HCC) is the most common type of primary liver cancer and the third leading cause of death from cancer. A variety of etiological and risk factors including hepatitis virus (HBV or HCV) infection, alcohol abuse and aflatoxin ingestion have been associated with hepatocarcinogenesis [@pone.0044206-Sanyal1], [@pone.0044206-Jemal1], [@pone.0044206-Pei1], [@pone.0044206-Wang1]. The development of HCC is a multi-step process from chronic hepatitis, to cirrhosis, to dysplastic nodules, and to malignant tumors with various genetic and epigenetic alterations [@pone.0044206-Pei1]. Although numerous studies have been devoted to delineating the molecular pathogenesis of HCC, the incidence and mortality of HCC have not been reduced over the past few decades. Surgery currently offers the only possibility of prolonged survival for HCC patients. Unfortunately, recurrence occurs in more than two-thirds of these patients despite initial curative intent and converts the situation to a dismal prognosis [@pone.0044206-Sanyal1], [@pone.0044206-Villanueva1]. It is presently a challenge to identify patients who are at high risk of early recurrence after undergoing potentially curative treatment for HCC. Various surrogate clinicopathologic features such as lymphovascular invasion, capsular invasion, satellite lesions, and tumor numbers are often used, but with varying reliability reported [@pone.0044206-Wang1]. Additionally, most HCCs are diagnosed at advanced stages when there is no effective treatment, so there is an urgent need to develop novel therapeutic strategies for the treatment of HCC [@pone.0044206-Villanueva1]. MicroRNAs (miRNAs) are a class of highly conserved, small non-coding RNAs that play essential roles in the post-transcriptional regulation of gene expression through base pairing with the 3′-untranslated region (3′-UTR) of target mRNAs.
Because miRNAs have been discovered to target a large proportion of mammalian genes, many studies have indicated that miRNAs play critical roles in the regulation of many biological functions and consequently, miRNAs play crucial roles in the development of many human diseases, including cancer [@pone.0044206-EsquelaKerscher1], [@pone.0044206-Babashah1]. The dysregulation of miRNAs in HCC has been reported using miRNA expression profiling studies, with several miRNAs reported as enhancers (miR-30d, miR-151, miR-210) or suppressors (miR-122, let-7g, miR-29b, miR-193b, miR-194, miR-139 and miR-124) of the metastatic process [@pone.0044206-Law1]. While the down-regulation of miR-214 in HCC has been reported [@pone.0044206-Gramantieri1], [@pone.0044206-Wong1], [@pone.0044206-Jiang1], [@pone.0044206-Li1], [@pone.0044206-Wang2], its molecular roles in recurrent HCC remain largely unknown. In this study, we have characterized CTNNB1 and EZH2 as two functional downstream targets of miR-214 and deciphered the possible roles of these downstream targets in early recurrent HCC disease. Materials and Methods {#s2} ===================== Tissue Specimens and Cell Cultures {#s2a} ---------------------------------- Cancerous and non-cancerous liver tissues were obtained from patients who underwent partial hepatectomy as curative treatment for HCC. All tumor tissues were divided into two portions and immediately snap-frozen in liquid nitrogen. Half of the sample was stored in liquid nitrogen until further use while the other portion was stained with hematoxylin and eosin and evaluated by an independent pathologist. All cancerous tissues studied were at least 70% cancerous. All tissue samples employed in this study were approved and provided by the Tissue Repository of the National Cancer Center Singapore, in accordance with the policies of its Ethics Committee.
Written informed consent was obtained from all participating patients and all clinical and histopathological data provided to the researchers were rendered anonymous [@pone.0044206-Wang1]. The human HCC or hepatoma cell lines (HepG2, Hep3B, Huh-7, PLC/PRF/5, MHCC97-L, HCCLM3, MHCC97-H, SK-HEP-1, HLE, SNU-449 and SNU-475) were cultured in Dulbecco's modified Eagle's medium (DMEM) (Invitrogen, Carlsbad, CA) with 10% FBS and 100 units/mL of penicillin and 100 µg/mL of streptomycin (Invitrogen). RNA Extraction and Real-time Quantitative RT-PCR {#s2b} ------------------------------------------------ Total RNA from the HCC tissue samples or cell lines was extracted using TRIzol reagent (Invitrogen). The quality and quantity of isolated total RNA was assessed using the Agilent 2100 Bioanalyzer and NanoDrop ND-1000 Spectrophotometer (Agilent, Santa Clara, CA, USA). qRT-PCR was performed as described [@pone.0044206-Xia1], using primers listed in [Table S1](#pone.0044206.s006){ref-type="supplementary-material"}. For miRNA detection, the total RNA samples were polyadenylated and reverse transcribed for a two-step quantitative RT-PCR reaction using the NCode™ VILO™ miRNA cDNA Synthesis Kit and EXPRESS SYBR® GreenER™ miRNA qRT-PCR Kits (Invitrogen, CA) according to the manufacturer's instructions. The sequence-specific forward primers for mature hsa-miR-214 and the U6 internal control were CAGGCACAGACAGGCAGT (18 bps, GC = 61.12%, Tm = 61.3) and 5′-CTCGCTTCGGCAGCACA-3′, respectively. For mRNA detection, the total RNA was reverse transcribed by using the SuperScript III First-Strand Synthesis System for RT-PCR (Invitrogen, CA). The qPCR was performed by using SsoFast™ EvaGreen® Supermix (Bio-Rad). *U6* or *HPRT1* was used as the internal control. The expression level of miR-214, EZH2, CTNNB1 or CDH1 was calculated using the expression ratios miR-214/U6, EZH2/HPRT1, CTNNB1/HPRT1 and CDH1/HPRT1 (i.e. 2^--ΔCt^) [@pone.0044206-Wong1], [@pone.0044206-Livak1].
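The 2^-ΔCt^ normalization described above is a one-line calculation; the Ct values in the sketch below are illustrative and not taken from the study.

```python
def relative_expression(ct_target, ct_reference):
    """Relative expression by the 2^-dCt method cited in the paper:
    dCt = Ct(target) - Ct(reference); expression = 2 ** -dCt.
    Used here for ratios such as miR-214/U6 or EZH2/HPRT1."""
    return 2.0 ** -(ct_target - ct_reference)

# An illustrative miR-214 Ct of 28.0 against a U6 Ct of 22.0:
print(relative_expression(28.0, 22.0))  # 0.015625, i.e. 2^-6
```

A higher Ct means more PCR cycles were needed to detect the transcript, so a target 6 cycles behind its reference is expressed at roughly 1/64 of the reference level.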
Cell Viability Assay {#s2c} -------------------- A fragment containing human miR-214 was PCR-amplified from normal genomic DNA and subcloned into the pLL3.7 vector to generate pLL3.7-Pre-miR-214 (P-miR-214) ([Figure S3](#pone.0044206.s003){ref-type="supplementary-material"}). Both HLE and SK-HEP-1 cells were either transfected with the pLL3.7-Pre-miR-214 (P-miR-214) or pLL3.7-control vector (P-miR-control) using the GenJet™ Plus DNA in vitro transfection reagent (SignaGen, MD). The cell viability was assessed by using MTS \[3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-(4-sulfophenyl)-2H-tetrazolium\] assays. Cells were seeded into 96-well plates at a density of 5×10^3^ per well (100 µL) 24 h post-transfection. For the MTS assay, the CellTiter 96 aqueous one solution cell proliferation assay kit (Promega, Madison, WI, USA) was used. Briefly, at each of the desired time points (24 h, 48 h and 72 h), 20 µL of the MTS reagent was added into each well and the cells were incubated at 37°C for an additional 2 h. The absorbance was detected at 490 nm using a Wallac Victor 1420 Multilabel plate reader (PerkinElmer, San Diego, CA). Each experiment was repeated three times. In vitro Matrigel Invasion Assay {#s2d} -------------------------------- Cell invasiveness was assessed using BioCoat Matrigel invasion chambers (BD Biosciences, Bedford, MA) according to the guidelines. Briefly, 800 µL of the cell culture medium with 10% FBS was added to the lower chamber as a chemoattractant. The transfected HLE and SK-HEP-1 cells were resuspended in 500 µL serum-free medium and seeded onto the rehydrated insert at 24 h after transfection. After another 24 h of incubation at 37°C, any non-invading cells on the upper surface of the Matrigel membrane were gently removed using a cotton-tipped swab. The invaded cells were then fixed with 100% methanol and stained with 1% toluidine blue.
The stained invasive cells on the lower surface of the membrane were photographed under an inverted light microscope with a 40× objective and quantified by manual counting in three randomly selected areas. This experiment was performed in duplicate in three independent experiments. Establishment of Stable HCC Cell Lines {#s2e} -------------------------------------- One day before transfection, HLE and SK-HEP-1 cells were seeded onto 6-well plates at about 80% confluence. The cells were either transfected with pLL3.7-Pre-miR-214 (P-miR-214) or pLL3.7-miR-control (P-miR-control) vectors using the GenJet™ Plus DNA in vitro transfection reagent (SignaGen, MD), according to manufacturer's instructions. After 48 h, the cells were subcultured to 10% confluence in a medium containing 1 µg/mL of puromycin (Sigma-Aldrich, St. Louis, MO). When all cells in the non-transfected control culture were killed, antibiotic-resistant clones were picked and passaged through the medium containing puromycin. The expression of miR-214 was confirmed by real-time qRT-PCR, as described above. Soft Agar Colony Assay {#s2f} ---------------------- The stably transfected cells were mixed with tissue culture medium containing 0.6% low-melting-point agarose (Sigma, St. Louis, MO), resulting in a final agar concentration of 0.3%. Then, 500 µL of the cell suspension (800 cells) was immediately plated in 24-well plates coated with 500 µL 0.6% agar in tissue culture medium and cultured at 37°C with 5% CO~2~. The plates were kept in the incubator and the number of colonies formed was counted under an inverted light microscope with a 40× objective after 2--3 weeks. The assay was analyzed in duplicate in three independent experiments.
Luciferase Reporter Assay {#s2g} ------------------------- The 3′-UTR sequence of EZH2 and CTNNB1 predicted to interact with miR-214 or a mutated sequence within the predicted target sites was synthesized and inserted into the XbaI and FseI sites of the pGL3 control vector (Promega, Madison, WI) ([Figure S3](#pone.0044206.s003){ref-type="supplementary-material"}). These constructs were called pGL3-EZH2-3′UTR-wt or pGL3-EZH2-3′UTR-mut, pGL3-CTNNB1-3′UTR-wt or pGL3-CTNNB1-3′UTR-mut, respectively. For the reporter assay, SK-HEP-1 cells were plated onto 24-well plates and transfected with the above constructs and P-miR-214 or P-miR-control vectors using the GenJet™ Plus DNA in vitro transfection reagent (SignaGen, MD). A Renilla luciferase vector pRL-SV40 (Promega, Madison, WI) was also co-transfected in order to normalize the differences in transfection efficiency. After 48 h, the cells were harvested and assayed using the dual-luciferase reporter assay system (Promega, Madison, WI) according to the manufacturer's instructions. The experiment was performed in duplicate in three independent experiments. ![Downregulation of miR-214 is associated with the early recurrence of human HCC.\ The level of expression of miR-214 was analyzed by qRT-PCR and normalized to U6. (A) Expression of miR-214 in 20 pairs of HCC tumor tissues was significantly lower compared to 20 matched histologically normal tissues as well as 10 histologically normal liver tissues from colorectal cancer patients with liver metastases (*P*\<0.01). (B) Low miR-214 expression was associated with early recurrent HCC disease when studied in an independent cohort of 50 HCC samples. The average expression level of miR-214, analyzed by qRT-PCR, was lower in HCC patients with early recurrence (≤2 years) (n = 29) than patients with no recurrence in the same time period (n = 21). (C) The expression of miR-214 was associated with survival in patients with HCC.
The median expression value obtained for miR-214 of the 50 samples studied was employed as the cut-off point. Fisher's exact test and Kaplan-Meier analysis were used to demonstrate that low miR-214 expression was significantly associated with early recurrent disease and a relatively poorer disease-free survival rate.](pone.0044206.g001){#pone-0044206-g001} Western Blotting {#s2h} ---------------- The protein concentrations were determined using the Bradford protein assay (Bio-Rad, Hercules, CA, USA). Antibodies for western blot: rabbit EZH2 (H-80) and rabbit beta-catenin (H-102) (Santa Cruz, CA); rabbit E-Cadherin (24E10) (Cell Signaling, MA); goat GAPDH (GenScript, NJ). Flow Cytometry {#s2i} -------------- The stable HLE cells were transfected with the anti-miR-214 construct (miRZip-214; System Biosciences, Mountain View, CA) or pLVTHM-EZH2 [@pone.0044206-Lu1]. Transfected cells were resuspended in 100 µl staining buffer containing 10% FBS and put on ice for 20 min to block Fc receptors. After incubating with primary PerCP-Cy5.5-conjugated anti-human EpCAM antibodies or isotype control (BD Biosciences, Bedford, MA) for 1--2 h on ice in the dark, the cells were washed twice with 1 ml ice-cold staining buffer and centrifuged (400×g) at 4°C for 5 min. The collected cells were suspended in 500 µl staining buffer solution and were detected using a FACSCanto II flow cytometer (BD Biosciences). All flow cytometry results were repeated three times. ![MiR-214 inhibits the invasion of HCC cells.\ (A) The expression of miR-214 in liver tumor cell lines was significantly lower than in the normal liver tissues (NN1 and NN2) (\* *P*\<0.05). (B and C) Re-expression of miR-214 following transfection with P-miR-214 inhibited the invasion of HLE (B) and SK-HEP-1 (C) cells. The upper panels in the figures showed images of transwell migration.
The bar graph below the images indicated the mean number of invaded cells (± SD) counted under the microscope in three randomly selected fields (magnification ×40). \**P*\<0.05.](pone.0044206.g002){#pone-0044206-g002} Animal Studies {#s2j} -------------- All experiments on mice were approved by the SingHealth Institutional Animal Care and Use Committee (IACUC). The stably transfected SK-HEP-1 cells were resuspended in PBS and implanted into the right and left flanks (5×10^6^ cells per flank) of male BALB/c nude mice via subcutaneous injections. Tumor volumes were determined each week by measuring their length (a) and width (b) using a vernier caliper. The tumor volume (V) was calculated according to the formula V = ab^2^/2. The statistical significance between tumor sizes in the P-miR-214 and P-miR-control transfected groups was evaluated using the Student's *t* test. ![Re-expression of miR-214 significantly inhibits cell growth in vitro and tumorigenic properties of HCC cells.\ (A and B) Growth of HLE (A) and SK-HEP-1 (B) cells *in vitro* at different time points following the re-expression of miR-214 mediated by transfection with P-miR-214, \**P*\<0.05. (C) Stable expression of miR-214 inhibited the anchorage-independent growth of SK-HEP-1 cells in soft agar. The upper section shows images of colony formation. The bar graph below the figures showed the mean number of colonies formed (± SD) and counted under the microscope in three randomly selected fields (magnification, ×40). \**P*\<0.05. (D) Stable expression of miR-214 inhibited tumorigenicity of SK-HEP-1 cells. The upper section showed images of the tumors obtained in mice at the end of the eighth week. The bar graph indicated the average of overall tumor volume measured each week (n = 6 mice per group).](pone.0044206.g003){#pone-0044206-g003} Survival and Statistical Analysis {#s2k} --------------------------------- The experimental data are presented as the mean ± standard deviation (SD). 
All statistical analyses were performed using ANOVA or a two-tailed Student's *t* test (GraphPad Prism 5). Disease-free survival (DFS) was measured from the date of hepatic resection to the date of recurrence within 2 years or until the last follow-up. The survival curves and univariate analysis were calculated using the Kaplan-Meier method and statistically compared using a log-rank test. Any factors that were significant at P\<0.05 in the univariate analysis were candidates for entry into a multivariate Cox proportional hazards model, the results of which are presented for both the first and last steps of the reverse selection of random variables. Differences were considered statistically significant when the P-values were less than 0.05. Results {#s3} ======= Downregulation of miR-214 is Associated with the Early Recurrence of Human HCC {#s3a} ------------------------------------------------------------------------------ The down-regulation of miR-214 has previously been observed in human HCC. In this study, we first validated the suppression of miR-214 in human HCC by qRT-PCR in paired tumor and non-tumor liver tissues from 20 HCC patients, as well as in 10 samples of histologically normal liver tissues from colorectal cancer patients with liver metastases ([Fig. 1A](#pone-0044206-g001){ref-type="fig"} and [Figure S1](#pone.0044206.s001){ref-type="supplementary-material"}). Further analysis of HCC tissues, using qRT-PCR, of an independent cohort of 50 HCC patients, of whom 29 had early recurrent disease (≤2 years) and 21 had late recurrent disease, demonstrated that miR-214 was significantly suppressed in samples of HCC patients with early recurrent disease, P\<0.01, ([Fig. 1B](#pone-0044206-g001){ref-type="fig"}).
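The disease-free survival analysis described above can be sketched with a minimal Kaplan-Meier estimator. The follow-up data below are invented for illustration only (they are not from the paper), and the log-rank comparison between groups is omitted.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate from follow-up times (months)
    and event flags (1 = recurrence observed, 0 = censored).
    Returns (time, S(t)) pairs at each time where an event occurs."""
    data = sorted(zip(times, events))
    s = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(1 for tt, e in data if tt == t and e == 1)  # events at t
        n = sum(1 for tt, _ in data if tt >= t)             # still at risk at t
        if d:
            s *= 1.0 - d / n
            curve.append((t, s))
        while i < len(data) and data[i][0] == t:            # skip tied times
            i += 1
    return curve

# Illustrative follow-up after resection (months), not real patient data:
times = [6, 10, 10, 14, 20, 24]
events = [1, 1, 0, 1, 0, 0]
for t, s in kaplan_meier(times, events):
    print(f"t = {t:2d} months  S(t) = {s:.3f}")
```

Censored subjects (event flag 0) leave the risk set without forcing a drop in the curve, which is why the method suits cohorts where many patients have not yet recurred at the last follow-up.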
Using the median expression value obtained for miR-214 of the 50 samples studied as the cut-off point, Fisher's exact test and Kaplan-Meier analysis demonstrated that low miR-214 expression was significantly associated with early recurrent disease and a relatively poorer disease-free survival rate ([Fig. 1C](#pone-0044206-g001){ref-type="fig"}). ![*EZH2* and *CTNNB1* are downstream targets of miR-214 and both are upregulated in human HCC tissue samples.\ (A) Effect of miR-214 on *EZH2* and *CTNNB1* expression, as shown by a luciferase reporter assay. The data were normalized by the ratio of Firefly and Renilla luciferase activities measured at 48 h post-transfection. The bar graph showed the mean ± SD in three independent transfection experiments. \**P*\<0.05. (B) Western blotting analysis of EZH2, β-catenin, and E-cadherin expression in P-miR-control- and P-miR-214-transfected SK-HEP-1 cells. (C-E) The expression of *EZH2* (C), *CTNNB1* (D) and *CDH1* (E) in 20 paired human HCC tissue samples and 10 samples of histologically normal liver tissues was validated by qRT-PCR.](pone.0044206.g004){#pone-0044206-g004} Re-expression of miR-214 Significantly Inhibits Cell Growth and Invasion in vitro and Tumorigenic Properties in vivo of HCC Cells {#s3b} --------------------------------------------------------------------------------------------------------------------------------- Next, we studied the expression of miR-214 in a panel of human liver cancer cell lines including HepG2, Hep3B, Huh-7, PLC/PRF/5, MHCC97-L, HCCLM3, MHCC97-H, SK-HEP-1, HLE, SNU-449 and SNU-475. Consistent with the data obtained from human HCC tissue samples, the down-regulation of miR-214 in these human liver cancer cell lines was observed ([Fig. 2A](#pone-0044206-g002){ref-type="fig"}).
Interestingly, the expression of miR-214 was apparently more down-regulated in cell lines possessing a higher reported capability of invasion and metastasis (such as HLE and SK-HEP-1) than in cell lines with a lower invasion and metastasis capability (such as HepG2 and PLC/PRF/5) [@pone.0044206-Fuchs1] ([Fig. 2A](#pone-0044206-g002){ref-type="fig"}). So we transfected HLE and SK-HEP-1 cells, which express miR-214 at a low level, with the pLL3.7-Pre-miR-214 (P-miR-214) or pLL3.7-miR-control vector (P-miR-control) in order to overexpress miR-214. The transfection efficiency and overexpression effects of miR-214 are shown in [Figure S2](#pone.0044206.s002){ref-type="supplementary-material"}. Re-expression of miR-214 in HLE and SK-HEP-1 cells expressing a low basal level of miR-214 significantly reduced the invasiveness and motility of HLE and SK-HEP-1 cells compared to transfection with the P-miR-control ([Fig. 2B and 2C](#pone-0044206-g002){ref-type="fig"}). ![Roles of *EZH2*, *CTNNB1* and *CDH1* on the growth and invasion of HCC cells.\ (A) Silencing of *EZH2* significantly inhibited the growth and significantly decreased the ability of SK-HEP-1 cells to invade. (i) Western blots showing the silencing of *EZH2* by pLVTHM-shEZH2. (ii) The effect of silencing *EZH2* on cell growth at different time points. (iii) The inhibitory effect of silencing *EZH2* on cell invasion. (B) Silencing of *CTNNB1* significantly inhibited the growth of SK-HEP-1 cells. (i) Western blots showing the reduction of β-catenin after transfection with shRNA-β-catenin. (ii) Effects of silencing β-catenin on cell growth at different time points. (C) Over-expression of *CDH1* significantly inhibited the ability of SK-HEP-1 cells to invade. (i) Western blots showing the overexpression of CDH1 by pcDNA3.1-CDH1 plasmid transfection. (ii) The inhibitory effects of overexpressing *CDH1* on SK-HEP-1 cell invasion.
(D) Western blots showing the silencing of EZH2 significantly decreased the expression of CTNNB1 and induced CDH1 expression.](pone.0044206.g005){#pone-0044206-g005} Subsequently, functional overexpression of miR-214 in HLE and SK-HEP-1 cells also significantly inhibited cell proliferation according to the MTS-based CellTiter 96 cell proliferation assay ([Fig. 3A and 3B](#pone-0044206-g003){ref-type="fig"}) and colony formation ([Fig. 3C](#pone-0044206-g003){ref-type="fig"}). The MTS assay showed that overexpression of miR-214 significantly inhibited the growth of HLE and SK-HEP-1 cells at 72 h post-transfection, while P-miR-control had no effect ([Fig. 3A and 3B](#pone-0044206-g003){ref-type="fig"}). Moreover, stably overexpressing miR-214 in SK-HEP-1 cells resulted in a significant reduction in soft agar colony formation ([Fig. 3C](#pone-0044206-g003){ref-type="fig"}), indicating that the overexpression of miR-214 inhibited the anchorage-independent growth of SK-HEP-1 cells in soft agar. Furthermore, SK-HEP-1 cells stably overexpressing miR-214 following transfection with P-miR-214 (SK-miR-214) showed reduced tumorigenesis in nude mice. The mean volume of the tumors generated from SK-miR-214 cells at 8 weeks post-injection was significantly smaller than that in mice injected with SK-miR-control cells ([Fig. 3D](#pone-0044206-g003){ref-type="fig"}). These data suggested that the overexpression of miR-214 in human HCC cells can significantly inhibit their growth and invasion *in vitro* and tumorigenicity *in vivo*. ![Further silencing of miR-214 in HLE cells with miRZip-214 increases EpCAM^+^ stem-like cell population by activating the β-catenin pathway.\ (A) Further silencing of miR-214 in HLE cells with miRZip-214. (B) Silencing miR-214 following transfection with miRZip-214 increased EZH2 and β-catenin expression. (C) Overexpression of *EZH2* with pLVTHM-EZH2 activated β-catenin expression.
(D) Flow cytometric analysis of EpCAM^+^ stem-like HLE cells following the silencing of miR-214 (i) and overexpression of *EZH2* (ii).](pone.0044206.g006){#pone-0044206-g006} miR-214 can Regulate the Canonical wnt Signaling Pathway in HCC by Targeting CTNNB1, its Key Downstream Component {#s3c} ----------------------------------------------------------------------------------------------------------------- In order to elucidate the molecular roles of miR-214 in HCC, we identified its potential molecular targets using computational algorithms. With an integrated target prediction tool (miRecords), *CTNNB1* was predicted to be a potential target of miR-214 by miRanda, PITA, RNAhybrid and TargetScan. A previous study showed that the polycomb protein EZH2 is regulated by miR-214 in skeletal muscle and embryonic stem cells [@pone.0044206-Juan1]. In addition, *EZH2* was predicted to be a potential target of miR-214 by RNAhybrid [@pone.0044206-Xiao1]. The predicted interactions between miR-214 and the EZH2 or CTNNB1 3′-UTR were analyzed with RNAhybrid 2.2 or TargetScan, and these sequences are shown in [Figure S3](#pone.0044206.s003){ref-type="supplementary-material"}. ![Expression of *EZH2, CTNNB1,* and *CDH1*, downstream targets of miR-214, is associated with early recurrent disease and survival of patients with HCC.\ The level of expression of *EZH2*, *CTNNB1,* and *CDH1* in the 50 HCC samples described in [Figure 1](#pone-0044206-g001){ref-type="fig"} was analyzed by qRT-PCR and normalized to *HPRT1*. (A--C) The average expression level of *EZH2* and *CTNNB1* was significantly higher in samples from HCC patients with early recurrence (≤2 years) (n = 29) than in patients with no recurrence over the same period (n = 21). In comparison, the average expression level of *CDH1* was significantly lower in samples from HCC patients with early recurrence (C). 
(D--F) The median expression value obtained for *EZH2*, *CTNNB1*, and *CDH1* across the 50 samples studied was employed as the cut-off point and used independently in Kaplan-Meier analyses of the disease-free survival rate. (G) Combining the expression of *EZH2*, *CTNNB1* and *CDH1* to predict tumor recurrence and disease-free survival rate using Kaplan-Meier analysis.](pone.0044206.g007){#pone-0044206-g007} To validate that *EZH2* and *CTNNB1* were indeed direct targets of miR-214, we tested the ability of miR-214 to recognize the 3′-UTR of *EZH2* and *CTNNB1* mRNA using a dual-luciferase reporter assay. The 3′-UTR sequences of *EZH2* and *CTNNB1* predicted to interact with miR-214 were synthesized and inserted into the XbaI and FseI sites of the pGL3 control vector (Promega, Madison, WI). Mutations were also introduced into the predicted interacting sequences ([Figure S3](#pone.0044206.s003){ref-type="supplementary-material"}). These constructs were designated pGL3-EZH2-3′UTR-wt, pGL3-EZH2-3′UTR-mut, pGL3-CTNNB1-3′UTR-wt and pGL3-CTNNB1-3′UTR-mut. Co-transfection of P-miR-214 with pGL3-EZH2-3′UTR-wt or pGL3-CTNNB1-3′UTR-wt in SK-HEP-1 cells significantly suppressed luciferase activity ([Fig. 4A](#pone-0044206-g004){ref-type="fig"}). The suppressive effect of P-miR-214 was abrogated with the pGL3-EZH2-3′UTR-mut and pGL3-CTNNB1-3′UTR-mut constructs, which contained mutations in the predicted miRNA-target interaction sequences, confirming that *EZH2* and *CTNNB1* are indeed direct downstream functional targets of miR-214 ([Fig. 4A](#pone-0044206-g004){ref-type="fig"}). Overexpression of miR-214 also inhibited the protein expression of EZH2 and CTNNB1 ([Fig. 4B](#pone-0044206-g004){ref-type="fig"}). Moreover, expression of E-cadherin, which often complexes with β-catenin, was induced following the overexpression of miR-214 ([Fig. 4B](#pone-0044206-g004){ref-type="fig"}). 
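The dual-luciferase readout above is a ratio-based normalization: firefly signal from the 3′-UTR reporter is divided by a co-transfected Renilla control before conditions are compared. As a minimal sketch of that arithmetic (all readings and condition labels below are hypothetical, not data from this study):

```python
# Hypothetical dual-luciferase quantification. Firefly activity is divided by
# Renilla activity in each replicate well to control for transfection
# efficiency, then expressed relative to the miR-control condition.
raw = {
    # condition: [(firefly, Renilla) readings per replicate well]
    "P-miR-control + EZH2-3'UTR-wt": [(9.8e5, 1.1e5), (1.02e6, 1.2e5), (9.5e5, 1.0e5)],
    "P-miR-214 + EZH2-3'UTR-wt":     [(4.1e5, 1.1e5), (3.8e5, 1.0e5), (4.4e5, 1.2e5)],
    "P-miR-214 + EZH2-3'UTR-mut":    [(9.2e5, 1.0e5), (9.9e5, 1.1e5), (1.0e6, 1.2e5)],
}

def relative_activity(readings):
    """Mean firefly/Renilla ratio across replicate wells."""
    ratios = [firefly / renilla for firefly, renilla in readings]
    return sum(ratios) / len(ratios)

control = relative_activity(raw["P-miR-control + EZH2-3'UTR-wt"])
for condition, readings in raw.items():
    print(f"{condition}: {relative_activity(readings) / control:.2f}-fold vs control")
```

In this toy data the wild-type reporter falls to roughly 0.4 of control while the mutant reporter stays close to 1.0, which is the pattern the assay relies on to argue for direct 3′-UTR targeting.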
To corroborate the role of *EZH2* and *CTNNB1* in miR-214-mediated tumor cell invasion, we rescued the expression of *EZH2* and *CTNNB1* in miR-214 stably transfected HLE cells by transfecting a plasmid carrying wild-type *EZH2* (pLVTHM-EZH2) [@pone.0044206-Lu1] or *CTNNB1* (pCI-CTNNB1, Addgene) [@pone.0044206-Morin1]. The re-expression of *EZH2* and *CTNNB1* was validated by qRT-PCR ([Figure S4](#pone.0044206.s004){ref-type="supplementary-material"}). Cell invasion was partially rescued in miR-214 stably transfected HLE cells by re-expression of *EZH2* or *CTNNB1* ([Figure S4](#pone.0044206.s004){ref-type="supplementary-material"}). These data indicate that miR-214 inhibits cell invasion by inhibiting *EZH2* and *CTNNB1*. Corroborating these results, expression of *EZH2*, *CTNNB1* and *CDH1* in a cohort of HCC patients previously studied in our laboratory by global gene expression profiling [@pone.0044206-Wang1] also indicated that both *EZH2* and *CTNNB1* are significantly up-regulated while *CDH1* is down-regulated in human HCC compared to adjacent histologically normal liver tissues ([Figure S5](#pone.0044206.s005){ref-type="supplementary-material"}). These results were further validated by qRT-PCR with the 20 paired human HCC tissue samples and the 10 samples of normal liver tissues described earlier ([Fig. 4C--E](#pone-0044206-g004){ref-type="fig"}). Silencing EZH2 or CTNNB1 or Overexpressing CDH1 Suppressed HCC Cell Growth and Invasion {#s3d} ------------------------------------------------------------------------------------------------- To investigate the roles of *EZH2*, *CTNNB1* and *CDH1* in HCC, we knocked down the expression of *EZH2* and *CTNNB1* with shRNA and over-expressed *CDH1* with the pcDNA3.1-CDH1 expression vector in SK-HEP-1 cells as described previously [@pone.0044206-Lu1], [@pone.0044206-Onder1], [@pone.0044206-Miranda1]. Silencing of *EZH2* and *CTNNB1* was confirmed by western blot ([Fig. 
5A (i) and 5B (i)](#pone-0044206-g005){ref-type="fig"}). Silencing of *EZH2* significantly inhibited the growth and significantly decreased the ability of SK-HEP-1 cells to invade ([Fig. 5A (ii) and (iii)](#pone-0044206-g005){ref-type="fig"}), while the knockdown of *CTNNB1* only significantly inhibited the growth of SK-HEP-1 cells ([Fig. 5B (ii)](#pone-0044206-g005){ref-type="fig"}). The over-expression of *CDH1* was confirmed by western blot ([Fig. 5C (i)](#pone-0044206-g005){ref-type="fig"}), and the over-expression of *CDH1* in SK-HEP-1 cells significantly inhibited their ability to invade ([Fig. 5C (ii)](#pone-0044206-g005){ref-type="fig"}). Furthermore, silencing of *EZH2* significantly decreased the expression of *CTNNB1* and induced *CDH1* expression ([Fig. 5D](#pone-0044206-g005){ref-type="fig"}). This observation is consistent with previous reports demonstrating that *EZH2* regulates the expression of *CTNNB1* and *CDH1* [@pone.0044206-Cao1], [@pone.0044206-Cheng1]. Silencing of miR-214 Increases the EpCAM^+^ Stem-like Cell Population by Activating the β-catenin Pathway {#s3e} ------------------------------------------------------------------------------------------------ Previous studies have indicated that epithelial cell adhesion molecule (EpCAM) is a biomarker of HCC tumor-initiating cells with stem/progenitor cell features and that EpCAM^+^ HCC cells correlate with tumor progression and invasiveness [@pone.0044206-Yamashita1]. Since EpCAM is a direct transcriptional target of the canonical wnt-β-catenin signaling pathway [@pone.0044206-Ji1], [@pone.0044206-Oishi1], and we had demonstrated that miR-214 can directly or indirectly modulate the expression of *CTNNB1* through *EZH2*, we decided to investigate the effect of silencing miR-214 on the EpCAM^+^ HCC tumor cell population. For this study, we first employed the construct miRZip-214 (System Biosciences) to knock down miR-214 expression in miR-214 stably transfected HLE cells. 
Knockdown of miR-214 expression by the miRZip-214 construct was verified by qRT-PCR ([Fig. 6A](#pone-0044206-g006){ref-type="fig"}). Knockdown of miR-214 by miRZip-214 specifically increased the expression of *EZH2* and β-catenin compared with miRZip-control transfection ([Fig. 6B](#pone-0044206-g006){ref-type="fig"}). Next, we overexpressed *EZH2* by transfecting the vector pLVTHM-EZH2 [@pone.0044206-Lu1] into miR-214 stably transfected HLE cells. The overexpression of *EZH2* by pLVTHM-EZH2 also activated β-catenin, while pLVTHM-control had no effect ([Fig. 6C](#pone-0044206-g006){ref-type="fig"}). Subsequent flow cytometric analysis showed that knockdown of miR-214 or functional overexpression of *EZH2* induced an enrichment of EpCAM^+^ HLE cells ([Fig. 6D (i) and (ii)](#pone-0044206-g006){ref-type="fig"}, respectively). These results suggest that miR-214 can modulate EpCAM^+^ stem-like cells by regulating the β-catenin pathway in HCC cells. Expression of CTNNB1, EZH2 and CDH1 is Associated with HCC Recurrence {#s3f} ------------------------------------------------------------------ We have previously established a gene expression profiling dataset of 50 HCC patients using Affymetrix GeneChips [@pone.0044206-Wang1]. Of the 50 HCC patients studied, 29 developed early recurrent disease (\<2 years) and 21 remained recurrence-free until the last follow-up (non-recurrence). In [Fig. 1B](#pone-0044206-g001){ref-type="fig"}, we demonstrated that miR-214 was significantly suppressed in samples from HCC patients with early recurrent disease, and low miR-214 expression was significantly associated with early recurrent disease and a relatively poorer disease-free survival rate ([Fig. 1C](#pone-0044206-g001){ref-type="fig"}). To study the potential prognostic significance in HCC recurrence of *CTNNB1* and *EZH2*, the direct targets of miR-214, and of *CDH1*, a downstream target, we further analyzed the expression level of *EZH2*, *CTNNB1*, and *CDH1* in this dataset. 
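Survival analyses of this kind typically dichotomize patients at the cohort's median expression value and compare disease-free survival between the resulting groups with the Kaplan-Meier product-limit estimator, S(t) = Π over event times t_i ≤ t of (1 − d_i/n_i). A minimal pure-Python sketch with entirely hypothetical patients (it assumes distinct event times and omits the log-rank test and Cox regression that a full analysis would add):

```python
# Hypothetical cohort: (expression level, months to recurrence or last
# follow-up, 1 = recurrence observed / 0 = censored).
from statistics import median

patients = [
    (12.4, 8, 1), (9.1, 14, 1), (3.2, 60, 0), (15.0, 6, 1),
    (2.8, 48, 0), (7.7, 20, 1), (1.9, 55, 0), (11.2, 10, 1),
]

def km_curve(samples):
    """Kaplan-Meier estimate as (time, S(t)) steps; assumes distinct event times."""
    at_risk, surv, curve = len(samples), 1.0, []
    for t, event in sorted(samples):
        if event:                       # recurrence: survival drops by d/n = 1/n
            surv *= 1.0 - 1.0 / at_risk
            curve.append((t, surv))
        at_risk -= 1                    # both events and censorings leave the risk set
    return curve

cut = median(expr for expr, _, _ in patients)
high = [(t, e) for expr, t, e in patients if expr > cut]   # "high expression" group
low = [(t, e) for expr, t, e in patients if expr <= cut]   # "low expression" group
print("high-expression group:", km_curve(high))
print("low-expression group:", km_curve(low))
```

In a real analysis the two curves would then be compared with a log-rank test and the markers entered into Cox regression (e.g., with R's survival package or Python's lifelines); the sketch only reproduces the median-cutoff grouping and curve-estimation step.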
Besides miR-214 expression, it was observed that the upregulation of *EZH2* and *CTNNB1* and the down-regulation of *CDH1* were significantly associated with early recurrent HCC disease and poor survival. Multivariate Cox regression analysis indicated that tumor recurrence, low levels of miR-214 and *CDH1*, and high levels of *EZH2* and *CTNNB1* were prognostic factors for early HCC recurrence and poor survival ([Table S2](#pone.0044206.s007){ref-type="supplementary-material"}). The expression of *CDH1* was significantly lower, while that of *EZH2* and *CTNNB1* was significantly higher, in HCC samples from patients with early recurrence ([Fig. 7A--7C](#pone-0044206-g007){ref-type="fig"}). When the corresponding median value of the 50 samples studied was chosen as the cut-off point for high and low expression, Fisher's exact test and Kaplan-Meier analysis showed that low *CDH1*, high *EZH2* and high *CTNNB1* expression were significantly associated with poor disease-free survival ([Fig. 7D--7F](#pone-0044206-g007){ref-type="fig"}). Combining *EZH2*, *CTNNB1* and *CDH1* synergistically improved the association with disease-free survival ([Fig. 7G](#pone-0044206-g007){ref-type="fig"}). These data suggest that the expression of miR-214 and its downstream targets are clinically useful indicators of early recurrent HCC disease and survival. Discussion {#s4} ========== The deregulation of miRNAs has been implicated in various human cancers, including human hepatocellular carcinoma. Differential miRNA expression in tumor samples compared to normal samples, or between groups of tumor samples with favourable and poor clinical outcomes, has been used to generate miRNA signatures with potential prognostic and/or predictive value. Nevertheless, the molecular roles of aberrant miRNA expression in human HCC remain largely unexplored. 
In the current study, we have demonstrated that the down-regulation of miR-214 is associated with the invasion and early recurrence of HCC. This finding extends earlier reports demonstrating the frequent down-regulation of miR-214 expression in human HCC tissues and cell lines [@pone.0044206-Gramantieri1], [@pone.0044206-Wong1], [@pone.0044206-Jiang1], [@pone.0044206-Li1], [@pone.0044206-Wang2]. More recently, ER stress has also been shown to negatively modulate the expression of the miR-199a/214 cluster to regulate tumor survival and progression in HCC [@pone.0044206-Duan1]. Besides HCC, miR-214 has also been shown to be deregulated in human ovarian, cervical, and breast cancers [@pone.0044206-Yang1], [@pone.0044206-Qiang1], [@pone.0044206-Yang2], [@pone.0044206-Derfoul1]. In ovarian cancer, miR-214 was shown to induce cell survival and cisplatin resistance by targeting the 3′-untranslated region (UTR) of *PTEN* to suppress its expression, resulting in the activation of the PI3K/Akt signaling pathway [@pone.0044206-Yang1]. In cervical cancer, the ectopic expression of miR-214 could inhibit the proliferation, migration and invasive ability of HeLa cells by targeting *MEK3*, *JNK1* and Plexin-B1 [@pone.0044206-Qiang1], [@pone.0044206-Yang2]. Studies have also reported that miR-214 contributes to the progression and metastasis of melanoma through the suppression of *TFAP2C* [@pone.0044206-BarEli1], [@pone.0044206-Penna1]. These examples illustrate the importance of proper miR-214 function for the maintenance of cellular homeostasis and the optimal performance of cellular processes, and show that miR-214 expression is often perturbed in human cancer. The aberrant expression of miRNAs can be related to the metastatic capability of tumors, offer prognostic value, and point to regulatory signaling pathways. 
Furthermore, miRNAs can coordinately modulate the expression of hundreds of target genes, mainly by negatively affecting mRNA stability and/or protein output. With this mode of gene expression control, a single miRNA can concomitantly influence multiple cellular programs under physiological and pathological conditions [@pone.0044206-Baek1], [@pone.0044206-Bentwich1], [@pone.0044206-Lim1], [@pone.0044206-Selbach1]. Therefore, the prediction and identification of target genes is an important step toward deciphering the molecular roles of misregulated miRNAs. In this study, we have demonstrated that *CTNNB1* and *EZH2* are direct targets of miR-214. Since *CTNNB1* is also a downstream target of *EZH2*, *CTNNB1* can be regulated directly, or indirectly through *EZH2*, to modulate the β-catenin signalling pathway. We demonstrated that both *EZH2* and *CTNNB1* were overexpressed in HCC patients with early recurrent disease. The regulation of the expression of the polycomb protein *EZH2* by miR-214 was first observed in skeletal muscle and embryonic stem cells [@pone.0044206-Juan1]. Subsequently, the overexpression of *EZH2* has been reported in several types of cancer including prostate, breast, bladder, gastric, lung, and HCC [@pone.0044206-Chase1]. Most recently, and in agreement with our present results, it has been reported that the reduction of miR-214 expression in breast cancer cells is associated with increased cell proliferation, invasion, and accumulation of the polycomb EZH2 methyltransferase [@pone.0044206-Derfoul1], and that EZH2 protein expression can be a sensitive diagnostic biomarker for HCC [@pone.0044206-Cai1]. In the present study, we have validated these observations and further demonstrated that suppression of miR-214 expression in HCC can modulate the β-catenin signalling pathway by activating *CTNNB1* and *EZH2* and down-regulating *CDH1* expression. 
Additionally, our clinical data showed that low levels of miR-214 and *CDH1* and high levels of *EZH2* and *CTNNB1* were significantly associated with early recurrent HCC disease and can be predictors of comparatively reduced disease-free survival ([Fig. 7](#pone-0044206-g007){ref-type="fig"}). Although several reports have implicated the aberrant activation and mutation of *CTNNB1*, to the best of our knowledge there has been no report suggesting that *CTNNB1* is a direct target of miR-214 in HCC [@pone.0044206-Zeng1], [@pone.0044206-Dahmani1]. In this study, we demonstrated that the expression of *CTNNB1* was significantly up-regulated in HCC tumors compared to adjacent histologically normal liver tissues. The silencing of *CTNNB1* expression significantly inhibited HCC cell growth and induced the expression of E-cadherin. Previous studies have established that cadherin adhesion is necessary for cell-cell junctional complex assembly [@pone.0044206-Watabe1], [@pone.0044206-Adams1]. Cadherins associate with a growing number of membrane cytoskeletal proteins termed the cadherin/catenin complex [@pone.0044206-Yap1]. Catenins are the best-characterized cadherin binding proteins, and their binding is required for cadherin function. Loss of E-cadherin-beta-catenin adhesion has been shown to be associated with the progression of many epithelial malignancies and is linked to tumor metastasis and poor clinical prognosis [@pone.0044206-Cao1]. Herein, we also showed that the expression of E-cadherin (*CDH1*) is significantly down-regulated in human HCC tissues and that the functional overexpression of E-cadherin in HCC cells significantly inhibited cell invasion. Moreover, the expression of *CTNNB1* and *CDH1* was significantly associated with early tumor recurrence. 
Cancer stem cells (CSCs) are cells within a tumor that possess the capacity to self-renew and maintain tumor-initiating capacity through differentiation into the heterogeneous lineages of cancer cells that comprise the whole tumor [@pone.0044206-Jordan1], [@pone.0044206-Visvader1]. Recent evidence suggests that epithelial cancers, including HCC, are driven by a small sub-population of CSCs. For HCC, it has been reported that EpCAM can be a marker for the putative hepatic CSCs [@pone.0044206-Yamashita1]. One of the characteristics of CSCs is their ability to form floating spheroids under anchorage-independent conditions in serum-free defined media. Colonospheres formed *in vitro* exhibited higher expression of colon CSC markers including EpCAM, and also exhibited the ability to form spheroids under extreme limiting dilution, indicating the predominance of CSCs in colonospheres [@pone.0044206-Kanwar1]. Additionally, colonospheres showed reduced membrane-bound β-catenin and the down-regulation of phosphorylated β-catenin. Since miR-214 can regulate the β-catenin pathway directly or indirectly via *EZH2* [@pone.0044206-Kanwar1], we investigated the potential role of miR-214 in regulating hepatic EpCAM^+^ CSCs. Flow cytometric analysis demonstrated that silencing of miR-214 or functional overexpression of *EZH2* induced an enrichment of EpCAM^+^ HCC cells, suggesting that miR-214 can modulate EpCAM^+^ stem-like cells through the β-catenin pathway. In summary, our data show that miR-214 can provide a tumor-suppressive function in hepatocarcinogenesis by modulating the expression of *EZH2*, *CTNNB1* and *CDH1*. The reduction in the expression level of miR-214 during hepatocarcinogenesis results in the up-regulation of EZH2 and β-catenin and the down-regulation of E-cadherin. 
The ectopic expression of miR-214 in HCC cell lines suppressed cell growth, invasion and stem-like traits *in vitro* and tumorigenicity *in vivo* through the inhibition of β-catenin signaling. Hence, restoring the expression of miR-214 can be explored as an alternative therapeutic strategy against HCC. Supporting Information {#s5} ====================== ###### **Expression of miR-214 in 20 paired HCC tumor tissues was significantly lower compared to 20 matched histologically normal tissues (** ***P*** **\<0.01).** The 2^--ΔΔCt^ values were calculated by normalization to the values obtained with the "matched normal" tissues as the "reference". (TIF) ###### Click here for additional data file. ###### **Expression of miR-214 in transfected or stable cells.** (A, C) Images obtained after P-miR-control and P-miR-214 transfection in HLE and SK-HEP-1 cells. (B, D) Relative level of expression of miR-214 in HLE and SK-HEP-1 cells 48 h after transfection with P-miR-214. (E) Images of SK-HEP-1-miR-control and SK-HEP-1-miR-214 stable cells. (F) Relative level of expression of miR-214 in SK-HEP-1-miR-control and SK-HEP-1-miR-214 stable cells. (TIF) ###### Click here for additional data file. ###### **(A) The modified pLL3.7 plasmid structure and the insert sequence of hsa-miR-214.** (B) The map of the pGL3 control vector. The 3′-UTR sequence or a mutated sequence was synthesized and inserted into the XbaI and FseI sites of the pGL3 control vector. (C,D) RNAhybrid 2.2 analysis predicted the target sequences of miR-214 within the 3′-UTR of EZH2 (C) or CTNNB1 (D) mRNA. (TIF) ###### Click here for additional data file. ###### **Re-expression of EZH2 and CTNNB1 rescues miR-214-related functions.** (A,B) The re-expression of EZH2 or CTNNB1 in miR-214 stably transfected HLE cells was validated by qRT-PCR. 
(C,D) Cell invasion was partially rescued in miR-214 stably transfected HLE cells by re-expression of EZH2 or CTNNB1. (TIF) ###### Click here for additional data file. ###### **Expression of EZH2, CTNNB1 and CDH1 in human HCC samples.** (A--C) The expression of EZH2, CTNNB1 and CDH1 was obtained from a previously published HCC microarray database and is presented as dot plots. The microarray data for the HCC tumors, matched normal and histologically normal liver tissues have been previously deposited in the European Bioinformatics Institute of the European Molecular Biology Laboratory database (<http://www.ebi.ac.uk/arrayexpress/>) and are accessible through the ArrayExpress public database under accession number E-MEXP-84. EZH2 (A) and CTNNB1 (B) were significantly up-regulated while CDH1 (C) was down-regulated in human HCC tissue samples. (TIF) ###### Click here for additional data file. ###### **Primers for qRT-PCR analysis.** (DOCX) ###### Click here for additional data file. ###### **Univariate and multivariate disease-free survival analysis for the 50-sample real-time PCR dataset, based on known clinical parameters.** (DOCX) ###### Click here for additional data file. We thank the NCC Tissue Repository for providing the tissue specimens for this study, Prof. S.S. Jiang from Sun Yat-Sen University for providing the pLL3.7-miR-214 and pLL3.7-miR-control vectors, and Prof. M.L. He from the Chinese University of Hong Kong for providing pLVTHM-EZH2, pLVTHM-shEZH2 and the control vectors. [^1]: **Competing Interests:** The authors have declared that no competing interests exist. [^2]: Conceived and designed the experiments: KMH. Performed the experiments: HPX. Analyzed the data: KMH HPX. Contributed reagents/materials/analysis tools: LLPJO HPX KMH. Wrote the paper: HPX KMH.
ComicsAlliance Reviews ‘Batman’ (1966), Part Two Each week, Chris Sims and David Uzumeri take a look back at one of the most successful and influential comic book movie franchises of all time, in ComicsAlliance’s in-depth retrospective on the Batman films. David: Welcome back to Cinematic Batmanology for the second and final installment of our look at 1966’s famous, amazing high-camp Batman movie! Catwoman, in the disguise of Miss Kitka, baited Bruce Wayne to be captured by the disastrous foursome of herself, Joker, Penguin and Riddler, for the purpose of baiting Batman — who they do not know is Bruce Wayne — into falling into their trap, allowing them to take over the United Nations analogue and therefore the world! Chris: Truly, this is the greatest story of a generation. David: Kick us off, sir! Chris: The second half of the movie picks up where the classic cliffhanger of Bruce Wayne being kidnapped left off, with a newspaper informing us that Bruce and Kitka were “Seized In Brazen Snatch,” which, despite what you may think, is not a truly amazing bit of filth on the part of the writers. I don’t think it is, anyway. David: Adam West is truly amazing in this scene, getting his mack on to an extreme extent. This is likely because he really, really wants to f*** Lee Meriwether. Chris: Bruce’s reactions do seem to be pretty extreme: He wakes up in the United Underworld headquarters and — because in grand DC Comics tradition, even the World’s Greatest Detective can’t recognize a woman once she puts on a domino mask — demands to know what they’ve done with Kitka. Then, he tells them that if they’ve hurt her, HE WILL KILL THEM. Chris: It’s not even an implication. He literally says “If you’ve harmed that girl, I’ll kill you all.” Keep in mind that he has known this girl for exactly one day. David: Yeah, at this point Bruce Wayne is seriously pissed off at them for taking Kitka, even though he kind of intentionally put her in danger of this exact situation. 
Chris: All of the Joker’s machinations in The Dark Knight couldn’t get Batman to break his moral code. In Batman ’66, Lee Meriwether does it in 24 hours, more or less by accident, just by being Lee Meriwether. Can you imagine if it was Julie Newmar? He would’ve started murdering puppies with his bare hands. David: He would have actually killed Romero, Meredith and Gorshin. Like, Adam West would have killed them on the set. Chris: It’s also worth noting that the villains are getting seriously annoyed by the fact that Batman hasn’t shown up to rescue Bruce Wayne and get launched into the arms of an exploding octopus, too, but they never manage to put these two facts together. Bruce demands to see Kitka, and Catwoman — who at this point is acting as the leader of the group, even though this is the Riddler’s plan — tells them to blindfold him and lead him down “the labyrinthine path” to where they’ve stashed her. I loved this when I was a kid, the fact that the villains were running this huge con to make Batman think they were in this crazy lair, when they’re really just hanging out in a studio apartment over a dive bar. David: I’m pretty dumb, because I’ve honestly been thinking that this room was in the submarine, even though we saw it earlier above the bar. The movie totally fooled me. Which makes me feel REALLY dumb, since now I understand why they put Schmidlapp in the room with the fake sea. I always wondered why they didn’t have him in the real sea, because they were in a submarine. Readers, you will all want to have found me at NYCC and kicked me in the dick for this, because I am a legitimate idiot. Except that NYCC’s nonexistent cellphone coverage would have made me invisible. HA! 
David: This, again, is where West really shines. He’s got so much conviction every time he refers to the villains in that angry tone. And I also have to give him credit for never laughing at Meriwether’s ridiculous Russian accent. Chris: There’s only one scene where I’ve seen a Batman ’66 cast member come close to breaking, and it happens in my favorite episode ever: the three-parter from Season 3 where Batman, Robin and Batgirl head across the pond to Londinium. There’s a part where Batgirl’s chained up (again with the bondage!) and Batman is freeing her by going at her chains with this gigantic file, and he does it in what I can only describe as an extremely sexual way. Like, he’s just hammering away at the chains, and he even gives Yvonne Craig this amazing nostril-flaring sex face, and she almost cracks up. David: In the comments, someone claimed that West and Ward actually spent the infamous Siamese Human Knot scene whispering obscenities to Craig, which kind of kills the humor of that for me if it’s true. Chris: Here, Wayne apologizes to Kitka for getting her into this mess — unnecessary since it was her who was getting the death threats, but pretty gentlemanly nonetheless — and then reveals that he has a secret weapon: A radio transmitter strapped to his arm that he’s completely forgotten about until this very moment! As he says in one of my favorite lines of the film, “Capitalists like myself who carry large sums of money often have such contrivances.” David: You see, it’s lines like that that make me question the sanity of anyone who calls this movie dumb. This movie is not only astonishingly self-aware, it’s incredibly funny about it. Nobody can hear that line and not imagine the writers’ room cracking up when it was read out loud. The movie is genuinely very clever. 
Chris: You might be an idiot, Uzi, but your statement about Batman being a big kid was pretty insightful: Batman talks like a kid imagines a really smart guy talks, the same way that a kid talks when he’s pretending to be a grown-up, dropping five-dollar words and telling girls to “wiggle around back to back” to get at their radio transmitters. David: This is the 1960s equivalent of nerds on the Internet reading pick-up artist websites and books. Batman totally loves Neil Strauss. Chris: The rest of the United Underworld rushes in to get Wayne before he can get to his transmitter, but it’s all revealed to be a pretty clever ruse: As soon as they untie Millionaire Playboy Bruce Wayne, he goes completely sickhouse on the bad guys. This is not a dude pretending not to be Batman. David: To be fair, this is also a dude in a movie where he could just claim “oh, Batman taught me how to fight” and nobody would ever suspect he was actually Batman. Disbelief isn’t just suspended, it’s bungee-jumping from space. We also, gleefully, actually get to see Joker’s ridiculous springloaded jack-in-the-box onto an exploding shark trick performed, which is hilarious with the 1960s effects used. Chris: Bruce unsuccessfully tries to find Miss Kitka for reasons that should be obvious, then apparently just decides that she’s on her own and climbs out of a window, performing a completely unnecessary but completely awesome dive into Gotham Harbor on his way. He returns to Wayne Manor (a journey that lasted long enough for his tuxedo to dry) and informs Commissioner Gordon that he escaped “with the aid of Batman.” Bruce… it’s called a secret identity. David: Gordon in this movie is not only dumb, he’s also completely useless. Does he accomplish anything other than holding a press conference? Chris: In the early episodes, there’s a real attempt made to characterize the GCPD as actually trying to capture the arch-villains, and only calling in Batman as a last resort. 
Then, about ten episodes in, they just completely give up and start calling that dude immediately. “I’m afraid there’s trouble, Batman! A light’s gone out in my office and Chief O’Hara misplaced the stepladder! We could be looking for it all afternoon if you don’t get here quickly!” “Sure an’ ’tis a dark day, Caped Crusader!” David: I’m surprised O’Hara never, ever made it over to the comics. Did he even appear in flashback in Morrison’s run? I know Aunt Harriet basically existed in the form of Aunt Agatha… Chris: He actually shows up in Jeph Loeb and Tim Sale’s Dark Victory, where he meets a quick and ignoble death. Chris: It’s not something I’d recommend. Anyway, while it is insane for Bruce to talk about hanging out with Batman when there were other people around to see that it was just him beating the crap out of goons, there are two things I absolutely love about this scene. First: Bruce Wayne walks into his house after being kidnapped, with his tux all torn up, and just breezes past the commissioner to tell Dick that they’re “late for a demonstration at the fish hatchery.” David: It’s a riddle, you see. They need to see a demonstration of their physical prowess at the place where the villains are being fishy. I don’t think I matched this movie’s twisted deductive insanity well, though. Chris: Second, they turn around and head to the Batcave as soon as Gordon leaves, and Burt Ward slides down the bannister. I thought that was just the coolest thing ever when I was a kid. Chris: I have no idea why it stuck with me as much as it did, but I still love it. It’s a really nice reminder that Ward’s playing a teenager. Meanwhile, back at the United Underworld, the villains decide to go with the Penguin’s plan for killing Batman, and 57 minutes into the movie, the actual plot has finally arrived. David: We finally see Schmidlapp’s invention: a demoisturizer that turns you into a pile of dust. 
They test it on five goons, who just stand there as the Penguin kills them all in order, without even attempting to escape after noticing that it, well, pretty much kills you, while these douchebags laugh. Chris: Ah, but does it kill them, Uzi? Or is this just the next step in their sinister plot? Besides, what do you want them to do, run away and take the chance of stepping on a hidden jack-in-the-box that tosses them into the path of an exploding barracuda? David: We’ll find out after Batman and Robin decide to return to their lair, giving us a big-screen version of the series’ infamous ninety-degree angle climbing trick! Chris: Robin and Batman give kids a strong moral lesson about the dangers of alcohol, except that they basically say that people are used to seeing crazy people in costumes because they drink a lot. So hey, kids! Don’t drink alcohol, or you’ll totally see Batman! We now come to what is probably the most famous sequence of the film, even more than the Shark-Repellent Bat-Spray. David: Some days, you just can’t get rid of a bomb, kids. The entire scene is hilarious madcap slapstick, as Batman runs around the dock with the bomb, unable to find anywhere to throw it where the explosion wouldn’t kill some form of living thing. That’s what puts Batman above the villains: he won’t explode marine life. That’s why he’s a friend of porpoises. Chris: This scene is fantastic for a lot of reasons — the sheer number of things they throw at Batman to keep him running with that bomb, from nuns to ducklings to a couple necking to a woman with a baby carriage who seems to actually follow him around — but my favorite is that Robin pretty much tells Batman he should’ve just blown up the bar because drunks deserve to die. That kid is ice cold. David: It’s a really famous scene, but I don’t get what’s up Robin’s ass, unless they’re intentionally writing him to discourage teenage drinking for some reason. I mean, he’s a prep school kid with a playboy guardian.
Chris: He also refers to them as “riff-raff,” thus confirming his status as an entitled young plutocrat. But as Batman and Robin are debating the worth of the 99%, they’re suddenly approached by the Penguin, who has disguised himself as Commodore Schmidlapp. Sort of. David: You’d think he’d make at least a token attempt to stifle the giveaway waugh. But my favorite part of this scene is that Batman and Robin figure out he’s Penguin IMMEDIATELY, and then still take him with them to the Batcave. Chris: Which is all part of his plan! That’s what’s great about this movie: Everyone from the heroes to the villains all have these crazy plans involving pretending to be someone else that are built entirely around people seeing through their lies. Batman and his radio transmitter were just a bluff to get his hands free for a beatdown, and now the Penguin’s ascot and Michael McDonald cap are just an attempt to get into the Batcave. Because this is a Batman who needs solid evidence before he starts punching out a guy who he just literally threatened to murder 10 minutes ago. David: With his mobile fingerprint lab in the Batmobile! I have to admit, I’d totally forgotten the resolution of the dehydration situation in the Batcave, but I’d also totally forgotten the absolutely incredible science Batman starts dropping at the end of the scene. I’d COMPLETELY forgotten about this bit, and I was seriously cackling in my chair, since while the movie’s been ridiculous so far, this may be Batman’s greatest monologue in the film. Chris: Yes, once he’s gained access to the Batcave, the Penguin pulls five dehydrated pirates out of his pants, adds water, and sets them to the task of killing Batman and Robin, because that is just how this movie rolls. 
David: That’s pretty great, but after a very short fight scene where all the goons immediately disappear and the Penguin runs away, it’s revealed that the Penguin’s used the heavy water they use to power their atomic core (seriously, Batman keeps a nuclear reactor in the Batcave) and that they’ve now become… antimatter. Chris: Here’s my question: Why in the hell is there a switch for Heavy Water from the atomic pile on the Drinking Water Dispenser?! Wayne Industries must’ve gotten so many lawsuits over this. David: I love to drink heavy water all the time! That’s how I become the Flash, right? Chris: Since he has one quarter of the United Underworld trapped in his Anti-Crime Basement, Batman decides that the best course of action would be to pretend to believe he’s Commodore Schmidlapp who has been operating under a post-hypnotic suggestion, and then let him steal the Batmobile so that they can follow him with a motorcycle that they apparently leave hidden on the side of the highway under a fake bush. Chris: Seems like a pretty good idea to me. David: Man, you really have nothing to say about the antimatter?! Or his revelation that the goons will return… in another universe? Chris: What is there to say? There are mysteries of the universe that even we, with all our science, cannot understand, old chum. David: Bruce Wayne seriously formulates the worst plans in this movie. But they always work. Chris: I think technically, that makes them the best plans. He’s like Cobra Commander that way. He and Robin take the camouflaged Batcycle to the airport, where Robin’s sidecar splits off into a “go-kart,” then hop in their helicopter to hunt down the bad guys and their submarine, where Lee Meriwether is contorting to scratch her back. David: I really, really love the four-way periscope they use with the specifically labeled sides for each one of them. Also, that Penguin’s logo is just a black triangle and a white triangle. Chris: It’s Minimalist Penguin! 
Gorshin is incredible in this scene, too. Of all the villains, he’s the one that’s most recognizable to the way we see him today, and the ’66 era — and this movie in particular — had a real strong hand in shaping him. He’s arrogant — he says there’s no way the Penguin could’ve succeeded in killing Batman and Robin — and manic and lives to match wits with Batman. Which, in this particular instance, involves shooting yet another ICBM at the Batcopter. David: At least they didn’t repeat the stock footage! Thankfully, however, there’s a foam rubber show going on in the area with a huge mat full of raw material for the Batcopter to safely land on. I’m serious. Chris: The only thing I find unbelievable about this scene is that FoRubWhoSaCon wasn’t made into an annual event. David: I love how the way Batman and Robin get out of deathtraps becomes more and more ridiculous with each iteration of the movie. It’s completely unafraid to pull utterly random deus ex machinas, between this and the flying porpoise. Chris: Batman and Robin are hit with two more riddles, and since we’ve already been through the mind-boggling apophenia the first time we dealt with this, I’ll skip to the highlights: Applesauce represents unification, and an egg is nature’s perfect symbol of hope for the future. Remember that next time you have breakfast. David: Not only that, but this somehow means the United World headquarters, which the villains’ submarine, now with Penguin, is approaching. There’s an AMAZING bit in this scene where Catwoman starts hissing and growling and purring and shaking her hips while looking through the periscope, and the pirate in Penguin’s spot just does this slow astonished stare at her. I seriously went back and rewatched it like three times. Chris: It is pretty amazing. While Batman and Robin run a three-minute mile to the United World headquarters, the bad guys have already arrived in their submarine with the Dehydrator. 
They zap the representatives from all over the world into powder, and end up escaping with the entire Security Council. Their evil plan has now come to terrifying fruition! David: The Penguin personally Solid Snakes into the United World HQ, and gases everyone until they all fall onto each other like arches. It’s amazing. I also continue to love the villains’ insistence on wearing masks in public, like anyone’s going to recognize the Joker. The world leaders are also all completely oblivious to the dehydrator’s attack because they’re too busy arguing. And, inexplicably, they all dehydrate into different colors of dust. Once they put them into capsules, they’re practically like those little foam dinosaurs that expand in water. Chris: Wouldn’t our world be a better place if international affairs were settled by size-changing dinosaurs? I think so. After sweeping the representatives up into individual test tubes, the villains make their escape when Catwoman threatens to kill Miss Kitka if Batman makes a move to stop her. So just in case you were wondering: Yes. Lee Meriwether’s hotness outweighs the actual possibility of nuclear annihilation. David: Well, duh! As the villains run to the submarine, Batman and Robin run after them in the Batboat, outrunning an ICBM pointed at them with the power of stock footage by jamming the homing frequency. The villains are somehow surprised that Batman and Robin know how to jam a missile. Chris: You have to imagine that the villains are getting worried at this point. I mean, they’ve already hit him with a nuke once, and he just keeps coming. Batman don’t shiv. David: Batman points out that they only need to make the submarine surface, not destroy it, since they don’t want to hurt the world leaders inside. Which basically means he’s totally okay with killing all four villains and their goons otherwise. Chris: Hey man, he warned them not to harm Miss Kitka. 
After referring to Catwoman as a “feline floozy,” which is amazing, Penguin commands the sub to dive, and Batman “circles them at full-thrust Bat-Speed,” which I’m going to assume is pretty fast. And then Robin pulls out a bazooka. Chris: Excuse me. Bat-Zooka. David: It attacks it with sonic waves, and as he shoots it we actually see the capsules with the world leaders start to topple, so Batman and Robin are being pretty terrible at their “saving world leaders” jobs. Chris: You can’t expect them to make an omelette without breaking a few eggs. Especially since Alfred does all the cooking. David: They finally get the submarine to surface, and then Batman and Robin board the boat and fight the villains, leading to a protracted fight sequence featuring the film’s only use of the show’s famous visual onomatopoeia cues. Chris: This is really the only fight scene with Batman and Robin, isn’t it? Bruce fought the gang back at their hideout, but that’s it. Chris: Any remaining doubt our readers may have had as to whether Dozier and Semple knew exactly what they were doing should be gone now. David: They finally beat everyone off the boat and chase Catwoman inside, where she trips and falls and Batman finally discovers she’s Kitka, leading to him staring at the camera for a few minutes while mournful opera music plays. “Holy heartbreak, Batman,” Robin helpfully adds. I’d punch that kid in the face. Chris: That’s another one of those things that I completely bought into as a kid. It’s a heartbreaker. Even if she was a commie. Chris: Well if there’s one thing we know about Batman, it’s that he deals pretty well with emotional trauma. David: Then Schmidlapp shows up and topples the entire container of world leader dust and sneezes on them to combine them all. Thank god Batman and Robin have a Super Nuclear Dust Separator!
With a computer link that allows them to consider “various national and ethnic factors.” Robin considers changing the personalities of the world leaders and playing god to cause world peace, but Batman says man cannot play with nature. I’d totally forgotten how absolutely nuts this movie got at the end. Chris: It really does. They went all out to do an adventure that was bigger than anything they’d done on the show, and in doing so, they amped up the craziness too. Also, Batman wears his utility belt over his surgical smock. This movie is genius. David: Batman and Robin separate out the dust and rehydrate them with “light water – soft” in the United World headquarters as the entire world waits to see if it’s successful. Everyone has a “solemn moment” before Batman turns on a water faucet connected to a reservoir on the table leading to the dust, and then the leaders are brought back and continue arguing as if nothing had just happened. Also, the dude from Nigeria is speaking Spanish for some reason. Chris: Oh Uzi. Don’t you see? Those various national and ethnic factors got shuffled up! Maybe now they’ll have a little more understanding of each other. In the world of Batman ’66, world peace was achieved when the Joker shot the UN with a laser gun that turned them into powder that got sneezed on by an extremely gullible sea captain. David: . . . I never realized this. Oh my God. Chris: Seriously? David: Seriously, how do I understand Grant Morrison’s Batman, but not Batman ’66? Chris: You must’ve thought this was a hell of a down ending when you were a kid. David: Ha! HIGH POINTS David: Much like Dark Knight, basically every actor here, except in a different way. They’re all fantastic at the over-the-top, campy theatrical antics.
Chris: I love basically everything about this movie, but as our readers might’ve been able to guess from the subtle hints I’ve dropped over the past two weeks, I’m particularly fond of Lee Meriwether, who somehow managed to be almost as gorgeous as Julie Newmar. David: Almost. To Chris Sims, Thanks for Everything, Lee Meriwether. Chris: But really, there are so many great moments: The shark, the bomb, the bad guys using nuclear missiles for airborne graffiti rather than destruction, Robin’s bizarre shift into total elitism… David: The optimistic ending of personality transplants between the leaders of the free world… This is really just a really big episode of the TV show. Every single element of it was… oversized. The fights, the locations, the running time, the number of villains. But besides the dropping of the cliffhanger, it’s very similar structurally and in style. Chris: And even then, there are plenty of cliffhanger moments in the series. We broke it up with Bruce Wayne getting abducted, but they could’ve just as easily capped a show-sized chunk of it with the torpedo heading to Batman, or the helicopter falling out of the sky. David: I dunno. Schmidlapp, maybe? It’s really hard to come up with one. The use of stock footage for the Polaris missiles and around-the-world reactions was kind of distracting, I guess, although all of this is part of the movie’s appeal to a degree. Chris: As much as I love this movie, and as dense as it is with action, it does take a hell of a long time to actually get anywhere. We dinged Batman Returns pretty hard for the seams showing in the script, and the first half hour is just as bad in that respect. The good guys figure out who the bad guys are right after the opening sequence, but then spend 20 more minutes figuring out who the bad guys are. There are places where it feels like padding that could’ve been used for more stuff with the villains interacting, which is where this movie really shines. But honestly?
That’s stretching to find something that’s not great. David: The dehydrator is introduced really late in the game, as well. I mean, if I wanted to sit here and find plot holes and parts that don’t make sense I could, but that’s not really the point of the movie, it’s about … conveying a feeling. Chris: Exactly. I can’t watch this movie without being swept up in it. FINAL THOUGHTS David: This was a hoot. Chris: There’s one thing about this movie that I’m sure isn’t intentional, but if you watch this show as much as I have, there’s this big connection that almost had to be completely unintentional. David: Hrm? Chris: Well like I said, when I was a kid, I totally bought Batman’s heartbreak when Miss Kitka is revealed to be Catwoman. He’s in love with her — in a kid’s show sort of conception of love, where you can be madly taken with someone that you’ve known for three hours — and when he finds out that she’s not real, it’s so devastating. It really is like the bad guys killed Kitka, but almost worse. That’s the way that scene hit me when I first saw it, and so it stuck with me. Batman will never love again. With me so far? David: Yeah. Chris: Well, the movie aired between the first two seasons of the TV show, and if you look at it that way, you can see that it’s a turning point for Bruce Wayne. However, there’s an episode near the end of Season 2 where Bruce Wayne is dating a beautiful socialite named Lisa Carson. Batman rescues her from King Tut, and at the end of the episode, she invites him into her apartment. He tells her that he’s no good for her, that she’s wasting her time with him, and tries to leave with a handshake. She ends up kissing him goodbye, and he goes into the apartment with her for “milk and cookies” — after he turns to the camera and says “Man cannot live by crimefighting alone.” David: Whereas here, he says that being betrayed like this is all part of a crimefighter’s lifestyle. Chris: Exactly!
So it’s these two huge changes in the evolution of this portrayal of the character! And you want to know the kicker? David: What? Chris: Lisa Carson is played by Lee Meriwether. David: HA! Chris: Seriously, these are the things I think about in my spare time. David: Well, I thought the movie was a lot of fun. We can’t sit here and riff on the War on Terror in its aftermath, but it’s a tightly crafted, very well designed, very funny movie. Chris: It really is, and it deserves a lot more credit than certain fans want to give it. There was such a huge “fan” backlash against Batman ’66, when in reality, all it does is prove that Batman can work in multiple kinds of stories — and this one was for the lovers of adventure. David: No offense to the other lovers out there! Chris: And with that, we are officially done with our look at the Batman films. We hope you’ve enjoyed it, and if you have, don’t worry. We’ll be back next week with a special October surprise from the world of cinematic super-heroics.
loading/saving interactions with the databases into django.db.backend. This helps external db backend writers and removes a bunch of database-specific if-tests in django.db.models.fields. Great work from Leo Soto. git-svn-id: http://code.djangoproject.com/svn/django/trunk@8131 bcc190cf-cafb-0310-a4f2-bffc1f526a37
/* * Copyright 2009 The Closure Library Authors. All Rights Reserved. * * Use of this source code is governed by the Apache License, Version 2.0. * See the COPYING file for details. */ /* * Standard styling for buttons created by goog.ui.FlatMenuButtonRenderer. * * @author attila@google.com (Attila Bodis) * @author tlb@google.com (Thierry Le Boulenge) */ .goog-flat-menu-button { background-color: #fff; border: 1px solid #c9c9c9; color: #333; cursor: pointer; font: normal 95%; list-style: none; margin: 0 2px; outline: none; padding: 1px 4px; position: relative; text-decoration: none; vertical-align: middle; } .goog-flat-menu-button-disabled * { border-color: #ccc; color: #999; cursor: default; } .goog-flat-menu-button-hover { border-color: #9cf #69e #69e #7af !important; /* Hover border wins. */ } .goog-flat-menu-button-active { background-color: #bbb; background-position: bottom left; } .goog-flat-menu-button-focused { border-color: #bbb; } .goog-flat-menu-button-caption { padding-right: 10px; vertical-align: top; } .goog-flat-menu-button-dropdown { /* Client apps may override the URL at which they serve the sprite. */ background: url(//ssl.gstatic.com/editor/editortoolbar.png) no-repeat -388px 0; position: absolute; right: 2px; top: 0; vertical-align: top; width: 7px; }
Q: How do I add OpenCV to LD_LIBRARY path in linux? I used this link to install OpenCV.

What works:
1. OpenCV works fine with python (running from terminal).
2. I can import opencv libraries in a single C++ program.

What does not work: when the code is spread across multiple files and you need to build it using CMake.

Here's my CMakeLists.txt:

1. cmake_minimum_required(VERSION 3.9)
2. project(Image_processing)
3. set(CMAKE_CXX_STANDARD 14)
4. find_package(OpenCV REQUIRED)
5. include_directories(/home/user/opencv/build)
6. add_executable(main main.cpp)
7. target_link_libraries(project_name ${OpenCV_LIBS})

Errors (these can be regenerated by commenting out lines 4, 5 and 7 in the CMake file above):

undefined reference to OpenCV functions.

CMake Error at CMakeLists.txt:7 (target_link_libraries): Cannot specify link libraries for target "Image_processing" which is not built by this project.

A: Correct it with:

cmake_minimum_required(VERSION 3.5)
project(Image_processing)
set(CMAKE_CXX_STANDARD 14)
find_package(OpenCV REQUIRED)
include_directories(${OpenCV_INCLUDE_DIRS})
add_executable(main main.cpp)
target_link_libraries(main ${OpenCV_LIBS})

The first argument of target_link_libraries must be an actual target built by this project (here, the executable main created by add_executable), not an arbitrary name. Using ${OpenCV_INCLUDE_DIRS}, which find_package(OpenCV) sets, also avoids hard-coding the build directory path.
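As for actually adding OpenCV to LD_LIBRARY_PATH (the question in the title): if the OpenCV shared libraries were installed to a non-standard prefix, the runtime linker has to be told where they are. A minimal sketch, assuming the libraries landed in /usr/local/lib (an assumption, substitute whatever install prefix you actually used):

```shell
# Prepend the OpenCV install's lib directory to the runtime linker search path.
# /usr/local/lib is an assumption -- use the prefix OpenCV was installed to
# (e.g. the CMAKE_INSTALL_PREFIX chosen when building it).
export LD_LIBRARY_PATH=/usr/local/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
```

Adding that line to ~/.bashrc makes it persistent across sessions; alternatively, if the install prefix is already covered by /etc/ld.so.conf, running sudo ldconfig once refreshes the linker cache and no environment variable is needed.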
Eugene McCray, director of the CDC's Division of HIV/AIDS Prevention, framed the report as evidence the campaign is needed. “After a decades-long struggle, the path to eliminate America’s HIV epidemic is clear,” McCray said in a statement. “Expanding efforts across the country will close gaps, overcome threats, and turn around troublesome trends.”

The CDC said rural areas, the South, and "disproportionately affected populations like African-Americans and Latinos" could most benefit from expanding efforts.

“Now is the time for our Nation to take bold action. We strongly support President Trump’s plan to end the HIV epidemic in America,” CDC Director Robert R. Redfield said in a statement. “We must move beyond the status quo to end the HIV epidemic in America.”

The president's proposal would increase access to medications that can treat and prevent HIV and focus prevention efforts in communities with the highest rates of HIV. Federal health officials said the government hopes to reduce diagnoses of HIV by 75 percent within five years and 90 percent within 10 years.
--- abstract: 'We present the general structure of proper Ricci Collineations (RC) for type B warped space-times. Within this framework, we give a detailed description of the most general proper RC for spherically symmetric metrics. As examples, static spherically symmetric and Friedmann-Robertson-Walker space-times are considered.' author: - | J.Carot[^1]\ Departament de Física\ Universitat de les Illes Balears\ E-07071 Palma de Mallorca\ Spain - | L. A. Núñez [^2] and U. Percoco [^3]\ [*Centro de Astrofísica Teórica*]{}\ [*Departamento de Física, Facultad de Ciencias,* ]{}\ Universidad de los Andes, Mérida 5101, Venezuela. title: | Ricci Collineations\ for\ type B warped space-times --- Introduction ============ The purpose of this paper is to study Ricci Collineations (RC) for a certain class of space-times, namely type B warped space-times and, in particular, spherically symmetric space-times. Collineations are symmetry properties of space-times. Katzin[* et al.* ]{}[@KatzinEtal69] define them as those vector fields $X$ that leave the various relevant geometric quantities in General Relativity invariant under Lie dragging in their direction. The best known examples of collineations are the [*Killing vectors*]{} ([*Motions*]{}), i.e. vectors that satisfy: $$\pounds _X\ g_{ab}=0 \label{collkill}$$ Other interesting symmetries are defined analogously; the most frequently studied cases have been:\ Conformal Motions: $$\pounds _X\ g_{ab}=2\sigma g_{ab} \label{collconf}$$ Affine Collineations: $$\pounds _X\ \Gamma _{ab}^c=0 \label{collafin}$$ Curvature Collineations: $$\pounds _X\ R_{abcd}=0 \label{collcurv}$$ Ricci Collineations: $$\pounds _X\ R_{ab}=0 \label{collricc}$$ Contracted Ricci Collineations: $$g^{ab}\pounds _X\ R_{ab}=0 \label{collcric}$$ Here $\pounds _X$ stands for the Lie derivative operator and the indices $a,b,\ldots$ run from 1 to 4.
The well-established connection between [*Killing Vectors* ]{} and constants of the motion has encouraged the search for general relations between collineations and conservation laws. Collineations, other than [*Motions*]{}, can be considered as non-noetherian symmetries and can also be associated with constants of the motion. [*Affine Collineations*]{} have been shown to be related to conserved quantities [@HojmanEtal86], and this property has been used to integrate geodesics of the Robertson-Walker metric [@BedranLesche86]. As far as we know, the first [*Curvature Collineation*]{} was found by Aichelburg [@Aichelburg70] for pp-wave metrics, and the relationship of such collineations to first integrals of the geodesic equations was extensively studied in Ref. [@KatzinEtal69]. Particular types of [*Ricci*]{} and [*Contracted Ricci Collineations*]{} for the Robertson-Walker metric have also been found and shown to be related to particle number conservation [@GreenEtal77]. Also, considerable attention is being paid to the related problem of symmetry inheritance in General Relativity [@ColeyTupper89]. Collineations have been studied in connection with fluid spacetimes [@GreenEtal77], [@OliverDavis77], [@TsamparlisMason90], [@Duggal92] and some specific examples have been given for the [*C-metric* ]{} [@AulestiaEtal84], Robertson-Walker Spacetimes [@NunezEtal90], and Gödel-type manifolds [@MelfoEtal92]. It is clear from the above definitions that [*Motions*]{} are particular cases of [*Affine Collineations*]{}, [*Affine Collineations*]{} are particular cases of [*Curvature Collineations*]{}, and so on. It is therefore possible to construct an “inclusion diagram” connecting these symmetries. One such diagram, that includes these and other related symmetries, is presented in Ref. [@KatzinEtal69]. A collineation of a given type is said to be [*proper*]{} if it does not belong to any of the subtypes.
Clearly, in solving any collineation equation, with the obvious exception of the [*Killing equation*]{}, solutions representing improper collineations can be found. Therefore, in order to relate a symmetry to a particular conservation law, and its corresponding constant of the motion, the “properness” of that collineation must be assured. Some computer algebra tools to check the properness of Ricci and other collineation vectors are under development [@MelfoNunez92], [@BertolotiEtal95a]. We assume that RCs are smooth vector fields. Although this is not necessarily so, by restricting ourselves to this case, we ensure that they form a Lie algebra with the usual bracket operation. Such an algebra naturally contains that of Special Conformal Killing Vectors (SCKV) (see reference [@ColeyTupper89]), which in turn contains that of Homothetic Vector Fields (HVF) and therefore the isometry algebra of all Killing Vector Fields (KV). Regarding the Ricci tensor, we shall consider that it is non-degenerate (i.e., rank 4), and this in turn ensures that the Lie algebra of RC is finite dimensional, its maximal dimension being 10 (9 being forbidden by Fubini’s theorem). For further information on issues concerning dimensionality and degenerate Ricci tensor see, for instance, references [@CarotEtal94] and [@HallEtal95]. The paper is organized as follows: in section 2 we describe the basic features of the RC in type B warped space-times; then, in section 3, we consider spherically symmetric space-times as a particular case of these, studying two distinct cases, namely: static solutions and Friedmann-Robertson-Walker space-times.
Type B warped space-times ========================= Suppose that $(M_1,h_1)$ and $(M_2,h_2)$ are a pair of pseudo-Riemannian manifolds, and $\Phi$ is a real valued function on $M_1$ (‘warping function’); one can then build a Lorentz manifold, $(M,g)$, by setting $M=M_1 \times M_2$ and $g=\pi_1^\ast h_1 \otimes \Phi^2 \pi_2^\ast h_2$, where the functions $\pi_1$ and $\pi_2$ are the canonical projections onto the factors of the product. $(M,g)$ is then called a ‘warped product manifold’. If dim $M=4$, we say that $(M,g)$ is a ‘warped space-time’ and one can classify them according to the respective dimensions of the factor (sub-)manifolds $M_1$ and $M_2$. We shall refer the reader to [@CarotDaCosta93] and references cited therein for a general discussion, restricting ourselves hereafter to the case dim $M_1=$ dim $M_2=2$, namely, warped space-times of class $B$. Although all our considerations will be local, see [@Haddow] for some remarks on globally warped space-times. It can be shown that for type $B$ warped space-times, a coordinate chart exists (adapted to the manifold product structure) such that the metric takes the form $$\label{warped} ds^2=h_{AB}\left( x^D\right) \ {\rm d}x^A\ {\rm d}x^B+ \Phi ^2\left( x^D\right) \ h_{\alpha \beta }\left( x^\gamma \right) \ {\rm d}x^\alpha \ {\rm d}x^\beta$$ where the indices $A,B,\ldots $ run from 1 to 2 and $\alpha ,\beta , \ldots $ from 3 to 4. The functions $h_{AB}$ and $h_{\alpha \beta }$ are the component forms of $\pi_1^\ast h_1$ and $\pi_2^\ast h_2$ in the local charts $\{ x^A \}$ and $\{ x^\alpha \}$, which are in turn adapted to $M_1$ and $M_2$ respectively.
The Ricci tensor of such a space-time takes then the following component form in the above chart: $$\label{ricci1} R_{AB}=\frac 12R_1\ h_{AB}-\frac 2\Phi \ \Phi _{A;B}\ ,$$ $$\label{ricci2} R_{A\alpha }=0\ ,$$ $$\label{ricci3} R_{\alpha \beta }=\frac 12\left( R_2-\left( \Phi ^2\right) _{;A}^A\right) h_{\alpha \beta }\ \equiv \ Fh_{\alpha \beta } \ ;$$ where $F=\frac 12\left( R_2-\left( \Phi ^2\right) _{;A}^A\right)$ and $R_1$ and $R_2$ are the Ricci scalars associated to the 2-metrics $h_1$ and $h_2$. The semi-colon indicates, as usual, the covariant derivative with respect to the space-time metric. Let now $X$ be an RC on $M$, and define its vertical and horizontal components, $X_1$ and $X_2$, as follows (see [@CarotDaCosta93]): $$X_1^a\equiv g^{ab}\left( \pi _1^{*}h_1\right) _{bd}X^d\ \ \ \ X_2^a\equiv X^a-X_1^a \label{components}$$ In the above adapted chart, one readily sees that $X_1^A=X^A,$ $\ X_1^\alpha =0\ $, and $X_2^A=0,\ X_2^\alpha =X^\alpha \ $. On account of (\[ricci1\]), (\[ricci2\]), (\[ricci3\]) and (\[components\]), equation (\[collricc\]) is now equivalent to: $$\label{set} R_{AB,D}X_1^D + R_{AD}X^D_{1,B} + R_{DB}X^D_{1,A}=0 \ ,$$ $$\label{vuit} R_{AD}X^D_{1,\alpha} + Fh_{\alpha \beta}X^\beta_{2,A}=0 \ ,$$ $$\pounds _{X_2}h_{\alpha \beta }=2\Psi h_{\alpha \beta } \label{nou}$$ where $$\Psi =-\frac 12{\frac{F_{,D}X_1^D+F_{,\gamma }X_2^\gamma }F} \label{deu}$$ Take now $p_1\in M_1$ and consider the manifold ${\tilde{M}}_2\equiv \{p_1\}\times M_2\cong M_2$ (see [@CarotDaCosta93]); equation (\[nou\]) is then a statement that $X_2$ is a Conformal Killing Vector (CKV) of $({\tilde{M}}_2,h_2)$, and therefore it can be re-written as $$X_{2\alpha /\beta }+X_{2\beta /\alpha }=2\Psi \ h_{\alpha \beta } \label{x2conf1}$$ where a stroke denotes the covariant derivative associated with the metric $h_2$.
Furthermore, it is possible to write [@Hall90], $$\pounds _{X_2}R_{2\alpha \beta }=-2\Psi _{\alpha /\beta }-\left( h^{\mu \nu }\Psi _{\mu /\nu }\right) h_{\alpha \beta } \label{x2conf2}$$ where $$R_{2\alpha \beta }~=~\frac{R_2}2\ h_{\alpha \beta } \label{Escricci2}$$ is the Ricci tensor of the metric $h_2$ . In addition, the Conformal Bivector associated to $X_2$ , i.e. $$F_{\alpha \beta }\equiv X_{2\alpha /\beta }-X_{2\beta /\alpha } \label{Bivector}$$ satisfies $$F_{\alpha \beta /\gamma }=\frac{R_2}2(h_{\alpha \gamma }\ X_{2\beta }-X_{2\alpha }\ h_{\beta \gamma })-\Psi _\alpha \ h_{\beta \gamma }+\Psi _\beta \ h_{\alpha \gamma } \label{x2conf3}$$ Now, from (\[x2conf2\]) one obtains $$\Psi _{\alpha /\beta }=\lambda h_{\alpha \beta }\qquad {\rm and}\qquad \lambda \equiv -\frac 18(\pounds _{X_2}\ R_2+2\Psi \ R_2) \label{x2conf4}$$ and from the Bianchi identities (on $({\tilde{M}}_2,h_2)$) for $\Psi \ $, it readily follows: $$\lambda _{,\gamma }=-\frac{R_2}2\Psi _{,\gamma } \label{lambdaderiv}$$ Furthermore, taking a further covariant derivative in the above expression, skewsymmetrising, and equating to zero, one has $$-\left( \frac{R_2}2\right) _{,\alpha }=\sigma \Psi _{,\alpha } \label{R2deriv}$$ for some function $\sigma $. To proceed with our study, it is useful to consider now the following decomposition of ${\tilde M}_2$; ${\tilde M}_2= H \cup K \cup C$, where $H$ is that open submanifold of ${\tilde M}_2$ on which $\Psi_{\alpha / \beta} \not{= }0$ (hence $\lambda \not{= }0$ and $\Psi^\alpha \Psi_\alpha \not{=}0$ on $H$), $K$ is the interior of that set of points for which $\Psi_{\alpha / \beta} = 0 \ $, and $C$ is a set with no interior defined by the decomposition itself. We shall first study what happens in $K$. 
Since $\Psi_{\alpha / \beta} = 0 $ there, it follows that $\Psi_{, \alpha}$ is either zero on $K$ (in which case $X_2$ is homothetic), or else it is a (gradient) Killing vector (and then $X_2$ is an SCKV), the Bianchi identities then implying $R_2=0 \ $, i.e., $h_2 \vert_K$ is flat. In the latter case ($h_2$ flat), one can always choose coordinates on $K$, say $\{ x,y \} \ $, such that $\Psi \vert_K = A x$ ($A =$ constant), and integrating out the conformal equations (\[nou\]) for $X_2$ on $K$ it follows that $$\label{sckvK} X_2= \left( \frac 12 A(x^2-y^2) - D y + L \right) \partial_x + \left( Axy + Dx + E \right) \partial_y$$ where $A, \ D, \ E$ and $L$ are constants on $K$ which will, in general, depend on the chosen $p_1 \in M_1$; therefore, when considering $X_2$ on $M$, all of them are functions of the coordinates set up in $M_1$, thus $A=A(x^B), \ D=D(x^B), \cdots$, to be determined, along with the vertical component $X_1$ of $X$, from (\[set\]) and (\[vuit\]). In fact, it is easy to see from (\[vuit\]) that $A$ and $D$ must be constants, say $A=A_0$ and $D=D_0$, and from the expression (\[deu\]) of $\Psi$ with $R_2=0 \ $, together with $\Psi=A_0x $, it follows that $E$ must also be constant (and it can be set equal to zero without loss of generality); then from (\[set\]) $X_1^A=P^A(x^B)x+Q^A(x^B)$, and therefore one has, on $M \cap K$ and if $R_2=0$: $$\label{2flat1} X= ( P^Ax+Q^A ) \partial_A + ( \frac {A_0}2 (x^2-y^2) - D_0 y + L) \partial_x + ( A_0 xy + D_0 x ) \partial_y$$ $$\label{2flat2} \Psi=A_0x$$ $P$, $Q$ and $L$ being functions of the coordinates $\{ x^B \} $ on $M_1$, to be determined from (\[set\]) and (\[vuit\]).
If $\Psi _{,\alpha }|_K=0$, $X_2$ is an HVF, and therefore ([@Hall90]) $\pounds _{X_2}R_2=-2\Psi R_2$ if $R_2\neq $ constant, or $\pounds _{X_2}R_2=0$ if $R_2=$ constant; hence (\[deu\]) implies $$\Psi =-\frac 12{\frac{(\Phi ^2)_{;AD}^AX_1^D}{(\Phi ^2)_{;A}^A}} \label{psinova}$$ thus, given a basis of the homothetic algebra of $(M_2,h_2)$, say $\{\zeta _I\}$ with $I\leq 4$, one will have $X_2=C^I\zeta _I$ on $({\tilde{M}}_2,h_2)$, the $C^I$ being constants which will in general depend on the chosen $p_1\in M_1$; again, when considering $X_2$ on $M$, they become functions of the coordinates in $M_1$, to be determined as before from (\[set\]) and (\[vuit\]). It is worth noticing that, whenever a proper HVF exists in $({\tilde{M}}_2,h_2)$, say $\zeta _1$, then (\[nou\]) implies that $C^1=\Psi $. It will be shown later on that, in all cases but one, the functions $C^I$ must in fact be constants (and (\[vuit\]) then implies that $X_1$ is just a vector field on $M_1$). Thus, we conclude that whenever $\Psi _{,\alpha }=0$, one has $$X=X_1^A(x^B,x^\gamma )\partial _A+C^I(x^B)\zeta _I \label{homot}$$ where $C^I$ and $X_1^A$ are functions of their arguments, to be determined from (\[set\]) and (\[vuit\]), and $\{\zeta _I\}$ with $I\leq 4$ form a basis of the homothetic algebra of $({\tilde{M}}_2,h_2)$. Let us next study what happens on $H$.
Notice that (\[x2conf4\]) can be rewritten as $\pounds _Yh_{\alpha \beta }=2\lambda h_{\alpha \beta }$ with $Y_\alpha =\Psi _{,\alpha }$; thus, $Y$ is also a CKV of $({\tilde{M}}_2,h_2)$ with conformal factor $\lambda $, and one therefore has [@Hall90]: $$\pounds _YR_{2\alpha \beta }=-2\lambda _{\alpha /\beta }-\left( h^{\mu \nu }\lambda _{\mu /\nu }\right) h_{\alpha \beta } \label{lambdaH}$$ which, on account of (\[Escricci2\]) and (\[x2conf4\]), can be rewritten as $$\lambda _{\alpha /\beta }=-\frac 18\left( R_{2,\alpha }\Psi ^\alpha +2\lambda R_2\right) h_{\alpha \beta }\equiv \Sigma h_{\alpha \beta } \label{asterisc}$$ that is: $Z$ such that $Z_\alpha \equiv \lambda _{,\alpha }$ is a (gradient) CKV, collinear with another CKV, namely $Y$; it is then immediate to show, taking into account (\[x2conf4\]), (\[lambdaderiv\]), (\[R2deriv\]) and (\[asterisc\]), that $R_2$ must be constant ($\sigma =0$ in (\[R2deriv\])), (\[x2conf4\]) then reading $$\Psi _{\alpha /\beta }=-\frac{R_2}4\Psi h_{\alpha \beta }$$ The Bianchi identities specialized to $\Psi _{,\alpha }$ then imply one of the following: 1. $R_2=0$ and $\Psi _{,\alpha }\neq 0\ $; one then has the expression (\[sckvK\]) for $X_2$, etc. 2. $R_2=$ constant ($\neq 0$) and $\Psi _{,\alpha }=0\ $; $X_2$ is then an HVF of $\left( {\tilde{M}}_2,h_2\right) \ $, but since $R_2$ is constant and non-zero, it must be a KV, i.e., $\Psi =0$. 3. $R_2=0$ and $\Psi _{,\alpha }=0\ $; $X_2$ is an HVF of $\left( {\tilde{M}}_2,h_2\right) \ $, possibly non-Killing. Notice that whenever $\Psi_{,\alpha} = 0 \ $, one gets the same results as when studying this case in $K \subset {\tilde M}_2$, i.e., equations (\[psinova\]) and (\[homot\]) hold.
We can roughly summarize the results so far obtained as follows: the horizontal component $X_2$ of an RC $X$ is either an HVF of $({\tilde M}_2,h_2)$ (i.e., $\Psi_{,\alpha}=0$), in which case $X$ is given by (\[homot\]), or else it is a proper SCKV of $({\tilde M}_2,h_2)$ (that is, $\Psi_{,\alpha} \neq 0$, $\Psi_{\alpha / \beta}=0$), this being possible only when $R_2=0$ (i.e., $({\tilde M}_2,h_2)$ flat), and in that case $X$ takes the form given by (\[sckvK\]). In both cases, the functions appearing in (\[homot\]) and (\[sckvK\]) must satisfy (\[set\]) and (\[vuit\]). We shall next focus our attention on the case $X_2$ homothetic, studying the various cases that may arise in connection with the different structures and dimensions of the homothetic algebra of $({\tilde M}_2, h_2)$. To this end, let ${\cal H}_r$ be the homothetic algebra of $({\tilde M}_2, h_2)$, $r$ being its dimension. Since dim $M_2=2$, it follows that $r$ can only be $0, \, 1, \, 2, \, 3$ or $4$. We shall deal separately with all these cases assuming, for the sake of simplicity, that $h_2$ is Riemannian (similar conclusions hold if $h_2$ is Lorentzian). 1. $r=0$. In this case no HVFs exist (including KVs), and therefore $\Psi =C^I=0$, i.e., $X_2=0$ and $X=X_1$ with $X_{1,\alpha }^D=0$ as a consequence of (\[vuit\]); that is: $X$ is a vector field on $M_1$ which must satisfy (\[set\]) and (\[psinova\]) with $\Psi =0$. 2. $r=1$. There are now two cases to be distinguished, depending on whether a proper HVF exists or not. 1. A proper HVF $\zeta $ exists in $({\tilde{M}}_2,h_2)$.
It is easy to see that one can then always choose coordinates, say $x$ and $y$, such that the line element $d\sigma ^2$ associated with $h_2$ and the HVF $\zeta $ read, in these coordinates, $$d{\sigma }^2=e^{2y}(dx^2+h^2(x)dy^2)\,\,\,{\rm and}\,\,\zeta =\partial _y$$ the associated Ricci scalar is $R_2=-2e^{-2y}h^{-1}h^{\prime \prime }$ (a prime denoting a derivative with respect to $x$), and (\[vuit\]) then implies: $$R_{AD}X_{1,x}^D=0\,\,\,\,\,\,{\rm and}\,\,\,\,\,\,R_{AD}X_{1,y}^D=-F\Psi _{,A}e^{2y}h^2(x)$$ which cannot be fulfilled unless $(Fh^2(x))_{,x}=0$, i.e., $h(x)={\rm constant}$, in which case $R_2=0$ and therefore $r=4$. Thus, $\Psi _{,A}=0$ ($\Psi =$ constant $\neq 0$), $X_{1,\alpha }^D=0$ and then $X=X_1^A(x^D)\partial _A+\Psi \zeta $ with $X_1$ satisfying (\[set\]) and (\[psinova\]) with $\Psi =$ constant ($\neq 0$). 2. No proper HVF exists in $({\tilde{M}}_2,h_2)$, just a KV, say $\xi $. It then follows that $\Psi =0$ necessarily, and again coordinates may be chosen such that $$d{\sigma }^2=dx^2+h^2(x)dy^2\,\,\,\,\,\,{\rm and}\,\,\,\,\,\xi =\partial _y$$ the Ricci scalar is then $R_2=-2h^{-1}h^{\prime \prime }$, and (\[vuit\]) implies, as in the previous case, $(Fh^2(x))_{,x}=0$, which in turn can be seen to imply $$\left( \Phi ^2\right) _{;A}^A=2a\,\,\,\,,\,\,\,\,(a={\rm constant})$$ $$h^{\prime \prime }+ah^2=b\,\,\,\,,\,\,\,\,(b={\rm constant})$$ Performing now the coordinate change $h(x)\equiv z$, the above line element reads $$d{\sigma }^2={\frac{dz^2}{2C+z^2+2\log z}}+z^2dy^2$$ Hence (\[vuit\]) implies $X_1^A=P^A(x^D)y+Q^A(x^D)$ and then $$X=\left( P^A(x^D)y+Q^A(x^D)\right) \partial _A+C(x^D)\xi \label{excepcional1}$$ where $P^A(x^D)$ and $Q^A(x^D)$ must both satisfy (\[set\]) separately, and $C(x^D)$ must be such that $R_{AD}P^D=-bC_{,A}$. 3. $r=2$. ${\cal H}$ must then contain at least one proper HVF, since otherwise (${\cal H}$ spanned by two KVs) a third KV would necessarily exist, whence dim ${\cal H}=3$.
Suppose then that a proper HVF, $\zeta $, exists; the other vector in the basis of ${\cal H}$ can always be chosen to be a KV, say $\xi $, and there are two possible, non-isomorphic Lie algebra structures for ${\cal H}$, namely $[\xi ,\zeta ]=0$ (abelian) and $[\xi ,\zeta ]=\xi $ (non-abelian). In the abelian case, coordinates may be chosen such that the line element, $\zeta $ and $\xi $ read, respectively, $$d\sigma ^2=dx^2+x^2dy^2\,\,,\,\,\zeta =x\partial _x\,\,,\,\,\xi =\partial _y$$ but it then follows that $R_2=0$ and therefore two other KVs exist, $r$ thus being 4; therefore this case cannot arise. In the non-abelian case, and again by means of a suitable choice of coordinates, one has: $$d\sigma ^2=dx^2+x^{2{\frac{n-1}n}}dy^2\,\,,\,\,\zeta =nx\partial _x+y\partial _y\,(n\neq 1)\,\,,\,\,\xi =\partial _y$$ but then (\[vuit\]) implies, as in previous cases, that $\left( Fx^{2{\frac{n-1}n}}\right) _{,x}=0$, which cannot be satisfied; therefore $\Psi _{,A}=C_{,A}=0$ (i.e., $\Psi $ and $C$ constants) and then $X_{1,\alpha }^D=0 $, and again $X=X_1^A(x^D)\partial _A+\Psi \zeta $ with $X_1$ satisfying (\[set\]) and (\[psinova\]) with $\Psi =$ constant ($\neq 0$). 4. $r=3$. If a proper HVF exists in $({\tilde{M}}_2,h_2)$, the associated Killing subalgebra is then of dimension 2, and therefore a third KV exists; hence dim ${\cal H}=4$ and this case is not possible. If, on the other hand, no proper HVFs exist, $({\tilde{M}}_2,h_2)$ is of constant curvature and $\Psi =0$ necessarily. Let $\{\xi _J\}\,,\,\,J=1,2,3$ be three KVs spanning ${\cal H}$; from (\[vuit\]) it follows that $R_{AD}X_{1,\alpha }^D=-FC_{,A}^J\xi _{J\alpha }$; differentiating with respect to $x^\beta $, skew-symmetrising and equating to zero, one has $$C_{,A}^J\xi _{J[\alpha ,\beta ]}=0$$ that is, either $C_{,A}^J=0$ or else $({\tilde{M}}_2,h_2)$ contains a gradient KV.
From [@KramerEtal80], it is easy to see that the latter is only possible if $R_2=0$, but in that case a proper HVF is always admitted (namely $\zeta =x\partial _x$ in the coordinates used in [@KramerEtal80]), and therefore dim ${\cal H}=4$. 5. $r=4$. In this case $({\tilde{M}}_2,h_2)$ is flat, the line element and KVs being those given in [@KramerEtal80], and the proper HVF is $\zeta =x\partial _x$. Proceeding as before, one can readily see from (\[vuit\]) that $X_1^A=M^A(x^D)x^2+(P_1^A(x^D)\cos y+P_3^A(x^D)\sin y)x+Q^A(x^D)$, but since $\Psi \neq 0$ and $\Psi _{,\alpha }=0$, it follows that $P_1^A=P_3^A=M^A=0$, which in turn imply $\Psi _{,A}=C_{,A}^1=C_{,A}^2=C_{,A}^3=0$, hence $X_{1,\alpha }^A=0$; that is: $X_1 $ is a vector field on $M_1$ that has to satisfy (\[set\]) and (\[psinova\]) with $\Psi =$ constant ($\neq 0$), and $X=X_1^A(x^D)\partial _A+\Psi \zeta +C^J\xi _J$. Our purpose in the next sections is to apply the results so far obtained to the case of spherically symmetric space-times which are also static, as well as to Friedmann-Robertson-Walker (FRW) models.
Spherically symmetric space-times ================================= We next specialize the above results to the case of a general spherically symmetric spacetime whose metric, in the local chart $\{x^{0,1,2,3}~= ~t,r, \vartheta ,\phi \}$, takes the form [@KramerEtal80] $$\label{MetricShearF} ds^2=\ -{\bf e}^{2\nu (t,r)}\,{\rm d}t^2+\,{\bf e}^{2\lambda (t,r)}\, {\rm d}r^2+\,r^2({\rm d}\vartheta ^2+\,\sin ^2\vartheta {\rm \,d}\phi ^2)$$ Comparing the metric (\[warped\]) with the above (\[MetricShearF\]), we have $\{x^A~=~t,\ r\ ;\ x^\alpha =\vartheta ,\ \phi \}$ ; $\Phi ~=~r$ ; $$h_{AB}\left( t,r\right) {\rm d}x^A{\rm d}x^B~=~\,-{\bf e}^{2\nu (t,r)}{\rm d}t^2~~+~~\,{\bf e}^{2\lambda (t,r)} \,{\rm d}r^2$$ and $$h_{\alpha \beta }{\rm d}x^\alpha {\rm d}x^\beta ~=~{\rm d} \vartheta ^2~+~\sin ^2\vartheta {\rm \,d}\phi ^2$$ Thus, the Ricci tensor can be written as $$\label{sphricci1} R_{tt}=-\frac 12R_1{\bf e}^{2\nu (t,r)}\ +\frac{2\nu ^{\prime }}r\ {\bf e}^{2\left( \nu (t,r)-\lambda (t,r)\right) }$$ $$\label{sphriccitr} R_{t\ r}=\frac 2r\ \dot{\lambda}$$ $$\label{sphricci2} R_{rr}=\frac 12\ R_1\ {\bf e}^{2\lambda }+\frac{2\lambda ^{\prime }}r$$ and $$\label{sphricci3} R_{\alpha \beta }=\left\{ 1-{\bf e}^{-2\lambda }\left[ 1+r\ \left( \nu ^{\prime }-\lambda ^{\prime }\right) \right] \right\} h_{\alpha \beta }$$ where a prime and a dot indicate, as usual, partial derivatives with respect to $r$ and $t$ respectively. As above, $R_1$ is the Ricci scalar associated with the 2-dimensional metric $h_{AB}$, and now $\frac 12~R_2~=~1$. According to our foregoing discussion, any RC $X$ must be of the form $$\label{rc} X=X_1+C^J \xi _J$$ where $\left\{ \xi _J,\ \ J=1,2,3\right\} $ are the KVs that implement the spherical symmetry, $X_1=X^A(t,r)\partial _A$, and the $C^J$, $J=1,\ 2, \ 3$, are constants which can be set equal to zero without loss of generality (since $C^J \xi_J$ is a KV of the space-time and therefore a trivial RC).
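The components (\[sphriccitr\]) and (\[sphricci3\]), which do not involve $R_1$, can be verified mechanically. The following sketch is an illustration only; the use of the sympy library is an assumption made here, and the curvature conventions are the usual ones, which also reproduce (\[ricci-frw\]) below:

```python
import sympy as sp

t, r, th, ph = sp.symbols('t r theta phi')
x = [t, r, th, ph]
nu = sp.Function('nu')(t, r)
lam = sp.Function('lambda')(t, r)

# Metric (MetricShearF): ds^2 = -e^{2 nu} dt^2 + e^{2 lambda} dr^2 + r^2 dOmega^2
g = sp.diag(-sp.exp(2*nu), sp.exp(2*lam), r**2, r**2*sp.sin(th)**2)
ginv = g.inv()
n = 4

# Christoffel symbols Gamma^a_{bc}
Gam = [[[sum(ginv[a, d]*(sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
              - sp.diff(g[b, c], x[d]))/2 for d in range(n))
         for c in range(n)] for b in range(n)] for a in range(n)]

# Ricci tensor R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ba}
#                       + Gamma^a_{da} Gamma^d_{bc} - Gamma^a_{dc} Gamma^d_{ba}
def ricci(b, c):
    return sp.simplify(sum(
        sp.diff(Gam[a][b][c], x[a]) - sp.diff(Gam[a][b][a], x[c])
        + sum(Gam[a][d][a]*Gam[d][b][c] - Gam[a][d][c]*Gam[d][b][a]
              for d in range(n)) for a in range(n)))

# (sphriccitr): R_{tr} = (2/r) * d(lambda)/dt
print(sp.simplify(ricci(0, 1) - 2*sp.diff(lam, t)/r))

# (sphricci3): R_{theta theta} = 1 - e^{-2 lambda} [1 + r (nu' - lambda')]
Rthth = 1 - sp.exp(-2*lam)*(1 + r*(sp.diff(nu, r) - sp.diff(lam, r)))
print(sp.simplify(ricci(2, 2) - Rthth))
```

Both differences simplify to zero, confirming the two components for arbitrary $\nu(t,r)$ and $\lambda(t,r)$.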
On the other hand, since $\Psi=0$ and $(\Phi^2)^A_{;A}= 2{\bf e}^{-2\lambda } \left[ 1+r\ \left( \nu ^{\prime }-\lambda ^{\prime }\right) \right]$, (\[psinova\]) implies $$\label{psi1} \{{\bf e}^{-2\lambda } \left[ 1+r\ \left( \nu ^{\prime }-\lambda ^{\prime }\right) \right] \}_{,D} X^D =0$$ therefore, the proper RCs of a spherically symmetric space-time whose Ricci tensor is non-degenerate are of the form $$X=X^t(t,r) \partial_t + X^r(t,r) \partial_r$$ and they must satisfy (\[psi1\]) in addition to (\[set\]) specialised to the Ricci tensor components given by (\[sphricci1\]), (\[sphriccitr\]) and (\[sphricci2\]). We shall next present two examples: static spherically symmetric space-times, and FRW spacetimes. Static spherically symmetric space-times ---------------------------------------- Let us consider first the case of static spherically symmetric spacetimes; these are described by (\[MetricShearF\]) where the functions $\nu $ and $\lambda $ appearing in it depend just on $r$, $\partial _t$ thus being a KV.
For the purpose of this paper it is convenient to write the components of the Ricci tensor for this metric as follows [@BokhariQadir93], [@JamilEtal94], [@FaridEtal95]: $$R_{tt}\equiv A(r)\quad R_{rr}\equiv B(r)\quad R_{\theta \theta }\equiv C(r)\quad {\rm and}\quad R_{\phi \phi }\equiv \sin ^2\theta \ R_{\theta \theta } \label{Ricciform}$$ Taking now into account the results of the previous section, one has $$\label{GeneralColl} X =X ^t\left( t,r\right) \partial _t+X ^r\left( t,r \right) \partial _r$$ and the (non-trivial) equations arising from (\[set\]) are simply: $$\label{LieRicci00} A^{\prime }(r)X ^r+2A(r)X _{,t}^t=0$$ $$\label{LieRicci01} A(r)X _{,r}^t+B(r)X _{,t}^r=0$$ $$\label{LieRicci11} B^{\prime }(r)X ^r+2B(r)\ X _{,r}^r=0$$ $$\label{LieRicci22} C^{\prime }(r)X ^r=0$$ Equation (\[LieRicci22\]) directly implies $$\label{c'=0} C^{\prime }(r)=0$$ since otherwise one would have $X ^r=0$, which would imply, from the remaining equations, $X ^t=$ constant, $X$ thus being a KV and not a proper RC. A direct integration of equation (\[LieRicci11\]) gives $$\label{tempXi1} X ^r=\frac{{\cal K}(t)}{\sqrt{\left| B(r)\right| }}$$ Now, substituting this result back into eqs.
(\[LieRicci00\]) and (\[LieRicci01\]), differentiating them with respect to $t$ and $r$, respectively, and equating the crossed derivatives of $X ^t$, we obtain $$\label{temp11Xi} {\cal K}_{,tt}\ \frac{\sqrt{\left| B(r)\right| }}{A(r)}=\frac 12 {\cal K} \left( \frac{A^{\prime }(r)\ }{A(r)\sqrt{\left| B(r)\right| }} \right) ^{\prime }$$ and the following two cases arise: **Case I** ---------- $$\label{temp12Xi} {\cal K}_{,tt}\ -\epsilon \ k^2{\cal K}=0;\qquad k={\it Const},\quad \epsilon =\pm 1$$ therefore $$\label{temp13Xi} {\cal K}(t)=\left\{ \begin{array}{cc} a{\bf e}^{kt}+b{\bf e}^{-kt} & \epsilon =+1 \\ a\sin kt+b\cos kt & \epsilon =-1 \end{array} \right| \quad$$ and $$\label{temp14Xi} 2\epsilon \ k^2\ \frac{\sqrt{\left| B(r)\right| }} {A(r)}=\left( \frac{ A^{\prime }(r)\ }{A(r)\sqrt{\left| B(r)\right| }} \right) ^{\prime }$$ Substituting these results back into (\[LieRicci00\]), integrating, and plugging them back into (\[LieRicci01\]), we find $$\label{temp15Xi} X ^t=-\frac 12 \left( \frac{A^{\prime }(r)\ }{A(r)\sqrt{\left| B(r) \right| } }\right) M(t)$$ where $M(t)= \int {\cal K}(t) dt$, and the constant of integration has been set equal to zero without loss of generality. Thus, for this case a proper RC is of the form: $$\label{Xisolcas1} X =-\frac 12\left( \frac{A^{\prime }(r)\ }{A(r)\sqrt{\left| B(r) \right| }} \right) \left( \int {\cal K}(t){\rm d}t\right) \partial _t+ \frac{{\cal K}(t) }{\sqrt{\left| B(r)\right| }}\partial _r$$ where ${\cal K}(t)$ is given by (\[temp13Xi\]), and the components of the Ricci tensor must satisfy (\[c'=0\]) and (\[temp14Xi\]).
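The system (\[LieRicci00\])–(\[LieRicci22\]) itself can be generated mechanically from $\pounds _XR_{ab}=0$ with $R_{ab}$ as in (\[Ricciform\]) and $X$ as in (\[GeneralColl\]). A short sketch follows; the use of the sympy library and the function names are illustrative assumptions:

```python
import sympy as sp

t, r, th, ph = sp.symbols('t r theta phi')
x = [t, r, th, ph]
A, B, C = (sp.Function(s)(r) for s in ('A', 'B', 'C'))
Xt = sp.Function('Xt')(t, r)
Xr = sp.Function('Xr')(t, r)

# Ricci tensor in the form (Ricciform) and the RC candidate (GeneralColl)
Ric = sp.diag(A, B, C, C*sp.sin(th)**2)
X = [Xt, Xr, 0, 0]

# (Lie_X Ric)_{ab} = X^c Ric_{ab,c} + Ric_{cb} X^c_{,a} + Ric_{ac} X^c_{,b}
def lie(a, b):
    expr = sum(X[c]*sp.diff(Ric[a, b], x[c]) for c in range(4))
    expr += sum(Ric[c, b]*sp.diff(X[c], x[a]) for c in range(4))
    expr += sum(Ric[a, c]*sp.diff(X[c], x[b]) for c in range(4))
    return sp.expand(expr)

print(lie(0, 0))   # A' Xr + 2 A Xt_{,t}    -> (LieRicci00)
print(lie(0, 1))   # A Xt_{,r} + B Xr_{,t}  -> (LieRicci01)
print(lie(1, 1))   # B' Xr + 2 B Xr_{,r}    -> (LieRicci11)
print(lie(2, 2))   # C' Xr                  -> (LieRicci22)
```

Each printed component reproduces one of the four equations above, and the remaining components are either zero or proportional to these.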
**Case II** ----------- $$\label{II-1} {\cal K}={\cal S}_1t+{\cal S}_2\qquad {\cal S}_1, \ {\cal S}_2 = {\it Const}$$ and $$\label{II-2} \frac 12\frac{A^{\prime }(r)\ }{A(r)\sqrt{\left| B(r)\right| }}= \sigma ={\it Const}$$ Then from (\[tempXi1\]) and (\[LieRicci00\]-\[LieRicci11\]), one gets after some straightforward calculations: $$\label{temp30Xi} X =\left\{ -\sigma \left( \frac 12{\cal S}_1\ t^2+{\cal S}_2 \ t \right) + \frac{{\cal S}_1}{2\,\sigma }\frac 1{A(r)}\right\} \, \partial _t+\left( {\cal S}_1\ t+{\cal S}_2\ \right) \frac 1 {\sqrt{B(r)}}\,\partial _r$$ As an example of a space-time satisfying the above requirements [@BertolotiEtal95b], take for instance $$\nu (r)=\frac 12\left( \frac{r^4}{8r_0^2}+h\,\ln \frac r{r_0}+k\right) \quad {\rm and\quad }\lambda (r)=\nu (r)+\,\ln \frac r{r_0} \label{metricelem}$$ therefore the Ricci components can be written $$B(r)=2\frac{h+1}{r^2}\qquad C(r)=1\quad {\rm and}\quad A(r)={\em Const} \label{b}$$ where $r_0$, $h$ and $k$ are constants, and we have for the RC: $$X^t=-\,c_4\sqrt{2\,\left( h+1\right) }\,\ln r+c_0\quad {\rm and}\quad X^r=\frac{c_4\,t+c_5}{\sqrt{2\,\left( h+1\right) }}r \label{Collstatic}$$ This result invalidates a misleading theorem stated in reference [@JamilEtal94] and used in [@FaridEtal95]. According to this “theorem”, the collineation vector (\[Collstatic\]) should represent an isometry; however, it is easy to see that $X$ does not reduce to a KV unless $c_4~=~c_5~=~0$. Since all KVs are naturally RCs and these (if assumed smooth) form a Lie algebra under the usual bracket operation, the Lie bracket of the above RCs with the four KVs the metric admits must yield in turn RCs; thus $$\left[ \xi _I,X\right] =0\qquad \forall I=1,2,3$$ where the $\xi _I$ designate the KVs implementing the spherical symmetry, and $$\left[ \partial _t,X\right] =X^{\prime }\left( \neq 0\right)$$ where $X^{\prime }=\frac{c_{4\,}r}{\sqrt{2(h+1)}}\partial _r$ is also a proper RC.
A more detailed account of RCs for non-static spherically symmetric space-times will be given in a forthcoming paper. FRW space-times --------------- As an example of RCs for non-static spherically symmetric metrics, we consider FRW space-times, described by [@Weinberg72]: $$\label{frw} {\rm d}s^2=-\ {\rm d}t^2+R\left( t\right) ^2\left( \frac{{\rm d}r^2} {1-k\,r^2 }+r^2\,{\rm d}\vartheta ^2+r^2\sin ^2\vartheta \ {\rm d} \phi ^2\right)$$ Again, using the above notation, we have $\Phi =rR(t)$, $$\label{subspace1} h_{AB}\,{\rm d}x^A\,{\rm d}x^B=-\ {\rm d}t^2+R\left( t\right) ^2 \frac{{\rm d}r^2}{1-k\,r^2}$$ and $$\label{subspace2} h_{\alpha \beta }\,{\rm d}x^\alpha \,{\rm d}x^\beta =\, {\rm d}\vartheta ^2+\sin ^2\vartheta \ {\rm d}\phi ^2$$ Then the Ricci tensor takes the form $$\begin{aligned} &&R_{tt}=-3\frac{\ddot{R}}R \nonumber \\ &&R_{rr}=g_{rr}\frac \Delta {R^2} \label{ricci-frw} \\ &&R_{\alpha \beta }=g_{\alpha \beta } \frac \Delta {R^2} \nonumber \\ &&\Delta =2k+2\dot{R}^2+R\ddot{R} \nonumber\end{aligned}$$ Specializing (\[set\]) to the present case, we obtain $$\begin{aligned} &&X ^tR_{rr,t}+X ^rR_{rr,r}+2R_{rr}X _{,r}^r=0 \nonumber \\ &&X ^tR_{tt,t}+2R_{tt}X _{,t}^t=0 \label{rc-frw} \\ &&R_{tt}X _{,r}^t+R_{rr}X _{,t}^r=0 \nonumber \\ &&X ^tR_{\theta \theta ,t}+X ^rR_{\theta \theta ,r}=0 \nonumber\end{aligned}$$ Thus, we get [@NunezEtal90] $$\begin{aligned} X ^t &=&c\left( 1-kr^2\right) ^{1/2}\left| R_{00}\right| ^{-1/2} \nonumber \\ X ^r &=&g(t)\,r\,\,\left( 1-kr^2\right) ^{1/2} \label{psi-frw}\end{aligned}$$ where $g(t)=-\,c\,\left| R_{00}\right| ^{-1/2}\left( \dot \Delta /2\Delta \right) $ and $c$ is a constant. ACKNOWLEDGEMENTS ================ Two of us (J.C. and U.P.) gratefully acknowledge funding from Postgrado en Astronomía y Astrofísica as well as the warm hospitality of the Laboratorio de Física Teórica, Universidad de Los Andes, Mérida, Venezuela, where most of this work was carried out. J.C.
acknowledges partial financial support from the STRIDE program, Research Project No. STRDB/C/CEN/509/92. The authors also wish to thank the staff of [*SUMA*]{}, the computational facility of the Faculty of Science (Universidad de Los Andes). [99]{} G.H. Katzin, J. Levine and W.R. Davis, [*J. Math. Phys.*]{} [**10**]{}, 617 (1969). S. Hojman, L. Núñez, A. Patiño and H. Rago, [*J. Math. Phys.*]{} [**27**]{}, 281 (1986). M.L. Bedran and B. Lesche, [*J. Math. Phys.*]{} [**27**]{}, 2360 (1986). P. Aichelburg, [*J. Math. Phys.*]{} [**11**]{}, 2458 (1970). L.H. Green, L.K. Norris, D.R. Oliver and W.R. Davis, [*Gen. Rel. Grav.*]{} [**8**]{}, 731 (1977). A.A. Coley and B.O.J. Tupper, [*J. Math. Phys.*]{} [**30**]{}, 2616 (1989). D.R. Oliver and W.R. Davis, [*Gen. Rel. Grav.*]{} [**8**]{}, 905 (1977). M. Tsamparlis and D.P. Mason, [*J. Math. Phys.*]{} [**31**]{}, 1707 (1990). K.L. Duggal, [*J. Math. Phys.*]{} [**33**]{}, 2989 (1992). L. Aulestia, L. Núñez, A. Patiño, H. Rago and L. Herrera, [*Nuov. Cim.*]{} [**B 80**]{}, 133 (1984). L. Núñez, U. Percoco and V.M. Villalba, [*J. Math. Phys.*]{} [**31**]{}, 137 (1990). A. Melfo, L.A. Núñez, U. Percoco and V. Villalba, [*J. Math. Phys.*]{} [**33**]{}, 2558 (1992). A. Melfo and L.A. Núñez, [*Gen. Rel. Grav.*]{} [**24**]{}, 1125 (1992). R. Bertolotti, L.A. Núñez and U. Percoco, “Computer Algebra and Collineation Vectors in General Relativity”, [*Preprint*]{}, Laboratorio de Física Teórica, Universidad de los Andes (1995). J. Carot, J. da Costa and E.G.L.R. Vaz, [*J. Math. Phys.*]{} [**35**]{}, 4832 (1994). G.S. Hall, [*J. Math. Phys.*]{} [**31**]{}, 1198 (1990). G.S. Hall, I. Roy and E.G.L.R. Vaz, “Ricci and Matter Collineations in Spacetimes”, [*Preprint*]{}, University of Aberdeen (1995). J. Carot and J. da Costa, [*Class. Quantum Grav.*]{} [**10**]{}, 461 (1993). B. Haddow and J. Carot, [*Class. Quantum Grav.*]{} [**13**]{}, 289 (1996). D. Kramer, H. Stephani, E. Herlt and M.A.H. MacCallum, [*Exact Solutions of Einstein's Field Equations*]{} (Cambridge University Press, Cambridge, 1980). A.H. Bokhari and A. Qadir, [*J. Math. Phys.*]{} [**34**]{}, 3543 (1993). M. Jamil Amir, A.H. Bokhari and A. Qadir, [*J. Math. Phys.*]{} [**35**]{}, 3005 (1994). T.B. Farid, A. Qadir and M. Ziad, [*J. Math. Phys.*]{} [**36**]{}, 5812 (1995). R. Bertolotti, G. Contreras, L.A. Núñez, U. Percoco and J. Carot, [*J. Math. Phys.*]{} [**37**]{}, 1086 (1996). S. Weinberg, [*Gravitation and Cosmology*]{} (Wiley, New York, 1972). [^1]: Email: dfsjcg0@ps.uib.es [^2]: Email: nunez@ciens.ula.ve [^3]: Email: upercoco@ciens.ula.ve
############################## # GENERAL ############################## [general] es_index_pattern=logstash-eagleeye-* run_models=1 test_models=1 history_window_days=7 history_window_hours=12 es_save_results=1 print_outliers_to_console=0 ############################## # NOTIFIER ############################## [notifier] email_notifier=0 ############################## # METRICS PARAMETERS ############################## [metrics] metrics_batch_eval_size=100000 ############################## # ASSET FIELDS ############################## [assets] ############################## # DERIVED FIELDS ############################## [derivedfields] timestamp=%{YEAR:timestamp_year}-%{MONTHNUM:timestamp_month}-%{MONTHDAY:timestamp_day}[T ]%{HOUR:timestamp_hour}:?%{MINUTE:timestamp_minute}(?::?%{SECOND:timestamp_second})?%{ISO8601_TIMEZONE:timestamp_timezone}? ###################################################################################################################################################### # WHITELISTS ###################################################################################################################################################### [whitelist_literals] simple_literals_to_match_in_doc_with_outlier = whitelist_hostname [whitelist_regexps]
Comments on Best Reviews: Shri Ravi Shankar Products List [Updated]

vikramjit singh (23 January 2019): thank you for your support

yuva yamini (23 January 2019): Nowadays, people are more concerned about their healthy life. For this, they prefer everything natural, including the basic needs. So, we can use Marachekku Oil in Chennai (https://www.dappakadai.com/marachekku-oil-chennai.php) for a strong and healthy life.
Tabletop Photography Fundamentals Lesson 15 of 33 Shoot: Art Work Basics Lesson Info What we're gonna do here is we're gonna set up a basic art-photographing little studio here we have little small pieces of art. I have them back here, we have two to start with, they're by tina jet; this one's by tina jet so we'll put this one down here for now, this one's by tina jet, and we're going to get it mounted up here. One of the things that's super important about shooting artwork is that it is level and plumb because when you shoot this in a camera, if it is not square, when you go to crop into it because you're not going to shoot it to the edge, you're going to shoot the whole frame and all around it when you go to crop into it, if it's like this or it's crooked like this it's impossible to fix, and then that kind of aberration is going to kind of change the way the art looks, and it doesn't really look that nice. So we built this little thing, which I asked them to make for me, which sits completely square and got a little stud there that we can hang the artwork on and it's white, so it's not going to interfere, and then we have a little level somewhere, don't we? Yeah, yeah, yeah super important tool in a photo studio not just for this kind of stuff but also your camera sometimes you want to make sure when you're shooting an overhead uh so I put this on top and I make sure I got a perfectly level okay?
So that's that then we take these little white cards and make these wings so I want to kind of have this kind of growing out in this direction from behind and then the lighting is pretty simple for shooting artwork you want to assemble your lighting in a crisscross pattern so everything that we're doing here is organized in a way so that the light is evenly distributed on all parts of the artwork so it's important to understand that balance is really important in this. You want the artwork to speak for itself. You don't want to enhance this in any way or detract from it the same thing with Photoshop with this when I photograph artwork for a client which happened just very recently um I was so tempted to play with the color or the contrast and pump it up or whatever, but the idea is you got to just keep it clean and light and let the work speak for itself because then what you're doing is you're putting your spin on somebody else's art and that's not really what you want to do with artwork uh this is twenty eight seventy please andrew yes when you are doing your post processing do you have your products with you in the studio like artwork like those products so you're able to view them and match them appropriately sometimes I mean sometimes it has to go back to where it came from like the artwork that I shot recently was a collection of cuban and south american art that we were sending to a museum in mexico city and they wanted to see it clean, so I didn't get a chance to like be with the artwork because it was kind of expensive stuff so I had to photograph it and then go back to the studio and work so I just had to kind of trust that the lighting I had was pretty clean thank you it would've been nice to have it there because I might not have let it go that's really nice these bulbs take a little bit to warm up, which is something I didn't mention before the compact fluorescents will take a little bit of time so you
want to let them warm up to full power before you start to shoot because otherwise you'll notice that they'll change how we doing on meter okay so we're going to also want to shoot this at a small aperture because you want to catch the detail so again, this whole set up, that's pretty much it I mean, obviously if you're shooting artwork that's the size of a room this is a whole different thing and you have to use different lighting as well, but the basic set up is the same; any studio lighting that you're going to use for this will be set up in the same configuration so if we're shooting something much bigger you have to use more powerful lighting so this kind of our little DIY-plus set up here will work really well for this, but once the artwork is bigger and the whole setup gets bigger then the lighting has to get more powerful that's when we're moving into Chimeras and Fresnels and the kind of things that can throw big light on the subject how does it change for art that's behind glass? It's important this set up is actually designed for that as well because the crisscross effect of the lighting will balance out the flares and the white wings also balance out the flares because it doesn't necessarily have to be under glass it could just be oil paint; an oil painting reflects light a little different than watercolor obviously that's flatter but this kind of technique is something I think is fairly standard in how to set up artwork photographs I've photographed older artwork in both ways um both under glass high shine uh, oil painting and acrylic and also things like this that are matte and flat, and the results are fairly similar I mean, there may be a little tweaking to do, but this is why this is kind of done this way and you could see, like mathematically
how the physics of it kind of work because you got this kind of thing and light here is catching that one and light here is catching that one and it's pushing the light across in either direction to be level with the yes, the light should be level with the middle of the artwork, so it's, again, as balanced as possible and if you don't I'm aiming the light at that board, okay? So it's kind of sweeping across that way and I'm going to do the same thing with this we'll see how and you could see it, you could see how clean it is. I mean it's pretty clean we got a tiny bit of shadow going on on that side, which we can manage a little bit, but we're not as worried about what's happening off the artwork because that's going to get cropped out we want to really worry about what's actually all the way to the edge of the frame these are about at full power now and you could tell because they're a little bit sharper to your eyes would you try to eliminate those shadows? I could see yeah that's what I was just saying is I wouldn't worry about it as much because I could also shoot this against black and then that wouldn't be there, but it doesn't matter because when you're shooting artwork, you're going to crop in all the way to the edge that's why it has to be super square so that you can get a perfect crop on it. So if it was in a frame you would still crop all the way to the outside of the frame, you wouldn't leave any of the background? Uh probably, depending on, you know, I might use black in that way, especially if it's a really attractive frame because you want to have the contrast, but this being not framed it's not as important I like shooting artwork against black a good portion of what I've shot has been shot against black felt, you know, silk or satin or whatever, this velvet. The Mark III, yes, it has a level inside it, inside the camera, do you use that or do you just use the hand level? You know I don't necessarily use the level in the
camera. I just never got used to it in my workflow, so even though they added it in (this is like my fourth generation of digital cameras), I never really tried it. But we can try it. Yeah, there it is. It's pretty close. The iPhone has one too; it's actually pretty cool. Okay, let's give it a shot.

Q: What ISO do you want to shoot at? Do you want to shoot at f/8?

A: We should be at ISO 100, and I think f/8. That's actually a great question we've really talked about the last couple of days: ISO. Keep your ISO as low as possible. When you're talking about a digital file, the higher the number, the more grainy it can possibly get. Now, I'm old school in that I'm still not used to the fact that these cameras can handle up to ISO 6400 comfortably in certain circumstances. I'm still going as low as possible all the time, because that's the way I like it. Let's see if the tether is working... it's working, it's coming up... there's the first shot. We're going to bracket that, go a little higher, a little lower. But as you can see, that's really nice and square, and it can be cropped really comfortably. I'm not sure you would ever have to worry about that little shadow on the edge, but like I said, all we would have to do is switch out the background for black and that disappears. I would actually like this against black; I think it would make the artwork stand out a little bit more. But for this purpose I wanted to show the basic setup, and the fact that the white wings are the thing that helps push that crisscross lighting across the artwork really nicely and evenly. So take another one. There it is; that was bracketed a little bit lower. I think it's good to do a little underexposure in digital photography,
particularly when there's a lot of whites and yellows and things, because that's what gets lost. I know I said that yesterday, but that's a pretty nice, even, clean image, and we can comfortably bring the exposure up a little bit before we deliver this to a client or post it on our website, without changing the color profile, keeping it pretty true to form. So, any questions on this so far?

Q: Could we see a shot without the wings? Does it make that much of a difference in this scenario? Could we see those side by side?

A: Sure, after we pull the wings out. It's going to be really subtle, but we'll see. Okay... so on the one on the right, without the wings, the blues are actually a little bit deeper. What that tells me is that maybe we were even a little tiny bit overexposed with the wings. Right. So that's it, from the way we're looking at it.
This monitor is actually calibrated for the web, so it's not the same, but I think ultimately this method is more consistent, because sometimes you're going to have stuff that's much darker. So let's try: we have another piece here that's a little bit darker, so we can try that one too. This one is by Ursula Markgraf.

Q: We have one more question, if that's okay. This is directly from the artist herself. When she sent in her problems, her biggest problem with product photography seems to be brightness (not enough) and white balance: "White balance seems to get tricky for me when objects have different colors that change the balance from item to item and shot to shot. I have a basic understanding but can never seem to get it right as I'm shooting the piece, or even in post-processing."

A: Okay, since this is directly from the artist: I would encourage her, first off, to learn how to do a custom white balance on her camera, because if it is jumping like that from product to product, she might want to do individual white balance correction. She might be in auto white balance, and then the item itself is going to change the white balance.

Q: Right, that's probably how she's shooting; she's probably shooting in auto, the way John just said. So you just turn off the auto white balance and set it to the lights that you're using?

A: Right. Because if we're going from this light blue to a dark blue, that's probably going to make the next shot more yellow if we're in automatic. If we had a red piece of art here, the camera is going to compensate and make it too blue, and it sounds like that's what's happening to her: each item is affecting the color. But if you set a specific white balance in the camera, then it's not going to change when you change your subject.

Q: Do you ever use a white card or gray card to make sure that you keep proper color balance?
A: Yeah, absolutely. You could shoot a white card, though a white card would be more for video; I would shoot a gray card to get that white balance correction. I'm comfortable knowing that when I shoot in AWB I can correct it in post, because I'm never that far off, but sometimes people don't feel comfortable changing those kinds of settings in post-production, and particularly with artwork you might be better served, as John just said, to do it individually for each piece, so that you don't have to worry about altering the colors of the artwork itself.

Q: But if we know this fluorescent is 4800K, and we set it to a specific Kelvin, 4800, and then you change the artwork, it's not going to affect the color balance. If it's in auto, a red or yellow piece is going to make the image bluer, and a blue piece is going to make the image more yellow, because the camera is trying to come to a standard gray tone.

A: Yeah, and we know these lights right now are about 4500K, so we're in a consistent light balance. Even if we had to use a slider a little bit in post-production, it's going to be consistent throughout each image. Okay, so we'll move on to a different one, by Ursula Markgraf, called "Reach for the Stars." This is interesting, and we're going to do something different with this. We'll photograph it fully, but there is a very tiny detail in here: there's a lace pattern in the dress, and there's print, like a page from a book, that has been découpaged, or kind of superimposed, underneath and painted over. So it's a really nice little detail. Plus there's texture, relief, on this, because these little stars kind of reach off it, so there are multiple textures on here that would also need to be photographed. This is a good one to work with, so let's get the level back here. Can you have the 100mm ready? Yeah, thanks.
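As an aside: the "camera drives everything toward a standard gray tone" behavior described above is essentially the gray-world assumption behind many auto white balance implementations. Here is a minimal, illustrative pure-Python sketch (the function name and the toy pixel data are mine, not from the class) showing why a blue-heavy subject drags the gains and warms the image:

```python
# Gray-world auto white balance: scale each channel so the image's
# average becomes neutral gray. A blue-heavy subject pulls the blue
# gain down and pushes red/green up, which is exactly the shift the
# instructor describes when auto WB reacts to the artwork's color.
def gray_world_gains(pixels):
    """pixels: list of (r, g, b) tuples, values 0-255.
    Returns per-channel multipliers [r_gain, g_gain, b_gain]."""
    n = len(pixels)
    means = [sum(px[c] for px in pixels) / n for c in range(3)]
    gray = sum(means) / 3.0
    return [gray / m for m in means]

# A mostly-blue "painting": auto WB compensates toward yellow/red.
blue_art = [(40, 60, 200)] * 90 + [(200, 200, 200)] * 10
r_gain, g_gain, b_gain = gray_world_gains(blue_art)
assert b_gain < 1.0 < r_gain  # blue pulled down, red pushed up
```

Setting a fixed Kelvin value in camera, as suggested above, amounts to freezing these gains so they no longer depend on the subject.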
And we've got this nice and level, and we can also double-check with what we did before with the camera. It's also much better to be hanging artwork off a wire in the back rather than just this one stud, because that pivot point makes it a little bit harder to keep it level. We didn't want to screw into the frames or anything, but that's what I would suggest.

Q: John, which was the one that you used for the level?

A: The level? It's the info button on here, pressed two times. Okay, so as you can see, we're pretty well squared off. I haven't changed camera settings on this yet, and the way it looks in camera, I'm really comfortable with that exposure.

Q: The side panels, those wings, are they necessary?

A: Maybe; it depends on whether I really want to push a little bit more in that direction. But I think that once you're set up, it should be okay, because you can see that the art itself is well balanced throughout. We want that nice clean light across the whole thing, and I like what we have. The blues are nice and blue, and we can't really see that detail yet.
So this is where this matters in photographing artwork, for two reasons. One, sometimes we have things like this. The second, like what I did recently with that Cuban artwork, is that signatures are really important. Being able to get really close and take a really detailed image of the signature is important for anybody who might be appraising artwork. Also, people who buy artwork want to know that it's signed; signed artwork ultimately is more valuable down the road. So if you're selling your artwork and the signature isn't apparent, sometimes it's on the back side, and you might want to photograph that too. Sometimes people put index cards or some other kind of evidence of provenance on the back of artwork, so you want to be able to capture that so that somebody can look at it appropriately.

We definitely have to get closer, and this is where we risk that lens flare problem, because we're getting pretty much in front of the lighting. We might even want to bring the lighting up and over a little bit, because now we're going to focus on small details using a 100mm macro lens, and we're going to get as close as possible to the detail in the figure's dress. As long as we get three-quarters of the frame filled with the detail, we can also zoom in a little. I know I said yesterday I would prefer not to crop, because we lose a little resolution, but with something this small it's going to be okay. So that's important: you can see that texture and that detail underneath the dress in here. That's an important concept in photographing artwork like this. This is a really, really small piece, and using a macro lens to get really close and show the detail is important. The other detail in this is those stars, and I might want to get up
a little closer and be able to take a photo that shows both of those elements. So we have three detail elements that are important, and you can see how those come off. One of the things you might want to try with this is to swing to an angle to see how it might be coming off the page; I'm just doing this for my own eyes, not for anything on the set, right? You can see that we've got that sparkle, and you can see that we have that relief coming off, so it's important to capture those details. I think the exposure is still pretty good, but I would bracket one more, give it another third, just to have the option of a slightly underexposed frame.

Q: For a big piece of artwork, would you use the light meter across it to make sure that you're getting flat lighting? Would you meter each end of a large piece?

A: Oh yeah, absolutely, because that's a much tougher, more difficult thing to do. You might need multiple light sources, more than this; you might need two banks of lights on either side, crisscrossed. I think that might be important to do, because the bigger things get, the harder it is. And I think that conceptually it's all the same: whether you're lighting something this big in a light tent or you're lighting an entire room, the concepts are the same. It just gets more and more complicated as you involve more size, more space, more light sources. But try to remember that it is the same general concept. That's why studios are painted white, just like that big white tent, right?
Because the light goes bing, bing, bing, bing, all over the place. All right, let's take some questions; we have a few minutes to go here and we can answer a few questions about what we just did. Turn these lights off a bit.

Q: To start off, sort of going on what Bob was saying, but from a lighting standpoint: as you start shooting bigger and bigger work, will you add more lights, or just bring your lights back?

A: It depends on power. If I have powerful enough lights and can use a single light source to do it, I would. But then it becomes something almost like when you photograph cars, right? Where you talk about getting back, adding more and more lights in the same vein. So you might want to stack your lights, where I have a high light and a low light, and a high light and a low light, for a really big piece of artwork. The same general configuration, except with more lighting throwing more power in, in an even, balanced way. So if this was as big as this panel in the wall here, if I was photographing that as a piece of artwork hanging on a wall, I might use four lights for it: two on the top, two on the bottom. And I would angle them so that everything is crisscrossing like that again, using your wings, keeping whatever your surfaces are nice and square. All the same principles. But definitely, I would try to use more lighting if I had a bigger piece, and more power.
Obviously.

Q: Just a follow-up question on that, from Patricia Walker: does the color of the wings matter? If you were to shoot the pictures on black, would you still use white wings to assist with the light bouncing in the crisscross?

A: Yes, absolutely, because the wings are just for reflecting light.

Q: We have some more really great questions, this one from Scott Crumb: what if the artwork is framed and has glass on it, and you cannot remove it from the frame? What's your solution?

A: I think the solution is the same; that's what this is designed for. It's designed to reduce the flares, and I've photographed it both ways, so I think this general concept of crisscrossing the lighting helps to balance out things like flares. And you can soften your light if you have to. We're using these bare fluorescents and getting good results, but the reality is you might want to soften the light too and use umbrellas. We can show you how that might look. We can put these umbrellas on; if these were more powerful lights and we wanted to diffuse a little bit more, then I would reverse the lighting and put the umbrellas on, and this again would lessen your incidence of reflections and shadows. Oh, you've got to spin it. Okay?
Q: Okay, hold on, Andrew. While you're doing that, can you reiterate the polarizer question from yesterday? We have some more questions about using one for situations like this.

A: I don't normally use polarizing filters indoors when I'm controlling light this way. The only time I use them is when I'm shooting outdoors in really, really high sun, bright, bright sunshine, because it's so hard to manage shadows, and it's also really hard to manage color in that scenario. Those are kind of the only times I use them; I don't necessarily use them indoors when I'm shooting studio work. We could take a shot like this too, with these, and see, but these aren't really powerful enough to be umbrella-ing this way.

Q: Could we try it with that framed glass picture?

A: Absolutely. We might have to meter differently here, considerably differently. Challenge accepted. Get the level; let's see if we can get this going.

Q: Do you want to do it without the umbrellas, just so it'll be bright enough?

A: Let's try it with them; we'll just push up the ISO. Okay, I'm not getting any flares here, it looks good. Let's get this umbrella in place. I'm going to try to get as square as possible here... oops... okay. We're going to have to increase our ISO considerably to accommodate this. I'll start at 400 and see if that helps. Okay, I'm square, though; I just want to back up. This is just for the general concept of what we're trying to accomplish here. Okay, did we see it? Is the tether working? Did it come up? There it is: glass, no flares! Whoa! So there's the answer to the question. How much did we pay her to ask that? We made that simple adjustment, used the umbrellas, softened the light a little bit, the same general setup on a highly reflective piece of glass, and it worked perfectly. So again, the physics of this is what makes it work, right?
Q: So did the umbrellas also soften the shadows?

A: I noticed that, yeah, probably. I think with those bare bulbs we were trying to get maximum power, shooting at ISO 100, but the reality is this is probably a better setup if we had a little more power. Using HMIs in this situation, I would most definitely be using umbrellas or softboxes to soften that light, and then the shadows disappear.

Q: For something on the wall, where you couldn't put the wings, is this the setup you would use?

A: I still would use it, because I've done it. You could just, watch, I mean, we have the lights, I've got one right here: if this is on the wall, you can do this, just like that, and then you have the same situation. This is probably the most easily reproducible setup we're going to do for anything in the three days that we're together, because this is that tried-and-true method that just works. When you start shooting three-dimensional artwork, that's where you have to start to play with shadows, and that's where we start talking about still life photography like we were doing yesterday. And that's where that light tent might come back into play: if you're shooting sculpture, to create that wraparound light, to make sure that you're capturing every aspect of it. You could use the same setup, but you might have to fill, and you wouldn't be flat against the wall; it would be sitting on a pedestal or a light table.

Q: Our last question is from DMS Photo: hey Andrew, let's say the artist wanted to use the images for reproductions, specifically the Ursula piece that had the textures and embellishments. What would you do?

A: I don't know, I'm not quite sure. I think the question is, they want to reprint the picture over and over again. Oh, I see. What would I do differently?
I don't think I would do anything differently, because you would have to photograph the details really carefully, like we did, and show the height of the relief; maybe I would shoot it completely sideways to show how far out that little star comes off. But if the artist was trying to do reproductions of that, I think her specs on it, or his specs on it, would be important to give to the manufacturer too. We can photograph it and show it, but scale and size are something that would need to be determined by the artist as well.

Class Description

You don't need a studio to take professional-grade product and still life photographs! All you need is a simple tabletop lighting setup. In this course, award-winning food photographer Andrew Scrivani will show you how to create and tailor your own tabletop lighting setup, on any budget. Whether you're a beginning photographer looking to master lighting or a professional photographer eager to expand your services, this course will give you a candid, comprehensive playbook for tabletop lighting.

Tabletop photography transforms a single surface into a small-scale studio. Andrew, a regular contributor to The New York Times, will show you how to create and then optimize your lighting setup for your needs, using everything from the latest gear to household items. Andrew will cover metering and bounce cards, working with strobes and soft boxes, LED lighting, and tips for shooting glassware and other tricky products. By the end of this course, you will know how to set up and adjust your very own tabletop studio, and how to use that small-scale studio to expand your services, improve your photography, and market your business.
a CreativeLive Student
I was pleased to see real-life situations and setups, their workarounds, and the little fiddly things all commercial/product photographers go through to produce a viable shot. Unlike some of the other reviews, the "oops, it didn't work, let's try this instead" was totally real-world and believable. So many times on other teaching venues, the shot is already set up and perfected before the instruction begins. It was extremely helpful to watch the processes involved in producing the correct captures. I was impressed with the humor and teaching style as well, especially given the time constraints of a classroom setting. The student setups and critiques were valuable and spot-on without being negative in any way. All in all, this was one of the best classes I've viewed at CreativeLive. I just wish I could have had three more days, and to have been there in person for the one-on-one instruction.

Ernst
Thank you Andrew. Great class. Learned a lot. Great instructor. Only wish there were more segments using flash rather than the very expensive gear. But the principles are the same.

Aly Cupcakezz
I really liked how things were experimented with. Instead of just saying "do x, y, z," it shows you how to correct issues as they come up, and how to enhance your photography. This gives you a guided idea of all the things you can play with to perfect your product photography image. You really learn how to fix image problems as they appear in front of you. A very realistic way to create your own personal lighting setup for your product photos in your own studio space. Excellent fundamentals class for new photographers or small businesses attempting to do their own product photography. Thank you!
Q: Non-Archimedean valued field extension of $\mathbb{R}$

Let $K$ be a field with non-Archimedean valuation $|\cdot|$. Suppose that $\mathbb{R}\subset K$.

Question 1: Is the restriction of $|\cdot|$ to $\mathbb{R}$ the trivial valuation?

I guess that the answer is yes, but I don't see an evident answer. When I try to organize my ideas I get other questions:

Definition: Two valuations $|\cdot|_1$ and $|\cdot|_2$ are dependent if there exists $\lambda>0$ such that $|\cdot|_1=|\cdot|_2^{\lambda}$.

According to Ostrowski's theorem, if the restriction of $|\cdot|$ to $\mathbb{Q}$ is not trivial, then it is dependent on a p-adic valuation.

Question 2: How can one prove that $|\cdot|\neq|\cdot|_p$ on $\mathbb{Q}$?

Question 3: If $|\cdot|$ is trivial on $\mathbb{Q}$, how can one prove that $|\cdot|$ is trivial on $\mathbb{R}$?

A: Question 1 is easily answered as "no" by considering $K = \Bbb{R}$, and letting $|\cdot|$ be an extension of any $|\cdot|_p$ on $\Bbb{Q}$ to a valuation on $\Bbb{R}$ (the existence of such an extension can be established using the Axiom of Choice). Since the restriction of $|\cdot|$ to $\Bbb{Q}$ is $|\cdot|_p$, it is non-trivial.

By the same reasoning (i.e. the example above), it must be impossible to provide the proof sought in Question 2. Indeed, it is clear that the restriction to $\Bbb{Q}$ of an extension of $|\cdot|_p$ on $\Bbb{Q}$ is $|\cdot|_p$ itself.
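For a concrete feel for the $|\cdot|_p$ appearing above, here is a small Python sketch (the helper names `vp` and `abs_p` are mine) that computes the $p$-adic absolute value on rationals and checks the ultrametric inequality $|x+y|_p \le \max(|x|_p, |y|_p)$ that makes it non-Archimedean:

```python
from fractions import Fraction

def vp(x, p):
    """p-adic valuation v_p(x) of a nonzero rational x."""
    num, den = x.numerator, x.denominator
    v = 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def abs_p(x, p):
    """p-adic absolute value |x|_p = p^(-v_p(x)), with |0|_p = 0."""
    if x == 0:
        return 0.0
    return float(p) ** (-vp(Fraction(x), p))

p = 5
xs = [Fraction(a, b) for a in range(-6, 7) if a != 0 for b in range(1, 7)]
# Ultrametric (non-Archimedean) inequality on a grid of rationals:
assert all(abs_p(x + y, p) <= max(abs_p(x, p), abs_p(y, p))
           for x in xs for y in xs if x + y != 0)
# |.|_p is non-trivial on Q, e.g. |5|_5 = 1/5:
assert abs_p(Fraction(5), 5) == 0.2
```

This is exactly the non-trivial restriction to $\mathbb{Q}$ that the answer's counterexample extends to $\mathbb{R}$.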
Systems and methods developed to characterize digital data (or byte) streams are known. Such systems and methods are often used to detect computer viruses, worms and the like. More specifically, intrusion detection and antivirus systems typically use "signatures" to detect specific patterns of characters or digital bytes. Hashes, checksums and other numeric calculations are frequently used to characterize digital files and byte streams, including legitimate software files and malware. These techniques are used to identify items that are identical to the source of the signature. Generally speaking, they are not intended to, or even capable of, detecting similar but non-identical items. There is, however, a known approach, as described in Todd Heberlein, Worm Detection and Prevention: Concept, Approach, and Experience, 14 Aug. 2002, NetSquared, Inc. (2002, unpublished) ("Heberlein"), that is capable of detecting similarity among selected sets of data. As explained by Heberlein, it is possible to characterize a selected portion of data using a "thumbprint." In this case, the thumbprint is represented by the result of a hash function applied to the selected portion of data. FIG. 6 shows the basic approach according to Heberlein. The original content, "The quick brown fox jumped over the lazy dog.", is sent through a hash function that generates a number. The original content consisted of 360 bits (8 bits per character times 45 characters) and the result is a single 32-bit number (a typical unsigned integer on most computers). This number can serve as a type of compact representation (i.e., the "thumbprint") of the original content. For example, suppose a document is processed by this technique. A hash number is computed for each sentence in the document, and then the computed hash numbers are stored together in a hash table.
Later, if a user provides a sample sentence and asks if that sentence is in the document, the following algorithm can be used to very quickly determine the answer. First, the hash value of the sample sentence is computed. Second, the hash table is queried to see if that number exists in the table. If it is not in the table, then the sample sentence is not in the document. Third, if there is a match, then the sentence (or sentences) in the original document that created the hash value is examined and it is determined whether it indeed matches the sample sentence. As further explained by Heberlein, traditional hash functions do not work well in certain scenarios. Specifically, most hash functions are designed to produce a completely different hash number even if the content varies by only a single byte. For example, referring again to FIG. 6, if the original sentence is only slightly modified by changing the word "dog" to "dogs," then a completely different hash number may be generated. In fact, using traditional hash functions, a review of the resulting numbers for each string would not indicate that the two sentences were similar at all. Heberlein goes on to explain that in order to diminish gross discrepancies between seemingly similar collections of data, it is possible to apply a multivariate statistical analysis technique called principal component analysis (PCA) to the selected data, and, as a result, the gross discrepancies can be significantly diminished. Despite the advances described by Heberlein, there remains a desire to provide improved systems and methods for detecting computer viruses, worms, other computer attacks and/or any other data that may repeatedly pass over a network.
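The three-step membership test and the single-byte "avalanche" behavior described above can be sketched as follows. This is an illustrative sketch only: the specific hash function used by Heberlein is not identified in the text, so SHA-256 truncated to 32 bits stands in for it, and the function names are hypothetical.

```python
import hashlib

def thumbprint(text: str) -> int:
    """32-bit 'thumbprint' of a sentence (SHA-256 truncated to 32 bits,
    standing in for the unspecified hash function in the reference)."""
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big")

# One hash number per sentence of the document, stored in a hash table.
document = [
    "The quick brown fox jumped over the lazy dog.",
    "Pack my box with five dozen liquor jugs.",
]
table = {thumbprint(s): s for s in document}

def contains(sample: str) -> bool:
    """The three-step test described in the text."""
    h = thumbprint(sample)        # 1. hash the sample sentence
    if h not in table:            # 2. query the hash table
        return False
    return table[h] == sample     # 3. confirm it is a true match

assert contains("The quick brown fox jumped over the lazy dog.")
# A single-byte change ("dog" -> "dogs") defeats the lookup entirely:
assert not contains("The quick brown fox jumped over the lazy dogs.")
```

Note that step 3 makes the test robust even if two different sentences happened to collide on the same 32-bit thumbprint; the final string comparison rejects the false positive.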
<?php /* * This file is part of the symfony package. * (c) 2004-2006 Fabien Potencier <fabien.potencier@symfony-project.com> * (c) 2004-2006 Sean Kerr <sean@code-box.org> * * For the full copyright and license information, please view the LICENSE * file that was distributed with this source code. */ /** * sfSecurityUser interface provides advanced security manipulation methods. * * @package symfony * @subpackage user * @author Fabien Potencier <fabien.potencier@symfony-project.com> * @author Sean Kerr <sean@code-box.org> * @version SVN: $Id: sfSecurityUser.class.php 23810 2009-11-12 11:07:44Z Kris.Wallsmith $ */ interface sfSecurityUser { /** * Add a credential to this user. * * @param mixed $credential Credential data. */ public function addCredential($credential); /** * Clear all credentials associated with this user. */ public function clearCredentials(); /** * Indicates whether or not this user has a credential. * * @param mixed $credential Credential data. * * @return bool true, if this user has the credential, otherwise false. */ public function hasCredential($credential); /** * Indicates whether or not this user is authenticated. * * @return bool true, if this user is authenticated, otherwise false. */ public function isAuthenticated(); /** * Remove a credential from this user. * * @param mixed $credential Credential data. */ public function removeCredential($credential); /** * Set the authenticated status of this user. * * @param bool $authenticated A flag indicating the authenticated status of this user. */ public function setAuthenticated($authenticated); }
This invention relates to an air seal on the hub of a large axial flow fan. The air seal covers the annulus between the hub and inner ends of fan blades. Large industrial axial flow fans having diameters ranging from about one to ten meters or more are commonly used for moving air through cooling towers, heat exchangers and the like. A typical fan in such an application may have a diameter of about five meters and anywhere from eight to eighteen airfoil-shaped blades coupled to a rotatable hub. An exemplary mounting arrangement for the blades on large fans has a hub which fits on the drive shaft and a number of radially extending hub struts to which the blades are somewhat flexibly connected. The connection permits the blades to have limited motion in the axial direction, adjustment for pitch, and adjustment for radial length. The latter is important since the gap between the tip of the blades and the surrounding shroud should be small so that air “leakage” between the tips and shroud is relatively small. Air that may flow from the higher pressure downstream face of the fan to the lower pressure upstream face represents a loss of efficiency. A gap is, of course, important so that the ends of the blades do not collide with the shroud. Radial adjustment of the effective length of the blades allows the installer to have a small and uniform gap. Air “leakage” at the inner ends of the blades should also be limited to promote fan efficiency. For smaller fans and those with fixed blades, a circular sheet of metal overlying the hub and covering any annulus between the hub and inner ends of the blades can form an effective air seal. For larger fans, and particularly for those with adjustable blades, a polygonal air seal closer to the inner ends of the blades is desirable. 
Furthermore, in addition to a flat sheet spanning the annulus, it may be desirable to have some axial extent of the air seal to minimize leakage around the downstream portions of the inner ends of the blades. In effect, the air seal is a shroud at the inner ends of the blades that rotates with the blades. When the diameter of the air seal at the hub of the fan becomes large, there can be problems in forming the air seal from a simple circular or polygonal sheet of metal. A structure for making increasingly large air seals is therefore desirable.
Intranasal treatment of vitamin B12 deficiency in children. Vitamin B12 deficiency is traditionally treated with intramuscular injections of cobalamin, which are stressful events for children. In adults, studies have shown adequate absorption of intranasally administered vitamin B12. To date, data concerning efficacy of intranasal administration of vitamin B12 in children are lacking. We report on ten cases of children with vitamin B12 deficiency who were successfully treated with intranasal administration of a spray containing hydroxocobalamin. The mean baseline vitamin B12 concentration increased from 126.3 pmol/l (SD 55.4) to 1914.7 pmol/l (SD 1509.7). No side effects were reported.

Conclusion: In children, intranasal application of vitamin B12 seems a safe and effective alternative to intramuscular injections, leading to higher compliance and less burden to patients.

What is Known:
• Children with vitamin B12 deficiency are traditionally treated with intramuscular cobalamin injections, which are costly and painful.
• Studies in adults showed that intranasal application of hydroxocobalamin leads to normalisation of vitamin B12 levels.

What is New:
• The intranasal application of vitamin B12 resulted in a substantial increase of the mean baseline vitamin B12 levels without any side effect.
• These data encourage a systematic evaluation of intranasal treatment of vitamin B12 deficiency in order to define safety, optimal dosage and administration frequency.
§5-401 VIOLATION; PENALTY. Any person who shall violate or refuse to comply with the enforcement of any of the provisions of this Chapter, whether set forth at full length herein or incorporated by reference, shall be deemed guilty of an offense and, upon conviction thereof, shall be fined not more than one hundred dollars ($100.00) for each offense. A new violation shall be deemed to have been committed every twenty-four (24) hours of such failure to comply.
Saturday, November 17, 2007 There's more video of the RCMP Taser Killing of Robert Dziekanski, still in the hands of the investigators (RCMP, I suppose), which has not gone public. [Larry] Berg [, president and CEO of the Vancouver Airport Authority,] said 14 security cameras monitor the area, and the footage from those cameras has been turned over to investigators. The RCMP has probably been studying these tapes like crazy, to see how they can spin their own story in order to get away with manslaughter, but this time it might be harder to do so than ever. The release of Paul Pritchard's video has given the whole world a good view of what went wrong, and most people agree that the actions of the four policemen and their superiors were sub-standard (to say the least). Vancouver Police recently released original surveillance footage of a 2005 Hells Angels incident in downtown Vancouver. Expect some delay, but demand from authorities that the surveillance footage of the RCMP Taser Killing WILL be shown too. Fair is fair. 4 comments: While I agree with the comment you left on my blog that serious neglect was shown on the part of the RCMP in their use of tasers, my discussion was not trying to shift the blame from the police's mistakes. My point was that the Vancouver airport staff's negligence/incompetence created an unacceptable situation. Whether the story ended in his death, a stint in the disturbed ward of the hospital, or a reunion with his mother, the airport's neglect of its service duties drove a man to become erratic. Had he not died, this story never would have even made it to the news, yet this type of neglect for people we should be welcoming into our country is too commonplace. I agree that this killing could have been prevented by the staff: there's obviously a "problem" at the Vancouver airport that needs to be addressed, because it wouldn't have been necessary for security to call on police. We know now that calling on police is not without danger. 
The RCMP's only way to solve such a situation seems to be to taser someone into compliance first, ask questions later. The next time I see a similar situation, the RCMP is the last one I will call. It's sad to see we cannot expect more humane and compassionate assistance from our national police force. The taser seems to have replaced good policing. Here is a Globe and Mail piece by Norman Spector about the Vancouver Airport incident which you might find interesting. Norman Spector has the ability to keep an open mind and change his position upon learning more of the facts....do you? Shame on me for jumping to conclusions THE DEATH OF ROBERT DZIEKANSKI NORMAN SPECTOR December 3, 2007 Two days before Paul Pritchard's recording of the tragic end of Robert Dziekanski's life was released to the public, I spent the evening with Mr. Pritchard and a CBC crew that was preparing a report for The National. My role was to view the 10-minute recording and, as the CBC camera rolled, to comment on what I was seeing. Then, reporter Darrow McIntyre - who was seated beside me throughout - asked questions for about half an hour, before he and his colleagues hastily packed up and made a dash for the last ferry to Vancouver. Late Wednesday afternoon, shortly before the segment was to hit the airwaves in the Maritimes, the CBC producer called to inform me that time constraints had required them to delete my three-minute segment of their report. I was not upset, though I didn't relish having to explain my disappearance to friends and acquaintances who had seen me in the program promo the evening before. In retrospect, however, this is one time that I'm actually thankful to have ended up on the cutting-room floor. Viewing the recording that evening, my reaction was pretty much the same as that of most Canadians who have seen it subsequently. Mr. Dziekanski did not appear aggressive or to be a threat to any of the bystanders. 
Rather, he seemed to be cowering in fear, and, as I also observed on camera, Canadians don't treat animals the way he was treated. We're a wealthy country, and the destruction of a computer, while regrettable, is certainly not worth a man's life. Over all, then, I was left with a mixture of sadness and disgust at what had transpired at the Vancouver airport. As the crew was departing, the CBC producer and I agreed that the RCMP's admonition to place the recording in context sounded silly, as it had not been edited and there could only be one interpretation of what I had just seen. I now believe I should have been more cautious in my evaluation. Mr. Pritchard, after some heroic efforts on his part, had only that day recovered a DVD of his recording from the RCMP, and it would not play on my home unit. It did, however, play on my laptop computer, which is how I learned that he had not used a video camera to record the events at Vancouver International Airport. The limited memory of his still camera explains why we see only about 10 minutes of what Mr. Pritchard had witnessed over six hours. With this knowledge, I should have picked up on Mr. Pritchard's comment that the situation had been a lot scarier as it was unfolding. And, I might also have summoned the courage to ask why he had not tried to help, as one woman had - a question that even professional journalists are often asked by the public in similar circumstances. Instead, in response to several versions of Mr. McIntyre's question regarding RCMP actions, I kept repeating that the four good-sized officers must not have wanted to get their hair messed or their noses scratched - the only explanation I could think of for their decision to use the taser in the circumstances shown on the recording. It was two weeks later that I learned from a caller to the Bill Good Show that Mr. Dziekanski was 6 feet 9 inches tall, which had not been widely reported. 
In these circumstances, I would ask myself what the airport security people told the RCMP when they called for assistance, and what was in the minds of the officers who sped to the airport to deal with the situation. It must not be an easy time to be a Mountie these days; several RCMP officers have fallen in the line of duty recently, while the organization itself has been going through a bad patch and its reputation has deservedly taken a beating over the past several years. Still, RCMP officers are entitled to the same legal protections that we afford in British Columbia to, say, a man like Robert Pickton. Frankly, I'm ashamed of myself for having rushed to judgment of the officers involved in the death of Mr. Dziekanski, before the requisite investigations hear from them and from all the bystanders who witnessed the full events that evening along with Mr. Pritchard. Norman Spector did write his initial story with an "open mind". Unfortunately he fell back into his traditional neo-con law-and-order mode; and that conservative mode is as far from an open-mind mode as one can get.
Spinforth’s Weekly SoundCloud Scour 73 Easy all, and welcome to..a much much less epic Scour than of late, due to a couple of temporary technical issues with my Scour Power..Spinforth’s Weekly(ish) SoundCloud Scour #73. I say technical issues..strictly speaking falling asleep on the job for several hours yesterday evening is probably not officially ‘technical’. Just been having one of those weeks to be fair..nothing quite gone according to plan..so, the sooner it’s my Friday the better, and the sooner I get to sleep tonight the sooner my Friday come! What’s going on Friday? I hear the few of you who bother to read this bit ask. Ahh..glad you asked..well, it’s Hong Kong Ping Pong’s last Watermans residency gig of the year..which we dub our Hong Kong Ping Pong Christmas Party! I can pretty much promise it won’t actually be even the slightest bit Christmassy for you, we however will be doing our best to make it extraspecially full of Tuaca induced party! No guests this month, just us 3 of HKPP getting pissed whilst spinning a rad tune or two for you. If you’re out n about in Fal Town tomorrow night..come deliver us another cracker pleeeeease! Ears warmed up? Let’s bring some quality hip hop. Second Scour on the trot for Greece’s Funkanizer. He’s hit his 1000th SoundCloud follower since last Scour, and to celebrate he’s opened up this tasty treat for us all. Big UP Mike..1000 followers well deserved! Here’s to a speedier 2000..there’s no doubt you and your work fully deserve to be followed en masse bro! ALTERNATIVE DOWNLOAD AVAILABLE IN TRACK DESCRIPTION.95bpm ▼ Surprising to find this latest gem from MR Fresh with plenty of room for loads more plays, comments, favourites and downloads. It’s been up for 10 days already, those downloads should be approaching 1000 by now! To be fair even this one caught me by surprise, I know of another one that will hopefully be arriving exclusively via Scour soon..a pleasant surprise tho as always no doubt. 
97bpm ▼ SCOUR #73 EXCLUSIVE #1: Talking of exclusives..cue DJ Roast Beatz with the first (of two) of this week’s two! Top top quality remix of Tha Liks’ ‘Mary Jane’ right here..go show it some word spreading comment love please! And then grab your >>SCOUR EXCLUSIVE DOWNLOAD LINK RIGHT HERE<<100bpm ▼ I’ve had a few private words, and have been left assured that the alternative download link for this monster from Minoru is arriving very soon! God I hope that’s true..I don’t even have a copy for myself yet, and this has ‘play me at Hong Kong Ping Pong Christmas party tomorrow night’ written all over it!! ALTERNATIVE DOWNLOAD NOW AVAILABLE IN TRACK DESCRIPTION.100bpm ▼ SCOUR #73 EXCLUSIVE #2: Been road testing this next re-rub from Canada’s up-and-coming B-Mid for a little while. It’s a beaut..and so is he for deciding to open it up exclusively for The Scour! My advice is to get involved with this Cloud folks..that’s 3 solid tracks Mr Mid has delivered to our ears now..who knows what he’s got brewing next!? Yeah, no worries..I’ll try and find out! 100bpm ▼ Hang about..is this really a Welcome to The Scour for, currently living in the UK, originally from Adelaide, Australia, DJ Dylan Sanders!? Apparently somehow so! Nice tune bud, excuse the pre-Scour #73 neglect..Beastie Boys edits/remixes/re-rubs often bore my ears, definitely not the case with this one though! 105bpm ▼ And here’s another freshly Scoured head! This one of the much lesser known variety. Introducing Lithuania’s Stereobeaver (..great name!) to your ears n crates. Welcome dude..this is good, what else you got!? ALTERNATIVE DOWNLOAD AVAILABLE IN TRACK DESCRIPTION.110bpm ▼ Quality party break business from Poland’s DJ Black Belt Greg next. Yet another ‘Welcome to The Scour’er’er’ I believe! ALTERNATIVE DOWNLOAD NOW AVAILABLE IN TRACK DESCRIPTION112bpm ▼ Norway’s Blowshitup is doing exactly that for his second Scour in a row! Deserves some word spreading comment love please. 
ALTERNATIVE DOWNLOAD LINK VIA BUY THIS TRACK BUTTON.112bpm ▼ He made it back home! Bit of a shame..we were hoping to Bobby-nap him! On the plus side..he’s back churning out his laser funkin free fiiire! Was an absolute pleasure to meet and party with you bud..you’d better be heading back our way next year please. Until then..get back to work! DOWNLOAD BUTTON AVAILABLE VIA SOUNDCLOUD AND/OR >>HERE<<.112bpm ▼ Funkfreak‘s original blend of this supported via Scour #70 was already pretty dope. This version’s gotta be on a par if not even better! Big UP dude..I think a third version may be pushing it though..time to move onward and upward! ALTERNATIVE DOWNLOAD AVAILABLE VIA BUY THIS TRACK BUTTON.113bpm ▼ Last up, pre-curveball..gives me great great pleasure to welcome back to The Scour the incredible Nynfus Corporation! Just one Scour short of 50 since I first Scoured them waaaay back in Scour #24 times. Been way too long dudes, loving the ‘Nynfus vibe’ running through this one..Welcome ‘home’! Look forward to welcoming you over to a forthcoming Scour Records release real soon too please! DOWNLOAD BUTTON AVAILABLE VIA SOUNDCLOUD AND/OR >>HERE<<.116bpm ▼ Aaaaand finally..this week’s curveball! This one going out to all the Americans in the world with some common sense! The rest of the world thanks you. Here’s Chuck Wild’s (aka Captain Planet) fitting tribute to the re-elected Funky President. 109bpm ▼
Q: How to commit text, annotations etc. to/from groovy-script from/to other GATE-plugin? I want to create a GATE pipeline like this:

... -> Plugin no.1 -> Groovy-Script -> Plugin no.2 -> ...

As a GATE beginner, I don't know how I can receive the document text and its annotations from plugin no.1 to read them into my Groovy script. Then I want to edit the given document text and/or set some more annotations with my Groovy script - how can I commit this to the next plugin in the pipeline?

Edit: OK, now I see the question above isn't my problem. My script starts like this:

public class MainApp {
    public static void main(String[] args) throws IOException {
        Gate.init();
        System.out.println(doc.getContent());
    }
}

But when I try to load the script into GATE, I get the "Script compilation failed" error. I don't get it, because this script

public class MainApp {
    public static void main(String[] args) throws IOException {
        System.out.println("hello");
    }
}

and this script

Gate.init();
System.out.println(doc.getContent());

both work. (I hadn't tested the last one until now; that's why I thought I was making a wrong call.)

A: As explained in the Groovy Script PR documentation, there are a number of pre-defined variables available within a script that is run by the script PR:

doc is the Document currently being processed
inputAS is the AnnotationSet from that document corresponding to the inputASName runtime parameter
outputAS is the AnnotationSet from that document corresponding to the outputASName runtime parameter

You can read the document content via doc.getContent() and modify it using doc.edit, read annotations from previous PRs from the inputAS and create annotations for subsequent PRs in the outputAS.

Edit: I think you're mis-understanding what the Script PR expects - you should not add a class body, just a script, i.e. the script file should contain just the code that would be inside a method body without the surrounding class and method declarations. 
And you should definitely not call Gate.init() in the script - your script will be called by GATE, once per document. The single line: println doc.getContent() on its own would be a valid script for the PR, and would display the text content of each document in the Messages pane.
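As a minimal sketch of such a script (a hedged illustration, not code from the question: the annotation type "Token", the output type "ScriptSpan" and the feature name "source" are assumed here, e.g. with a tokenizer having run as Plugin no.1), the body of a Groovy Script PR could look like this:

```groovy
// Runs once per document inside GATE's Groovy Script PR.
// No class/method wrapper and no Gate.init() - GATE provides the
// pre-defined bindings 'doc', 'inputAS' and 'outputAS'.

// Read the document text
def text = doc.getContent().toString()
println "Processing ${doc.getName()} (${text.length()} chars)"

// Inspect annotations produced by an earlier PR (assumed tokenizer)
inputAS.get("Token").each { ann ->
    def start = ann.getStartNode().getOffset()
    def end   = ann.getEndNode().getOffset()
    println "Token '${gate.Utils.stringFor(doc, ann)}' [${start}, ${end}]"
}

// Create an annotation that the next PR in the pipeline can consume
def fm = gate.Factory.newFeatureMap()
fm.put("source", "groovy-script")
outputAS.add(0L, (long) text.length(), "ScriptSpan", fm)
```

Annotations written to outputAS become visible to Plugin no.2 like those of any other PR, provided its input annotation set name matches the Script PR's outputASName runtime parameter.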
Have you participated in a mentorship program before? If so, tell us about it.
Tell us a little about yourself: Don’t forget to mention the academic and life experiences that have helped you develop important skills and contributed to your success. We are also interested in hearing about your hobbies and interests as this will help us with the matching process.
What are some of the things that you can see a Mentor helping you with? Examples: choosing the right programs at school, apartment hunting, having difficult conversations etc.
Please tell us about the specific skills and experiences you would like to share with youth who are preparing to transition to independence.
Tell us about your academic journey and goals.
Tell us about your career goals.
What do you foresee as potential challenges of being in a mentoring relationship?
What motivates you to become a mentor?
*Are you willing to meet our program requirement of: 1. Meeting in person at least once a month; 2. Connecting virtually (phone, text, email etc.) about once a week? (Yes / No)
Please tell us about your previous volunteer experience. Don’t forget to tell us also if you have experience volunteering with young people.
Tell us about aspects of your personality (choose as many as you like): Quiet, Outgoing, Confident, Shy, Talkative, Optimistic, Nurturing / Supportive, Friendly, Adventurous, Sensitive, Inquisitive, Creative, Inspirational, Energized, Other.
What is your Social Worker's name and contact information?
Is there anything else you would like to share with us that will help provide the best possible match?
--- abstract: 'Taking the collisionless damping of the geodesic acoustic mode (GAM) as an example, the physics processes underlying wave-particle resonances in the short wavelength limit are clarified. As an illustrative application, GAM excitation by energetic particles in the short wavelength limit is investigated assuming a single pitch angle slowing-down fast ion equilibrium distribution function. Conditions for this energetic particle-induced GAM (EGAM) to be unstable are discussed.' author: - 'Liu Chen$^{1, 2}$, Zhiyong Qiu$^{1}$ and Fulvio Zonca$^{3, 1}$' title: Short Wavelength Geodesic Acoustic Mode Excitation by Energetic Particles --- Recently, due to geodesic acoustic mode (GAM) [@NWinsorPoF1968] excitation by energetic particles (EPs) in the large drift orbit limit [@MSasakiPoP2016], there has been renewed interest in wave-particle resonances at short wavelength, which were first investigated in Ref. for the collisionless damping of GAM, and later presented with a detailed derivation and physics interpretation [@ZQiuPPCF2009]. The same approach was also applied to study quasi-linear transport of EPs by drift wave turbulence [@WZhangPRL2008; @ZFengPoP2013]. However, the understanding of the underlying physics processes proposed in recent literature, e.g. [@MSasakiPoP2016], may lead to some misinterpretation and inconsistency with the existing theoretical framework. In this brief communication, our aim is to clarify the underlying physics processes of wave-particle resonance in the short wavelength limit and, as an illustrative application, investigate EP-induced GAM (EGAM) [@RNazikianPRL2008; @GFuPRL2008; @ZQiuPPCF2010] excitation by fast ions with large magnetic drift orbits. To discuss the physics picture of wave-particle resonance in the short wavelength limit, we take the electrostatic GAM collisionless damping originally discussed in [@FZoncaEPL2008; @ZQiuPPCF2009] as an example. 
For clarity of discussion, we assume small but finite electron temperature, i.e., $\tau\equiv T_e/T_i\ll1$ such that $|\widetilde{\delta\phi}_G/\overline{\delta\phi}_G|\sim \tau k_r\rho_{ti}\ll1$ while one still has $|\omega_{tr,e}|\gg|\omega_G|$. Here, $\widetilde{\delta\phi}_G$ and $\overline{\delta\phi}_G$ are respectively the $m\neq0$ and $m=0$ components of the perturbed scalar potential; $\omega_{tr}\equiv v_{\parallel}/(qR_0)$ is the transit frequency, $k_r$ is the radial wavenumber and $\rho_{ti}$ is the ion Larmor radius at thermal velocity. In this limit, consistent with the short wavelength assumption of interest here, the perturbed electron response (distribution function) to GAM is $\delta f_e=0$, and the GAM dispersion relation can be derived from the quasi-neutrality condition: $$\begin{aligned} \sum_{s}\langle \delta f_s\rangle=0.\label{eq:QN}\end{aligned}$$ Here, $\langle\cdots\rangle$ denotes velocity space integration, subscript $s$ denotes different ion species and, thus, equation (\[eq:QN\]) can also be applied to study EGAM excitation by EPs. $\delta f_s$ can be expressed as $\delta f_s=e\partial_EF_0\delta\phi/m+\exp[i(m_ic)/(eB^2)\mathbf{k}\times\mathbf{B}\cdot\mathbf{v}]\delta H$, and the nonadiabatic response can be derived from the following linear gyrokinetic equation [@PRutherfordPoF1968; @JTaylorPP1968]: $$\begin{aligned} \left(-i\omega+\omega_{tr}\partial_{\theta}+i\omega_d\right)\delta H_k=i\omega(e/m_i)\partial_E F_0 J_k\overline{\delta\phi}_G,\label{eq:LinearGKE}\end{aligned}$$ with $\omega_d=\hat{\omega}_d\sin\theta=k_r\rho_{ti}v_{ti}(v^2_{\perp}/2+v^2_{\parallel})/(v^2_{ti}R_0)\sin\theta$ being the magnetic drift frequency due to geodesic curvature, $J_k\equiv J_0(k_r\rho_{ti})$ with $J_0$ being the zero-order Bessel function accounting for finite Larmor radius effects, $v_{ti}\equiv \sqrt{2T_i/m_i}$ being the ion thermal velocity, $E=v^2/2$, and other notations are standard. 
Noting that $\omega_G\simeq v_{ti}/R_0\sim q\omega_{tr,i}\gg\omega_{b,i}\simeq \sqrt{\epsilon}\omega_{tr,i}$, and assuming well-circulating particles in the large aspect ratio limit, equation (\[eq:LinearGKE\]) can be solved and yields, for $v_{\parallel}>0$, $$\begin{aligned} \delta H_s=\frac{\omega}{\omega_{tr}}\hat{S}e^{-\psi(\theta)}\int^{\theta}_{-\infty} e^{\psi(\theta')}d\theta'.\label{eq:deltaH_general}\end{aligned}$$ Here, $\hat{S}\equiv -i(e/m_i)\partial_E F_0J_G\overline{\delta\phi}_G$, $\psi(\theta)\equiv -i(\omega\theta+\hat{\omega}_d\cos\theta)/\omega_{tr}$, $\omega_b$ is the bounce frequency of trapped particles, and $\epsilon\equiv r/R_0$ is the inverse aspect ratio. A similar expression can also be obtained for $v_{\parallel}<0$. Noting that $$\begin{aligned} e^{i\hat{\Lambda}\cos\theta}=\sum_l i^lJ_l(\hat{\Lambda})e^{il\theta},\nonumber\end{aligned}$$ the integration in $\theta'$ in equation (\[eq:deltaH\_general\]) can be carried out by transforming into transit harmonics, and one obtains $$\begin{aligned} \delta H_s=i\omega\hat{S}\sum_p i^p J_p(\hat{\Lambda})e^{ip\theta}\sum_l \frac{(-i)^lJ_l(\hat{\Lambda})e^{il\theta}}{\omega-l\omega_{tr}}.\label{eq:Hi_harmonic}\end{aligned}$$ Here, $\hat{\Lambda}\equiv \hat{\omega}_d/\omega_{tr}$ and $\exp{(-i\hat{\Lambda}\cos\theta)}$ is the “pullback" (coordinate transformation) from drift orbit center to particle guiding center coordinates. The resonance condition is $\omega-l\omega_{tr}=0$, with $l$ an integer and resonant particles satisfying $|v_{\parallel,res}/v_{ti}|\sim O(q/l)$ due to the GAM/EGAM frequency ordering. The subscript “res" denotes resonant particles. Furthermore, the “population" of particles for each transit resonance is proportional to $J^2_l(\hat{\Lambda})\partial_E F_0|_{v_{\parallel,res}}$. 
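The Jacobi–Anger identity invoked above, $e^{i\hat{\Lambda}\cos\theta}=\sum_l i^l J_l(\hat{\Lambda})e^{il\theta}$, underlies the whole transit-harmonic decomposition; as a quick numerical sanity check (the test values of $\hat{\Lambda}$ and $\theta$ below are arbitrary), a truncated sum can be compared against the closed form:

```python
import numpy as np
from scipy.special import jv  # Bessel function J_l of (possibly negative) order l

def jacobi_anger(lam, theta, lmax=60):
    """Partial sum of e^{i*lam*cos(theta)} = sum_l i^l J_l(lam) e^{i*l*theta}.

    The series converges rapidly once |l| exceeds lam, so a modest
    truncation lmax suffices for moderate lam.
    """
    l = np.arange(-lmax, lmax + 1)
    # i^l written as exp(i*pi*l/2) to avoid branch issues for negative l
    return np.sum(np.exp(1j * np.pi * l / 2) * jv(l, lam) * np.exp(1j * l * theta))

lam, theta = 3.7, 0.9                      # arbitrary test point
direct = np.exp(1j * lam * np.cos(theta))  # left-hand side
series = jacobi_anger(lam, theta)          # truncated right-hand side
```

The rapid decay of $J_l(\hat{\Lambda})$ for $|l|\gtrsim\hat{\Lambda}$ is exactly what allows the truncation at finite $l$ in the small drift orbit limit discussed below.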
Noting that $\hat{\Lambda}_{res}\sim k_r\rho_i q^2/l$ and the properties of Bessel functions, one can truncate the summation in equation (\[eq:Hi\_harmonic\]) at finite $l$ [@FHintonPPCF1999; @HSugamaJPP2006] in the small drift orbit limit with $k_r\rho_i q^2\ll1$. GAM collisionless damping due to the primary transit resonance ($|\omega|=|\omega_{tr}|$) only was investigated in Ref. . It was shown by Sugama et al [@HSugamaJPP2006] that, for increasing $k_r\rho_{ti}q^2$, GAM collisionless damping can be significantly enhanced by the increasing weight of higher order transit resonances due to the finite orbit width effect, and the analytical expression including the $|\omega|=2|\omega_{tr}|$ resonance was derived. By further increasing $\hat{\Lambda}_{res}$ due to larger $k_r$ or $q$, however, more and more transit resonances are needed for an accurate description of GAM collisionless damping [@XXuPRL2008], and the analytical expression is very difficult to obtain due to the non-trivial task of summing up all the transit resonances. An alternative approach was developed in Ref. [@FZoncaEPL2008] to derive the analytical expression of the GAM collisionless damping rate in the short wavelength limit ($k_r\rho_iq^2\gg1$), with all the transit resonances taken into account. Here, we will first show that the perturbed distribution function for resonant particles derived in Refs. [@FZoncaEPL2008; @ZQiuPPCF2009] is equivalent to the general solution of equations (\[eq:deltaH\_general\]) or (\[eq:Hi\_harmonic\]) in the proper limit, and then briefly summarize the main idea of this approach [@FZoncaEPL2008]; interested readers may refer to Ref. [@ZQiuPPCF2009] for the detailed derivation. 
In the large orbit limit, equation (\[eq:deltaH\_general\]) can be expanded using the smallness parameter $1/\dot{\psi}$, with $|\dot{\psi}|\sim |\hat{\omega}_d/\omega_{tr}|\gg1$ in the large orbit limit and having denoted the derivative of $\psi(\theta)$ with respect to $\theta$ as $\dot \psi$ for brevity. Noting that $$\begin{aligned} \int^{\theta}_{-\infty} e^{\psi(\theta')}d\theta'&=&\frac{e^{\psi}}{\dot{\psi}}-\frac{e^{\psi}}{\dot{\psi}}\frac{\partial}{\partial\theta}\frac{1}{\dot{\psi}}+\frac{e^{\psi}}{\dot{\psi}}\frac{\partial}{\partial\theta}\left(\frac{1}{\dot{\psi}}\frac{\partial}{\partial\theta}\frac{1}{\dot{\psi}}\right)\nonumber\\ &-& \int^{\theta}_{-\infty}e^{\psi(\theta')}\frac{\partial}{\partial\theta'}\left(\frac{1}{\dot{\psi}}\frac{\partial}{\partial\theta'}\left(\frac{1}{\dot{\psi}}\frac{\partial}{\partial\theta'}\frac{1}{\dot{\psi}}\right)\right)d\theta',\nonumber\end{aligned}$$ one then has $$\begin{aligned} \delta H_s&=&\frac{\omega}{\omega_{tr}}\hat{S}\left[\frac{1}{\dot{\psi}} -\frac{1}{2}\frac{\partial}{\partial\theta}\left(\frac{1}{\dot{\psi}}\right)^2+\frac{1}{2\dot{\psi}}\frac{\partial^2}{\partial \theta^2}\left(\frac{1}{\dot{\psi}}\right)^2\right.\nonumber\\ &&\hspace{11em}\left.+ O(\dot{\psi}^{(-4)})\right].\label{eq:deltaH_asym}\end{aligned}$$ Noting that $\dot{\psi}=-i (\omega-\hat{\omega}_d\sin\theta)/\omega_{tr}$, the three terms in the square bracket of equation (\[eq:deltaH\_asym\]) correspond, respectively, to $\delta H^{(0)}_{res}$, $\delta H^{(1)}_{res}$ and $\delta H^{(2)}_{res}$ in equations (16), (21) and (23) of Ref. [@ZQiuPPCF2009], in the $T_e/T_i\ll1$ limit assumed here. Thus, the $\delta H_{res}$’s in Ref. [@ZQiuPPCF2009] are equivalent to the general solution of equation (\[eq:Hi\_harmonic\]) by summing up all the transit harmonics, and the underlying wave-particle interactions in the short wavelength limit are indeed through transit resonances, as pointed out in Ref. [@ZQiuPPCF2009]. 
The first term in equation (\[eq:deltaH\_asym\]) corresponds to the perturbed resonant particle distribution function in the $q\rightarrow\infty$ limit; the third term gives the $O(1/q^2)$ corrections, while the second term vanishes in the surface average. Since we are interested in the collisionless damping due to the thermal ion contribution, a single thermal ion species with a Maxwellian distribution function can be assumed, and the GAM dielectric function is derived from the surface averaged quasi-neutrality condition $$\begin{aligned} D_G\equiv \left.\left\langle -\frac{e}{T_i}F_0\overline{\delta\phi}_G+J_G\overline{\delta H_i} \right\rangle\right/\left(\frac{e}{T_i}n_0\overline{\delta\phi}_G\right).\nonumber\end{aligned}$$ The imaginary part of $D_G$ due to the resonant particle contribution, to the leading order, is then $$\begin{aligned} D^{(0)}_i=\mathbb{I}{\rm m}\left\langle \frac{F_0}{n_0}J^2_G\omega\int \frac{d\theta}{2\pi}\frac{1}{\omega-\omega_d}\right\rangle.\label{eq:GAM_Di}\end{aligned}$$ We note that, even though in equation (\[eq:GAM\_Di\]) the anti-Hermitian part comes from the imaginary part of $1/(\omega-\omega_d)$, the underlying interaction is not a “drift resonance" [@MSasakiPoP2016], since $\omega_d\propto\sin\theta$ is temporally fast varying and the effective energy exchange is due to transit resonances, as shown in equation (\[eq:Hi\_harmonic\]). The surface average is then carried out by expanding $\omega_d$ around $\theta=\pm \pi/2$, where $|\omega_d|$ is maximized, and the integration in $\theta$ is performed by the method of steepest descent. Again, readers interested in the details of the algebra can consult Ref. . 
Here, we will briefly summarize the main ideas underlying the derivation: - considering the wave-particle interaction on the time scale of $|\omega_d|^{-1}$, which is much shorter than the transit time $|\omega_{tr}|^{-1}$ in the large orbit limit, corresponds to the inclusion of a broad spectrum in frequency, i.e., all the transit harmonics are taken into account; - for resonant particles, the dominant energy exchange with GAM is captured by noting that the wave-particle energy exchange is caused by the acceleration in the radial direction associated with the radial magnetic drift, i.e., $\dot{E}=(e/m)\mathbf{V}_d\cdot\delta\mathbf{E}_r$, which maximizes around $|\theta|=\pi/2$. Here, $V_d\equiv (v^2_{\perp}/2+v^2_{\parallel})\sin\theta \mathbf{e}_r/(\Omega_i R_0)$ is the radial component of the magnetic drift velocity. - Noting again that $\omega_d\propto\sin\theta$ is maximized around $|\theta|=\pi/2$, ions with lower energy, and thus proportionally (exponentially, for a Maxwellian distribution with typical parameters) larger population, will contribute to the resonance. As a further application, EGAM excitation by EPs in the large magnetic drift orbit limit will be investigated, which is part of the motivation of this communication. To focus on the wave-particle resonance in the short wavelength limit considering the effect of finite magnetic drift orbit averaging, we take $T_e/T_i\ll1$ and further neglect the finite Larmor radius effect of EPs. 
Thus, the leading order EP response to GAM can be derived as $$\begin{aligned} \delta H_h=-\frac{e}{m}\partial_E F_{0h}\overline{\delta\phi}_G\frac{\omega}{\omega-\omega_d},\nonumber\end{aligned}$$ and the linear dispersion relation of EGAM can be obtained from the quasi-neutrality condition $$\begin{aligned} \hat{\mathscr{E}}_{EGAM}\equiv\left.\left(\overline{\delta n_i}+\overline{\delta n_h}\right)\right/\left(en_0\overline{\delta\phi}_G/T_i\right).\nonumber\end{aligned}$$ As the expression of thermal ion density perturbation can be found in Ref. , we will focus on the EP density perturbation, $$\begin{aligned} \overline{\delta n_h} &=& -\frac{e}{m}B_0\sum_{\sigma=\pm1}\int \frac{E dE d\Lambda}{|v_{\parallel}|}\int d\theta \frac{\partial F_{0h}}{\partial E}\overline{\delta\phi}_G\frac{\omega_d}{\omega-\omega_d}.\nonumber\end{aligned}$$ Here, $\Lambda = \mu/E$ is the usual definition of the particle pitch angle in velocity space, with $\mu=v_\perp^2/(2B)$ the magnetic moment. Noting that $\omega_d=\hat{\omega}_d\sin\theta$ maximizes at $\theta\simeq \pi/2$, the contribution around $\theta\simeq \pm\pi/2$ dominates where wave-particle power exchange maximizes. 
Taking $x=\theta-\mbox{sign}(\theta) \pi/2$, one then has $$\begin{aligned} \int d\theta\frac{\omega_d}{\omega-\omega_d}&=&-2\pi+\omega\int^{\infty}_{-\infty} dx \frac{1}{\omega-\hat{\omega}_d(1-x^2/2)}\nonumber\\ &=&-2\pi\frac{i}{\sqrt{(2\hat{\omega}_d/\omega)(\hat{\omega}_d/\omega-1)}}.\label{eq:surface_averaged}\end{aligned}$$ In equation (\[eq:surface\_averaged\]), the contribution of non-resonant adiabatic particle response is neglected, and the perturbed EP density is then $$\begin{aligned} \overline{\delta n_h}&=&2\pi i B_0\frac{e}{m}\overline{\delta\phi}_G\nonumber\\ &\times& \sum_{\sigma=\pm1}\int\frac{EdEd\Lambda}{|v_{\parallel}|}\frac{\partial_EF_{0h}}{\sqrt{(2\hat{\omega}_d/\omega)(\hat{\omega}_d/\omega-1)}}.\end{aligned}$$ Taking a single-pitch angle slowing down EP distribution function [@ZQiuPPCF2010] as that for neutral beam injection, i.e., $F_{0h}=c_0\delta(\Lambda-\Lambda_0)H_E$, with $c_0=n_b\sqrt{2(1-\Lambda_0B_0)}/(4\pi B_0\ln(E_b/E_c))$, $n_b$ is the density of the EP beam, $E_b$ and $E_c$ being respectively the EP birth and critical energies, $\delta(x)$ is the Dirac delta function, and $H_E=1/(E^{3/2}+E^{3/2}_c)\Theta(1-E/E_b)$ with $\Theta(1-E/E_b)$ being the Heaviside step function. 
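Equation (\[eq:surface\_averaged\]) can be checked numerically. Its derivation expands $\omega_d$ around $\theta=\pm\pi/2$, so it is accurate when $\hat{\omega}_d/\omega$ sits just above unity, the regime of the lowest-energy resonant particles that dominate the velocity-space integral. The sketch below compares a direct quadrature of the $\theta$ integral against the closed form; the small positive imaginary part of $\omega$ stands in for the causal resonance prescription, and the particular numbers are illustrative only:

```python
import numpy as np
from scipy.integrate import quad

def theta_integral(omega, omega_d_hat):
    """Direct quadrature of the full-period integral
    ∫_0^{2π} ω_d/(ω − ω_d) dθ with ω_d = ω̂_d sinθ and complex ω."""
    f = lambda th: omega_d_hat * np.sin(th) / (omega - omega_d_hat * np.sin(th))
    re = quad(lambda th: f(th).real, 0.0, 2 * np.pi, limit=400)[0]
    im = quad(lambda th: f(th).imag, 0.0, 2 * np.pi, limit=400)[0]
    return re + 1j * im

def steepest_descent(omega, omega_d_hat):
    """Eq. (surface_averaged): −2πi/sqrt((2ω̂_d/ω)(ω̂_d/ω − 1)),
    with the adiabatic −2π retained for a like-for-like comparison."""
    s = omega_d_hat / omega
    return -2 * np.pi - 2j * np.pi / np.sqrt(2 * s * (s - 1) + 0j)

omega = 1.0 + 0.02j   # small Im(ω) keeps the quadrature regular
omega_d_hat = 1.05    # resonance just above threshold: ω̂_d/ω ≳ 1
```

For these parameters the two expressions agree to about one percent; the agreement degrades as $\hat{\omega}_d/\omega$ moves well away from unity, consistent with the local expansion used in the derivation.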
The integration in velocity space can then be carried out, and yields the short wavelength EGAM dispersion relation: $$\begin{aligned} \hat{b}_i\left(-1+\frac{\omega^2_G}{\omega^2}\right)+\Delta_f&+& i n_b\left[\frac{-2+3\Lambda_0B_0}{1-\Lambda_0B_0}\frac{\Omega_b}{\omega}\sqrt{\frac{\Omega_b}{\omega}-1}\right.\nonumber\\ &&\left.-\Lambda_0B_0\frac{(\omega/\Omega_b)^{1/2}}{\sqrt{\Omega_b/\omega-1}}\right]=0,\label{eq:DR_final}\end{aligned}$$ with $\Delta_f$ being the non-resonant EP contribution $$\begin{aligned} \Delta_f=n_b\left[\frac{-2+3\Lambda_0B_0}{1-\Lambda_0B_0}\frac{\Omega_b}{\omega}+\Lambda_0B_0\left(\frac{\omega}{\Omega_b}\right)^{1/2}\left(\frac{E_b}{E_c}\right)^{3/2}\right],\nonumber\end{aligned}$$ $\Omega_b\equiv\hat{\omega}_d(E=E_b)$, $\hat{b}_i=k^2_r\rho^2_{ti}/2$, and $\omega_G\simeq\sqrt{7/4+\tau}v_{ti}/R_0$ is the GAM frequency. ![Real frequency vs. $\Omega_b/\omega_G$[]{data-label="fig:RF"}](EGAM_short_RF.eps){width="9cm"} The first term in the square bracket of equation (\[eq:DR\_final\]) ($\propto\sqrt{\Omega_b/\omega-1}$) can be destabilizing, depending on the value of $\Lambda_0B_0$, while the second term ($\propto(\Omega_b/\omega-1)^{-1/2}$) is stabilizing. As a result, EGAM excitation in the large orbit limit requires, first, $$\begin{aligned} \Lambda_0B_0>2/3,\label{eq:drive}\end{aligned}$$ for the first term of the EP contribution in equation (\[eq:DR\_final\]) to be destabilizing; and second, $\Omega_b/\omega$ sufficiently large for the short wavelength EGAM to be unstable. ![Growth rate vs. $\Omega_b/\omega_G$[]{data-label="fig:GR"}](EGAM_short_GR.eps){width="9cm"} The dispersion relation is solved numerically as a function of $\Omega_b/\omega_G$.
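Before turning to the numerical solution, we note that the threshold (\[eq:drive\]) follows directly from the sign of the drive coefficient in equation (\[eq:DR\_final\]). Since $\Lambda_0B_0=(v^2_{\perp}/v^2)|_{B=B_0}<1$ for circulating particles, the denominator $1-\Lambda_0B_0$ is positive, and the sign of the coefficient is set by its numerator alone: $$\begin{aligned} \frac{-2+3\Lambda_0B_0}{1-\Lambda_0B_0}>0\quad\Longleftrightarrow\quad -2+3\Lambda_0B_0>0\quad\Longleftrightarrow\quad \Lambda_0B_0>2/3.\nonumber\end{aligned}$$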
Note that from our previous analysis, the drive due to wave-particle resonance exists only for $|\Omega_b/\omega|>1$; and that the destabilizing term increases with $|\Omega_b/\omega|$ while the stabilizing term decreases with $|\Omega_b/\omega|$, so the stabilizing term is neglected, since it gives a negligible contribution for $|\Omega_b/\omega|>1$ where wave-particle energy exchange exists. The other parameters are taken as follows: $n_b/\hat{b}_i=0.3$, $\Delta_f=0$, and the obtained short wavelength EGAM real frequency and growth rate dependences on $\Omega_b/\omega_G$ are shown in Figs. \[fig:RF\] and \[fig:GR\], respectively. It is shown that the EGAM real frequency decreases slightly with increasing $\Omega_b/\omega_G$, and the unstable EGAM frequency is always smaller than the local GAM frequency. On the other hand, as the EGAM is unstable for $\Omega_b/\omega_G>1$, the growth rate increases with $\Omega_b$. For $\Omega_b/\omega_G$ significantly larger than unity, the growth rate increases almost linearly with $\Omega_b/\omega_G$, and thus with $E_b$, as is clearly seen from the destabilizing term of equation (\[eq:DR\_final\]). This is due to the increasingly dense high order transit resonances associated with increasing $E_b$. By contrast, in the long wavelength limit, the growth rate is peaked when the EP parallel velocity at birth energy satisfies a certain harmonic resonance, similar to the case for GAM Landau damping discussed in Ref. [@HSugamaJPP2006]. In conclusion, the underlying physics picture of wave-particle resonances at short wavelength is clarified, taking short wavelength GAM collisionless damping as an example. Assuming a large aspect ratio tokamak and well circulating particles, the ion response to GAM is derived from the linear gyrokinetic equation by integration along the unperturbed guiding-center orbit.
The general solution is then obtained by expansion into transit harmonics, with the “population” of resonant particles in each transit harmonic proportional to $J^2_l(\hat{\Lambda}_{res})\partial_E F_0|_{v_{\parallel,res}}$. As a result, to obtain the GAM collisionless damping in the short wavelength limit, all the transit resonances must be kept. It is then shown that the result obtained in Ref. , based on the large orbit width expansion, is equivalent to the general solution up to $O(1/(k_r\rho_{ti}q^2))$, and that the underlying physics of wave-particle interactions at short wavelength indeed consists of the summation of all the transit resonances. As a further application, the EGAM excitation at short wavelengths is also investigated, and the analytical dispersion relation is derived assuming a single pitch angle slowing down EP distribution function. Our results indicate that the short wavelength EGAM dispersion relation depends algebraically on the EP characteristic frequency, instead of the logarithmic dependence characterizing the long wavelength limit, which is typical for a slowing down EP distribution function. The short wavelength EGAM is unstable for $\Omega_b>\omega_G$ and $\Lambda_0B_0>2/3$. For $\Omega_b$ significantly larger than the GAM frequency, the short wavelength EGAM growth rate is proportional to $\Omega_b$, and thus to the EP birth energy $E_b$, due to the increasingly dense high order transit resonances as $\Omega_b\gg\omega_G$. This work is supported by US DoE GRANT, the National Magnet Confinement Fusion Research Program under Grants Nos. 2013GB104004 and 2013GB111004, the National Science Foundation of China under grant Nos. 11575157 and 11235009, Fundamental Research Fund for Chinese Central Universities under Grant No. 2017FZA3004 and EUROfusion Consortium under grant agreement No. 633053.
{ "pile_set_name": "ArXiv" }
Q: How to improve SQL query performance in MySQL

I have a MySQL table which stores backup log entries from a large number of devices (currently about 750). I need to use a query to get details of the last entry for each device. I am currently using a nested query to achieve this, which was working fine initially. However, the table now has thousands of rows and the query is taking a long time to run. I would like to improve the performance of the query, and would like to know if this is possible through the use of joins instead of a nested select statement, or through some other improvement I could make. The current query is:

    SELECT id, pcname, pcuser,
           checkTime as lastCheckTime,
           TIMESTAMPDIFF(SECOND,checkTime,now()) as lastCheckAge,
           lastBackup, diff, threshold, backupStatus,
           TIMESTAMPDIFF(SECOND,lastBackup,now()) as ageNow
    FROM checkresult
    WHERE (checkTime, pcname) IN
          (SELECT max(checkTime), pcname
           FROM checkresult
           GROUP BY pcname)
    ORDER BY id desc;

id is the primary key for the table, and is the only indexed column. The table uses InnoDB.

A: Try using an explicit join instead:

    SELECT id, checkresult.pcname, pcuser,
           checkTime as lastCheckTime,
           TIMESTAMPDIFF(SECOND,checkTime,now()) as lastCheckAge,
           lastBackup, diff, threshold, backupStatus,
           TIMESTAMPDIFF(SECOND,lastBackup,now()) as ageNow
    FROM checkresult
    JOIN (SELECT pcname, max(checkTime) as maxct
          FROM checkresult
          GROUP BY pcname
         ) pm
      ON checkresult.pcname = pm.pcname AND checkresult.checkTime = pm.maxct
    ORDER BY id desc;
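A further, hedged suggestion that is not part of the answer above: since id is the only indexed column, both the GROUP BY subquery and the join predicate have to scan the full table. A composite index on the grouping/join columns (the index name below is only illustrative) should let MySQL satisfy the subquery from the index alone:

    -- hypothetical supporting index; (pcname, checkTime) matches both the
    -- GROUP BY pcname / max(checkTime) subquery and the join predicate
    ALTER TABLE checkresult ADD INDEX idx_pcname_checktime (pcname, checkTime);

Running EXPLAIN on the joined query before and after adding the index should show the difference (e.g. "Using index" for the derived table).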
{ "pile_set_name": "StackExchange" }
Q: Rotate Image and View Next Page

I’m using WinForms for my application. I’m building an image viewer. My application opens image document (.tif) files. The application has the ability to go to the next page. The issue is that every time I try to rotate the image and click next, the image stays on the same page but the page number increments. Why can’t I see the images when it’s on rotate? How can I rotate the image and go to the next page?

In the link below I've provided a test tif document for testing purposes: http://www.filedropper.com/sampletifdocument5pages

My Code:

FileStream _stream;
Image _myImg; // setting the selected tiff
string _fileName;
private int intCurrPage = 0; // defining the current page
private int intTotalPages = 0;

private void Open_Btn_Click(object sender, EventArgs e)
{
    if (openFileDialog1.ShowDialog() == DialogResult.OK)
    {
        lblFile.Text = openFileDialog1.FileName;
        // Before loading you should check the file type is an image
        if (_myImg == null)
        {
            _fileName = Path.Combine(Path.GetTempPath(), Guid.NewGuid().ToString("N"));
            File.Copy(@lblFile.Text, _fileName);
            _stream = new FileStream(_fileName, FileMode.Open, FileAccess.Read);
            pictureBox1.Image = Image.FromStream(_stream);
        }
        //pictureBox1.Image = Image.FromFile(openFileDialog1.FileName);
        pictureBox1.Size = new Size(750, 1100);
        // Reset the current page when loading a new image.
        intCurrPage = 1;
        intTotalPages = pictureBox1.Image.GetFrameCount(System.Drawing.Imaging.FrameDimension.Page);
        lblNumPages.Text = intTotalPages.ToString();
        lblCurrPage.Text = "1";
    }
}

private void NextPage_btn_Click(object sender, EventArgs e)
{
    if (intCurrPage <= (intTotalPages - 1))
    {
        if (Radio_90_Rotate.Checked)
        {
            pictureBox1.Image.RotateFlip(RotateFlipType.Rotate90FlipNone);
        }
        if (Radio_180_Rotate.Checked)
        {
            pictureBox1.Image.RotateFlip(RotateFlipType.Rotate180FlipNone);
        }
        // Directly increment the active frame within the image already in the PictureBox
        pictureBox1.Image.SelectActiveFrame(System.Drawing.Imaging.FrameDimension.Page, intCurrPage);
        // page increment (Go to next page)
        intCurrPage++;
        // Refresh the PictureBox so that it will show the currently active frame
        pictureBox1.Refresh();
        lblCurrPage.Text = intCurrPage.ToString();
    }
}

A: The RotateFlip function will change the source image and flatten it to only one page. This means we need to make copies each time you view a new page that has rotation applied. In this solution, I use the source image and simply change pages when no rotation is applied. But when rotation is set, an Image copy is made for each page and the rotation is applied to the copy only. Using your sample image it takes time to load each page, so I implemented a simple label message to let the user know it's working.
Also, you may consider looking into classes prebuilt for tiff files like: https://bitmiracle.github.io/libtiff.net/

private Image _Source = null;
private int _TotalPages = 0;
private int _CurrentPage = 0;

private void Frm_TiffViewer_Load(object sender, EventArgs e)
{
    lbl_WaitMessage.Visible = false;
    // These two options can be adjusted as needed and probably should be set in the form control properties directly:
    pictureBox1.Size = new Size(750, 1100);
    pictureBox1.SizeMode = PictureBoxSizeMode.StretchImage;
}

private void ShowProcessingImageLabel()
{
    lbl_WaitMessage.Visible = true;
    Application.DoEvents();
}

private void DisplayPage(int PageNumber, RotateFlipType Change)
{
    if (pictureBox1.Image != null && pictureBox1.Image != _Source)
    {
        // Release memory for old rotated image
        pictureBox1.Image.Dispose();
    }
    // set the variable to null for easy GC cleanup
    pictureBox1.Image = null;
    _Source.SelectActiveFrame(System.Drawing.Imaging.FrameDimension.Page, PageNumber - 1);
    pictureBox1.Image = new Bitmap(_Source);
    pictureBox1.Image.RotateFlip(Change);
    pictureBox1.Refresh();
}

private void DisplayPage(int PageNumber)
{
    ShowProcessingImageLabel();
    this.lblCurrPage.Text = PageNumber.ToString();
    // You could adjust the PictureBox size here for each frame OR adjust the image to fit the picturebox nicely.
    if (Radio_90_Rotate.Checked == true)
    {
        DisplayPage(PageNumber, RotateFlipType.Rotate90FlipNone);
        lbl_WaitMessage.Visible = false;
        return;
    }
    else if (Radio_180_Rotate.Checked == true)
    {
        DisplayPage(PageNumber, RotateFlipType.Rotate180FlipNone);
        lbl_WaitMessage.Visible = false;
        return;
    }
    if (pictureBox1.Image != _Source)
    {
        if (pictureBox1.Image != null)
        {
            // Release memory for old copy and set the variable to null for easy GC cleanup
            pictureBox1.Image.Dispose();
            pictureBox1.Image = null;
        }
        pictureBox1.Image = _Source;
    }
    pictureBox1.Image.SelectActiveFrame(System.Drawing.Imaging.FrameDimension.Page, PageNumber - 1);
    pictureBox1.Refresh();
    lbl_WaitMessage.Visible = false;
}

private void Open_Btn_Click(object sender, EventArgs e)
{
    if (openFileDialog1.ShowDialog() == DialogResult.OK)
    {
        // Before loading you should check the file type is an image
        this._Source = Image.FromFile(openFileDialog1.FileName);
        _TotalPages = _Source.GetFrameCount(System.Drawing.Imaging.FrameDimension.Page);
        _CurrentPage = 1;
        lblCurrPage.Text = "1";
        lblFile.Text = openFileDialog1.FileName;
        this.lblNumPages.Text = _TotalPages.ToString();
        DisplayPage(_CurrentPage);
    }
}

private void NextPage_btn_Click(object sender, EventArgs e)
{
    if (_CurrentPage < _TotalPages)
    {
        _CurrentPage++;
    }
    DisplayPage(_CurrentPage);
}

private void b_Previous_Click(object sender, EventArgs e)
{
    if (_CurrentPage > 1)
    {
        _CurrentPage--;
    }
    DisplayPage(_CurrentPage);
}

private void Radio_90_Rotate_CheckedChanged(object sender, EventArgs e)
{
    DisplayPage(_CurrentPage);
}

private void Radio_180_Rotate_CheckedChanged(object sender, EventArgs e)
{
    DisplayPage(_CurrentPage);
}

private void Radio_0_Default_CheckedChanged(object sender, EventArgs e)
{
    DisplayPage(_CurrentPage);
}
{ "pile_set_name": "StackExchange" }
Q: Python SQLite3: why does indenting the conn.commit() statement have this impact?

I create a simple SQLite table as follows:

import sqlite3

sqlite_file = '/Users/Dom/Desktop/Test.sqlite'
conn = sqlite3.connect(sqlite_file)
c = conn.cursor()
c.execute('''CREATE TABLE Results(Col1 text, Col2 text)''')
c.execute('''CREATE TABLE ListIDTable(Int numeric, ID text)''')
values_to_insert = [(1,"1a"), (2,"1b"), (3,"2a"), (4,"2b"),
                    (5,"3a"), (6,"3b"), (7,"4a"), (8,"4b"),
                    (9,"5a"), (10,"5b"), (11,"6a"), (12,"6b"),
                    (13,"7a"), (14,"7b")]
c.executemany("INSERT INTO ListIdTable (Int, ID) values (?,?)", values_to_insert)
conn.commit()
conn.close()

Everything looks good. I then loop through the table created above as follows:

import sqlite3

sqlite_file = '/Users/Dom/Desktop/Test.sqlite'
conn = sqlite3.connect(sqlite_file)
conn.row_factory = sqlite3.Row
c = conn.cursor()
c2 = conn.cursor()
c.execute('select * from ListIDTable ORDER BY Int ASC')
for r in c:
    TblInt = r['Int']
    print (TblInt)
    c2.execute("INSERT INTO Results (Col1 , Col2) values (?,?)", ("XXX", "YYY"))
    conn.commit()

I expect an output of: "1, 2, 3, 4, 5, 6..." etc. However, I get: "1, 2, 1, 2, 3, 4, 5, 6..." etc. When I remove the indent of the final conn.commit() statement, I get the expected output. Can someone help me understand why just the "1 & 2" are repeated, but everything then proceeds as normal? Thanks

A: The following page identifies and explains the issue: Using multiple cursors in a nested loop in sqlite3 from python-2.7

To fix the above question, I essentially pulled the entire lookup table and eliminated the need for multiple cursors:

import sqlite3

sqlite_file = '/Users/Dom/Desktop/Test.sqlite'
conn = sqlite3.connect(sqlite_file)
conn.row_factory = sqlite3.Row
c = conn.cursor()
c.execute('select * from ListIDTable ORDER BY Int ASC')
rows = c.fetchall()
for r in rows:
    TblInt = r['Int']
    print (TblInt)
    c.execute("INSERT INTO Results (Col1 , Col2) values (?,?)", ("XXX", "YYY"))
conn.commit()
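For completeness, a runnable sketch of the same pattern (it uses an in-memory database and invented rows, so the file path from the question is not needed). Once the result set has been materialized with fetchall(), even committing inside the loop no longer disturbs the iteration:

```python
import sqlite3

# Self-contained illustration: table/column names follow the question,
# but the data here is made up so the snippet runs anywhere.
conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row
c = conn.cursor()
c.execute("CREATE TABLE Results(Col1 text, Col2 text)")
c.execute("CREATE TABLE ListIDTable(Int numeric, ID text)")
c.executemany("INSERT INTO ListIDTable (Int, ID) values (?,?)",
              [(i, "x%d" % i) for i in range(1, 7)])

# Materialize the lookup rows first, so the later INSERT + commit cannot
# disturb a still-open SELECT on the same connection.
c.execute("select * from ListIDTable ORDER BY Int ASC")
rows = c.fetchall()

seen = []
for r in rows:
    seen.append(r["Int"])
    c.execute("INSERT INTO Results (Col1 , Col2) values (?,?)", ("XXX", "YYY"))
    conn.commit()  # harmless here: the SELECT has already been consumed

print(seen)  # -> [1, 2, 3, 4, 5, 6], with no repeated values
```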
{ "pile_set_name": "StackExchange" }
Öz Ürügülü is a very little-known sextet from Switzerland. Indeed, at this moment, they have slightly more than 420 likes on facebook. Fortunately, this is by no means a measure of a band’s talent. I came across Öz Ürügülü about a year ago, and was very impressed by their debut, Forgotten Archives. Earlier this week, I got an automated email from bandcamp telling me that they’ve just released a new album: Fashion and Welfare.
{ "pile_set_name": "Pile-CC" }
[Analyses on the characteristics and the trends of pneumoconiosis notified between 1997 and 2009, in China]. To describe the incidence of pneumoconiosis reported in China from 1997 to 2009 and investigate the epidemiological trends and characteristics of pneumoconiosis, and to provide basic data for formulating the guidelines and policies for control of pneumoconiosis, research on pneumoconiosis, and establishing the time series model for monitoring and early warning of pneumoconiosis. The national database of new cases of pneumoconiosis reported from 1997 to 2009 was subjected to systematic arrangement, descriptive analysis, and trend test using SPSS 15.0. The statistical indices included number of new pneumoconiosis cases in each year, types of pneumoconiosis, regional and industrial distributions of pneumoconiosis cases, work types of pneumoconiosis cases, and the annual changes in mean length of service and mean age at the onset of pneumoconiosis. From 1997 to 2009, a total of 122 333 new cases of pneumoconiosis were reported; the number of new cases increased since 1998, but fell to 7620 in 2003, and then it increased again to a maximum of 12 492 in 2009. Of all patients, 87.5% were cases of coal-workers' pneumoconiosis and silicosis; 54 068 (44.2%) were coal-workers' pneumoconiosis cases, and 52 930 (43.3%) were silicosis cases. The pneumoconiosis cases were distributed mainly in Hunan Province (12 995 cases, 10.6%), Shandong Province (8952 cases, 7.3%), and Sichuan Province (8417 cases, 6.9%). Most cases were distributed in coal industry (61270 cases, 50.1%), architectural, material industry (9754 cases, 8.0%), nonferrous metals industry (9380 cases, 7.7%), and metallurgical industry (8773 cases, 7.2%). The work types of these cases mainly included tunneling as the main work (15 659 cases, 12.8%), mining as the main work (15 009 cases, 12.3%), drilling (14 010 cases, 11.5%), tunneling (12 122 cases, 9.9%), and hybrid coalmine work (10 612 cases, 8.7%). 
The mean length of service at the onset of pneumoconiosis in new cases of pneumoconiosis was shortened from 1997 to 2009, with a median length of service of 20.00 years; the median lengths of service at the onsets of coal-workers' pneumoconiosis, silicosis, and asbestosis were 21.58, 17.00, and 20.00 years, respectively. The median age at the onset of pneumoconiosis was 51.00 years, and the mean age of onset in new cases of pneumoconiosis increased over the 13 years. The incidence of pneumoconiosis is still high, with a marked concentrated trend in several industries, work types, and pneumoconiosis types, a marked rising trend in number of new cases, and a marked shortening trend in length of service at the onset of pneumoconiosis. The prevention and control of pneumoconiosis should be enhanced in key industries and for people engaging in key types of work according to the epidemiological characteristics of pneumoconiosis. In addition, the demonstration project of comprehensive prevention and control of occupational dust hazards should be carried out, and the monitoring and early warning system for pneumoconiosis should be established.
{ "pile_set_name": "PubMed Abstracts" }
Q: Using a validation function on different HTML fields

I have the following JavaScript function for validating the NIF (the Portuguese tax identification number):

// NIF VALIDATION
validaContribuinte = function(){
    var contribuinte = $('#nif').val();
    var temErro=0;
    if ( contribuinte.substr(0,1) != '1' && // individual
         contribuinte.substr(0,1) != '2' && // individual
         contribuinte.substr(0,1) != '3' && // individual
         contribuinte.substr(0,2) != '45' && // non-resident individual
         contribuinte.substr(0,1) != '5' && // legal person
         contribuinte.substr(0,1) != '6' && // public administration
         contribuinte.substr(0,2) != '70' && // undivided inheritance
         contribuinte.substr(0,2) != '71' && // non-resident legal person
         contribuinte.substr(0,2) != '72' && // investment funds
         contribuinte.substr(0,2) != '77' && // ex officio assignment
         contribuinte.substr(0,2) != '79' && // exceptional regime
         contribuinte.substr(0,1) != '8' && // sole trader (discontinued)
         contribuinte.substr(0,2) != '90' && // condominiums and irregular partnerships
         contribuinte.substr(0,2) != '91' && // condominiums and irregular partnerships
         contribuinte.substr(0,2) != '98' && // non-residents
         contribuinte.substr(0,2) != '99' // civil partnerships
       ) { temErro=1; }
    var check1 = contribuinte.substr(0,1)*9;
    var check2 = contribuinte.substr(1,1)*8;
    var check3 = contribuinte.substr(2,1)*7;
    var check4 = contribuinte.substr(3,1)*6;
    var check5 = contribuinte.substr(4,1)*5;
    var check6 = contribuinte.substr(5,1)*4;
    var check7 = contribuinte.substr(6,1)*3;
    var check8 = contribuinte.substr(7,1)*2;
    var total = check1 + check2 + check3 + check4 + check5 + check6 + check7 + check8;
    var divisao = total / 11;
    var modulo11 = total - parseInt(divisao)*11;
    if ( modulo11==1 || modulo11==0 ){ comparador=0; } // exception
    else { comparador = 11-modulo11; }
    var ultimoDigito = contribuinte.substr(8,1)*1;
    if ( ultimoDigito != comparador ){ temErro=1; }
    if (temErro==1){
        alert('Invalid NIF');
        $('#nif').val("");
    }
}

How can I adapt this function to perform this validation on different HTML fields? In this case I am passing the id of the nif field: var contribuinte = $('#nif').val();. If I want to do the validation for two different inputs, how can I do it?

A: You need to generalize your function so you can call it for each field. Instead of looking for "$('#nif')", look for a generic field - which can be "nif1" or "nif2", or whatever its ID is, specified by a parameter of the validation function:

$('#nif1').change(function(){ validaContribuinte('nif1'); });
$('#nif2').change(function(){ validaContribuinte('nif2'); });

// NIF VALIDATION
validaContribuinte = function(inputID){
    var contribuinte = $('#'+inputID).val();
    var temErro=0;
    if ( contribuinte.substr(0,1) != '1' && // individual
         contribuinte.substr(0,1) != '2' && // individual
         contribuinte.substr(0,1) != '3' && // individual
         contribuinte.substr(0,2) != '45' && // non-resident individual
         contribuinte.substr(0,1) != '5' && // legal person
         contribuinte.substr(0,1) != '6' && // public administration
         contribuinte.substr(0,2) != '70' && // undivided inheritance
         contribuinte.substr(0,2) != '71' && // non-resident legal person
         contribuinte.substr(0,2) != '72' && // investment funds
         contribuinte.substr(0,2) != '77' && // ex officio assignment
         contribuinte.substr(0,2) != '79' && // exceptional regime
         contribuinte.substr(0,1) != '8' && // sole trader (discontinued)
         contribuinte.substr(0,2) != '90' && // condominiums and irregular partnerships
         contribuinte.substr(0,2) != '91' && // condominiums and irregular partnerships
         contribuinte.substr(0,2) != '98' && // non-residents
         contribuinte.substr(0,2) != '99' // civil partnerships
       ) { temErro=1; }
    var check1 = contribuinte.substr(0,1)*9;
    var check2 = contribuinte.substr(1,1)*8;
    var check3 = contribuinte.substr(2,1)*7;
    var check4 = contribuinte.substr(3,1)*6;
    var check5 = contribuinte.substr(4,1)*5;
    var check6 = contribuinte.substr(5,1)*4;
    var check7 = contribuinte.substr(6,1)*3;
    var check8 = contribuinte.substr(7,1)*2;
    var total = check1 + check2 + check3 + check4 + check5 + check6 + check7 + check8;
    var divisao = total / 11;
    var modulo11 = total - parseInt(divisao)*11;
    if ( modulo11==1 || modulo11==0 ){ comparador=0; } // exception
    else { comparador = 11-modulo11; }
    var ultimoDigito = contribuinte.substr(8,1)*1;
    if ( ultimoDigito != comparador ){ temErro=1; }
    if (temErro==1){
        alert('Invalid NIF');
        $('#'+inputID).val("");
    }
}

<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
<p>
  <label for="nif1">NIF #1:</label>
  <input type="text" id="nif1" />
</p>
<p>
  <label for="nif2">NIF #2:</label>
  <input type="text" id="nif2" />
</p>
{ "pile_set_name": "StackExchange" }
Hand osteoarthritis and bone mineral density in postmenopausal women; clinical relevance to hand function, pain and disability. The aim of the present study was to assess phalangeal bone mineral density (BMD) in postmenopausal females with hand osteoarthritis (OA) and to correlate the measured levels with the radiographic OA grade, pain, function and disability of the hand. The study group comprised 40 postmenopausal women with hand OA (age range: 45-83 years). Socio-demographic data were collected. They underwent a comprehensive clinical examination of joint status and health outcome measures, including the Australian Canadian (AUSCAN) OA hand index. Hand radiographs were quantified and graded according to the Kellgren and Lawrence (K-L) scoring system. Bone mineral content (BMC) and BMD of the third finger were measured using the accuDEXA (Schick, New York, NY). Twenty females matched for age and years of menopause were studied as a control group. Phalangeal BMC and BMD were significantly reduced in women with hand OA compared to controls and related to radiological erosive OA. The AUSCAN pain and function subscales were worse in proportion to the severity of hand OA. The OA X-ray score was significantly associated with reduced right grip strength and with the pain and function scales, while decreased BMD was related to the Ritchie index and pain scale. Postmenopausal women with clinical and radiological erosive hand OA are at risk of developing hand osteoporosis (OP). Phalangeal bone densitometry is an objective, reproducible investigation. Poor physical function due to increased pain associated with increasing severity of radiographic hand OA leads to worse BMD results.
{ "pile_set_name": "PubMed Abstracts" }
Job Information If you are looking for an opportunity to enrich the lives of others and you share our passion for making a difference in people’s lives, come join our team! Our residents are the reason we choose to deliver high quality care and services in a home-like setting. We offer competitive wages, benefits, training, and the opportunity for growth. We welcome you to apply & join our family today! Responsibilities Have a passion for helping people? Whether you are starting your career or a seasoned CNA or you simply have the heart for helping people, then Brookdale is for you. Our Certified Nursing Assistant-CAN’s provides direct care to senior residents, assist in maintaining a safe environment, and provides a high level of care on a daily basis. Become part of our family, grow your skills and career, and have the satisfaction of helping make seniors’ lives brighter every day. Qualifications What it takes to be a Certified Nursing Assistant – CNA at Brookdale: Our CNAs work with community management to provide seniors with personalized care, and give resident status updates at the beginning and end of each shift. Nursing assistants check in with residents, assist with dining and personal care needs, and perform vital sign checks and clinical procedures according to community policy.
{ "pile_set_name": "Pile-CC" }
Youth Art Month - Check out the Indiana Youth Art Month movement and see what Hoosier artists from across the state are doing! Word of the month: Chivalry: The qualities idealized by knighthood, such as bravery, courtesy, honor and gallantry toward women. When Bill opened the door for his neighbor, she told him he was a chivalrous young man.
{ "pile_set_name": "Pile-CC" }
Q: Intuition into why the wave equation needs the second initial condition (e.g. velocity) Given the wave equation $$u_{xx}(x,t)=\frac {1}{c^2}u_{tt}(x,t) $$ with initial conditions: IC1: $$u(x,0)= f(x)$$ IC2: $$u_{t}(x,0)= g(x)$$ Why isn't $g(x)$ always equal to $f_t(x)$? For example, if $t=0$ is the time that a snapshot is taken of a freely traveling wave it seems to me that it must be true that $g(x)=f_t(x)$ Then IC1 would be the only initial condition needed since IC2 could be derived from IC1. My question: Then why isn't only one initial condition needed? Maybe if the wave was not freely traveling $g(x)$ could be forced to be something else--but that's not obvious to me. Physical examples would be great. ( I know that mathematically since the equation is second order it needs two initial conditions but I don't understand it intuitively or physically.) A: Note that f(x) and g(x) are functions of x alone [not "x and t"]. f(x) is what the string looks like on a photo [taken at t=0]... and you don't have access to other snapshots. That is, "f" doesn't have information on how the string is moving. g(x) is what the velocity profile would look like at t=0. Analogously, for a particle, you need to know "where it is" at t=0 and "what its velocity is" at t=0 to predict its future.
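A compact way to see the independence of the two conditions is d'Alembert's general solution (a standard result, added here for illustration): $$u(x,t)=F(x-ct)+G(x+ct)$$ so that the initial conditions become $$u(x,0)=F(x)+G(x)=f(x),\qquad u_{t}(x,0)=c\left[G'(x)-F'(x)\right]=g(x).$$ The snapshot $f$ fixes only the sum $F+G$, while $g$ independently fixes $G'-F'$. Concretely, the right-moving wave $u=F(x-ct)$ and the standing superposition $u=\frac{1}{2}\left[F(x-ct)+F(x+ct)\right]$ have the same snapshot $f(x)=F(x)$ at $t=0$, but initial velocities $-cF'(x)$ and $0$ respectively, and they evolve differently. A photo alone therefore cannot determine the future: the "freely traveling wave" intuition silently assumes $G=0$, which is exactly the extra information IC2 supplies.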
{ "pile_set_name": "StackExchange" }
/// @ref gtx_gradient_paint /// @file glm/gtx/gradient_paint.inl namespace glm { template<typename T, qualifier Q> GLM_FUNC_QUALIFIER T radialGradient ( vec<2, T, Q> const& Center, T const& Radius, vec<2, T, Q> const& Focal, vec<2, T, Q> const& Position ) { vec<2, T, Q> F = Focal - Center; vec<2, T, Q> D = Position - Focal; T Radius2 = pow2(Radius); T Fx2 = pow2(F.x); T Fy2 = pow2(F.y); T Numerator = (D.x * F.x + D.y * F.y) + sqrt(Radius2 * (pow2(D.x) + pow2(D.y)) - pow2(D.x * F.y - D.y * F.x)); T Denominator = Radius2 - (Fx2 + Fy2); return Numerator / Denominator; } template<typename T, qualifier Q> GLM_FUNC_QUALIFIER T linearGradient ( vec<2, T, Q> const& Point0, vec<2, T, Q> const& Point1, vec<2, T, Q> const& Position ) { vec<2, T, Q> Dist = Point1 - Point0; return (Dist.x * (Position.x - Point0.x) + Dist.y * (Position.y - Point0.y)) / glm::dot(Dist, Dist); } }//namespace glm
{ "pile_set_name": "Github" }
Q: Jenkins GitHub plugin : failed to validate the account I'm trying to configure the GitHub server in Jenkins Configure settings tab to set up webhooks. I chose my credentials from the drop-down menu (secret text using a GitHub Personal Access token) but when I click on Test connection I always get "Failed to validate the account". Screenshot of github server options in Jenkins While checking the Jenkins logs, I see the following : Sep 20, 2017 5:06:23 PM WARNING org.jenkinsci.plugins.github.internal.GitHubLoginFunction applyNullSafe Failed to login with creds "..." java.net.UnknownHostException: api.github.com: Temporary failure in name resolution at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method) at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928) at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323) at java.net.InetAddress.getAllByName0(InetAddress.java:1276) at java.net.InetAddress.getAllByName(InetAddress.java:1192) at java.net.InetAddress.getAllByName(InetAddress.java:1126) at com.squareup.okhttp.Dns$1.lookup(Dns.java:39) at com.squareup.okhttp.internal.http.RouteSelector.resetNextInetSocketAddress(RouteSelector.java:175) at com.squareup.okhttp.internal.http.RouteSelector.nextProxy(RouteSelector.java:141) at com.squareup.okhttp.internal.http.RouteSelector.next(RouteSelector.java:83) at com.squareup.okhttp.internal.http.StreamAllocation.findConnection(StreamAllocation.java:174) at com.squareup.okhttp.internal.http.StreamAllocation.findHealthyConnection(StreamAllocation.java:126) at com.squareup.okhttp.internal.http.StreamAllocation.newStream(StreamAllocation.java:95) at com.squareup.okhttp.internal.http.HttpEngine.connect(HttpEngine.java:281) at com.squareup.okhttp.internal.http.HttpEngine.sendRequest(HttpEngine.java:224) at com.squareup.okhttp.internal.huc.HttpURLConnectionImpl.execute(HttpURLConnectionImpl.java:450) at 
com.squareup.okhttp.internal.huc.HttpURLConnectionImpl.getResponse(HttpURLConnectionImpl.java:399) at com.squareup.okhttp.internal.huc.HttpURLConnectionImpl.getResponseCode(HttpURLConnectionImpl.java:527) at com.squareup.okhttp.internal.huc.DelegatingHttpsURLConnection.getResponseCode(DelegatingHttpsURLConnection.java:105) at com.squareup.okhttp.internal.huc.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:25) at org.kohsuke.github.Requester.parse(Requester.java:592) Caused: org.kohsuke.github.HttpException: Server returned HTTP response code: -1, message: 'null' for URL: https://api.github.com/user at org.kohsuke.github.Requester.parse(Requester.java:622) at org.kohsuke.github.Requester.parse(Requester.java:584) at org.kohsuke.github.Requester._to(Requester.java:264) at org.kohsuke.github.Requester.to(Requester.java:226) at org.kohsuke.github.GitHub.getMyself(GitHub.java:361) at org.kohsuke.github.GitHub.<init>(GitHub.java:153) at org.kohsuke.github.GitHubBuilder.build(GitHubBuilder.java:201) at org.jenkinsci.plugins.github.internal.GitHubLoginFunction.applyNullSafe(GitHubLoginFunction.java:73) at org.jenkinsci.plugins.github.internal.GitHubLoginFunction.applyNullSafe(GitHubLoginFunction.java:46) at org.jenkinsci.plugins.github.util.misc.NullSafeFunction.apply(NullSafeFunction.java:18) at org.jenkinsci.plugins.github.config.GitHubServerConfig$DescriptorImpl.doVerifyCredentials(GitHubServerConfig.java:372) at java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:627) at org.kohsuke.stapler.Function$MethodFunction.invoke(Function.java:343) at org.kohsuke.stapler.Function.bindAndInvoke(Function.java:184) at org.kohsuke.stapler.Function.bindAndInvokeAndServeResponse(Function.java:117) at org.kohsuke.stapler.MetaClass$1.doDispatch(MetaClass.java:129) at org.kohsuke.stapler.NameBasedDispatcher.dispatch(NameBasedDispatcher.java:58) at org.kohsuke.stapler.Stapler.tryInvoke(Stapler.java:715) at 
org.kohsuke.stapler.Stapler.invoke(Stapler.java:845) at org.kohsuke.stapler.MetaClass$5.doDispatch(MetaClass.java:248) at org.kohsuke.stapler.NameBasedDispatcher.dispatch(NameBasedDispatcher.java:58) at org.kohsuke.stapler.Stapler.tryInvoke(Stapler.java:715) at org.kohsuke.stapler.Stapler.invoke(Stapler.java:845) at org.kohsuke.stapler.Stapler.invoke(Stapler.java:649) at org.kohsuke.stapler.Stapler.service(Stapler.java:238) at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:812) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1669) at hudson.util.PluginServletFilter$1.doFilter(PluginServletFilter.java:135) at hudson.util.PluginServletFilter.doFilter(PluginServletFilter.java:138) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) at hudson.security.csrf.CrumbFilter.doFilter(CrumbFilter.java:80) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:84) at hudson.security.UnwrapSecurityExceptionFilter.doFilter(UnwrapSecurityExceptionFilter.java:51) at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:87) at jenkins.security.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:117) at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:87) at org.acegisecurity.providers.anonymous.AnonymousProcessingFilter.doFilter(AnonymousProcessingFilter.java:125) at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:87) at org.acegisecurity.ui.rememberme.RememberMeProcessingFilter.doFilter(RememberMeProcessingFilter.java:142) at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:87) at org.acegisecurity.ui.AbstractProcessingFilter.doFilter(AbstractProcessingFilter.java:271) at 
hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:87) at jenkins.security.BasicHeaderProcessor.doFilter(BasicHeaderProcessor.java:92) at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:87) at org.acegisecurity.context.HttpSessionContextIntegrationFilter.doFilter(HttpSessionContextIntegrationFilter.java:249) at hudson.security.HttpSessionContextIntegrationFilter2.doFilter(HttpSessionContextIntegrationFilter2.java:67) at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:87) at hudson.security.ChainedServletFilter.doFilter(ChainedServletFilter.java:90) at hudson.security.HudsonFilter.doFilter(HudsonFilter.java:171) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) at org.kohsuke.stapler.compression.CompressionFilter.doFilter(CompressionFilter.java:49) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) at hudson.util.CharacterEncodingFilter.doFilter(CharacterEncodingFilter.java:82) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) at org.kohsuke.stapler.DiagnosticThreadNameFilter.doFilter(DiagnosticThreadNameFilter.java:30) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:553) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061) at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97) at org.eclipse.jetty.server.Server.handle(Server.java:499) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257) at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544) at winstone.BoundedExecutorService$1.run(BoundedExecutorService.java:77) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Trying to ping api.github.com through the command line works without any problem. I can't understand what's causing this issue. A: Did you try to create the credentials under the global Jenkins configuration? And are the credentials meant for the GitHub request builder plugin?
{ "pile_set_name": "StackExchange" }
WCC workshop explores 'transformative masculinities' WCC workshop explores 'transformative masculinities' By agency reporter 5 Dec 2011 Violence against women is endemic both in conflict zones and so called peaceful situations. According to a study conducted by the University of Lausanne, in Switzerland, more than 26 per cent of women there experience physical violence throughout their adult life. In a country of a similar size but with fewer economic opportunities such as the Dominican Republic, every 36 hours a woman is killed by her partner or husband. This culture of violence was strongly challenged in a World Council of Churches (WCC) workshop on “transformative masculinities” focusing on a Caribbean situation. The workshop took place last month in Havana, Cuba, addressing the theme “From Hegemony to Partnership”, and was organised by the WCC programme of Women in Church and Society, in collaboration with the Christian Institute of Gender Studies (ICG) of Cuba and the Caribbean and North America Council for Mission (CANACOM). Around eighteen church participants from the Dominican Republic, Guyana, Trinidad and Tobago, Puerto Rico, Jamaica, Cuba and Curacao attended the workshop. Together they made a concrete analysis on how churches can be engaged in awareness raising about gender issues, and counter stereotyping of women and men. The Rev Dr Ofelia Ortega Suárez, WCC president for the Caribbean and Latin America, shared the importance of including gender perspectives in the educational system for pastors. She stressed that silence of the churches reinforces situations of violence against women. For Dr Fulata Lusungu Moyo, WCC programme executive for Women in Church and Society, this workshop provided a unique space for churches in the Caribbean region to identify existing resources that lead to deconstructing hegemonic masculinities of violence. 
“It is crucial to reconstruct transformative masculinities, which lead to mutual partnership for an environment of justice and peace. Therefore, the experience in the Caribbean will be contextualised by other regions, since every society has its own ways of how feminine and masculine identities are constructed and understood,” said Moyo. To unravel various forms of masculinity from a faith perspective, the gender training manual Created in God’s Image: From Hegemony to Partnership was introduced. The manual was jointly published by the WCC and the World Communion of Reformed Churches in 2010, and is being translated into French and Spanish. The manual was aimed at developing an understanding that gender diversity does not necessarily lead to a relationship of domination. It encourages the hegemonic gender paradigms in Christian history to be questioned. It also serves as a training tool to build communities of women and men, ensuring gender justice. The key facilitators for the workshop were Dr Isabel Moya Richards and Julio César González Pagés, who are scholars of communications and anthropology. According to them even after a significant historical change, such as the Cuban revolution, the society continues to reproduce conservative expressions of masculinity and femininity. Such expressions, they said, are tangible in the use of sexist language and the objectification of women’s bodies by the commercial media, which emphasizes “macho” attitudes as the only possible expression of masculinity. Changing the understanding of what it takes to be a man and a woman, acknowledging that both are made in the image of God, can be challenging. This is especially important in the face of contradictory messages of violence that we receive from media and dominant social behaviours. There are significant efforts to be taken into consideration, such as the work of the Evangelical Seminary in Matanzas, Cuba, where gender is a transversal subject. 
This information was presented by Dr Reinerio Arce Valentín, the director of the seminary. The workshop received consistent support from the Stichting Rotterdam Fund, along with the strong involvement of Moraima González, coordinator of ICG.
{ "pile_set_name": "Pile-CC" }
Use of directed history and behavioral indicators in the assessment of the child with a developmental disability. It can be very difficult to get a complete history and review of systems for children with developmental disabilities and poor communication skills. In addition, many children with developmental disabilities may engage in self-injurious or aggressive behavior. Although the causes of inappropriate behavior are frequently environmental, physiologic components may exist as well, particularly pain or discomfort. History taking must be focused and specific and may need to focus on the child's behavioral patterns, because the child may not have sufficient communication skills to describe his or her problem and parents or guardians may not realize the relevance of certain behaviors. Gastrointestinal problems in particular may be a source of discomfort and should be reviewed with particular care. Referral to a psychologist who is able to perform a functional analysis of behavior may be necessary to treat problem behavior, especially if medical causes have been ruled out.
{ "pile_set_name": "PubMed Abstracts" }
Friday 21 Special Event, Educator Event(Education) Friday December 21, 2018 10:00 AM Inviting all Educators to come treat themselves to 25% off almost everything in store, and 10% off Cafe treats during this holiday shopping event. While in store you can receive a free tall, hot or iced coffee with your educator card. Contact your local store for details. Saturday 29 Special Event, Storytime, Children's Event(Childrens) Saturday December 29, 2018 11:00 AM From the author of The Day the Crayons Quit comes a laugh-out-loud hilarious picture book about the epic tale of the classic game Rock, Paper, Scissors. Join us for Storytime and get a coupon from our Café for a grilled cheese sandwich with milk or juice for $4! Saturday 05 Special Event, Educator Event(Education) Saturday January 05, 2019 10:00 AM Pre-K through grade 12 educators, join us in store every Saturday and Sunday in January and enjoy 25% off most books, toys, games, movies, music, and more. Plus, shop online at bn.com on Saturday, January 26th and Sunday, January 27th and save 25% on most orders. Special Event, Children's Event, Storytime(Childrens) Saturday January 05, 2019 11:00 AM Clifford is an adorable dog whose well-meaning bumblings have great kid appeal especially for his owner, Emily Elizabeth. Join us as we read about everyone's favorite big red dog. Plus, get a coupon from our Café for a grilled cheese sandwich with milk or juice for $4.
{ "pile_set_name": "Pile-CC" }
Q: AMD HP Laptop - Screen Tearing I recently defenestrated this laptop and I'm having some issues with the graphics. I'm seeing some pretty nasty screen tearing on this thing. I didn't notice any while the machine had Windows 8 (a brief amount of time, admittedly), which leads me to suspect some kind of driver issue. Googling led me to some tearing issues from ATI drivers and the new version of X, but since most posts are a bit dated I don't know if that is what is happening to me. The machine is a g6-2211nr model with an AMD A4-4300 APU and it's running 12.10. The computer details dialog is showing "Driver: Unknown". Any ideas? EDIT: I installed the proprietary drivers. Upon reboot I reached the login screen and it looks flawless. However, after accessing my account, I can only see the desktop background and the mouse pointer. Nothing else. I successfully reset everything. I'm back in but the screen tearing is back. I checked to see if the tearing appears during the login screen with this "working" configuration that uses the open-source drivers and no, there is no tearing during the login screen no matter how hard I try to bring it about. Here is the output of lshw -c video right now:

*-display
     description: VGA compatible controller
     product: Advanced Micro Devices [AMD] nee ATI
     vendor: Advanced Micro Devices [AMD] nee ATI
     physical id: 1
     bus info: pci@0000:00:01.0
     version: 00
     width: 32 bits
     clock: 33MHz
     capabilities: pm pciexpress msi vga_controller bus_master cap_list rom
     configuration: driver=radeon latency=0
     resources: irq:48 memory:e0000000-efffffff ioport:3000(size=256) memory:f0300000-f033ffff

Aside from a fix, isn't there a way to just turn off hardware acceleration altogether? Just CPU render the whole thing? A: Have you installed the proprietary AMD graphics drivers? If you haven't, you should do that. Many AMD cards don't work very well with the included open-source driver.
If you are on Ubuntu 12.10, open the Ubuntu Software Center, go to Preferences -> Software Sources and look under the "Additional Drivers" tab. Check out this answer. If you use Ubuntu 12.04, start the "Additional Drivers" utility directly from the dash. (Just type "drivers" or "jockey".) Guide here. BTW, AMD drivers are a lot more stable on 12.04. So if you don't already use that and don't get it fixed on 12.10, you might consider using 12.04.
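One more thing worth trying once the proprietary (fglrx/Catalyst) driver is in place — stated with a caveat, since I can't verify it on your exact Catalyst version — is Catalyst's "Tear Free Desktop" setting. It is exposed in the Catalyst Control Center (amdcccle -> Display Options -> Tear Free), and is commonly reported to be toggleable from a terminal as well:

```
# Enable Catalyst's "Tear Free Desktop" (vsync'd compositing),
# then log out and back in for it to take effect:
aticonfig --set-pcs-u32=DDX,EnableTearFreeDesktop,1
```

If that option isn't available in your driver version, the GUI checkbox in amdcccle is the safer route.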
{ "pile_set_name": "StackExchange" }
Already have an account? Sign in Amir Hussain proves he’s certainly no flash in the pan as he produces another fine helping of heart-stirring uplifting trance. With many successful solo outings under his belt, plus collabs with trance titans Allen Watts and Tangle to name but a few, Amir really continues to up his game and none more so than his follow up to 2013’s ASOT supported ‘Catharsis’. ‘Presence Of Mind’ is a sparkling trance gem littered with emotion and pure power! As with any Amir Hussain production the full fire energy begins at beat one with a chunky kick / perc combo, backed by a rousing arp bassline. Cool female vocal chops fuse with a pumping acid line to lead into a breakdown where sweet, serene vocals provide the backdrop to a stunning pure trance melody!
{ "pile_set_name": "Pile-CC" }
Use the Purdue tools below to calculate your costs and then check out the scholarships available through the university and our college. There are also lots of opportunities to work, which can help cover the cost of your degree.
{ "pile_set_name": "Pile-CC" }
Q: How do I parse an RSA public key in Go? Why do I get the error ssh: short read from the following code:

package main

import (
    "golang.org/x/crypto/ssh"
    "testing"
)

func TestPublicKeyParsing(t *testing.T) {
    key := "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDhZgLqiZYDKCWhyi2gUXIRwIPyMSyXZ6yrwsm3PYfIvFB60kVlNgqDpPVhWoH6eRfaQ1y/xbg4nClZmHEDTvLbTQ1ZoQzzjZ7zvM6aQ4nADmKcCYswEuU94axouVjsHNyMLfOkPXuGec0fChwQ2JDh/B9LCiSDxyhCOgHvETXGXsyBMKjn498iPjJ6snzk35dy5wPZRz41g3dLaygF+wYAT791u/JchHQL7OP7RoNgby+RM16SYZs1tgQVkfU//o+AyTarWYLVDpFU6HPPenE4xEXhbgqd7x3wSNPBsMvY8Zjcu3kdHtboJidyMtKeD8ghV/T24kME58TW15T8Eg8R"
    _, err := ssh.ParsePublicKey([]byte(key))
    if err != nil {
        t.Errorf("ERROR! %s", err)
    }
}

Is the key string in the wrong format? What is the correct format for the public key? A: This looks like the authorized_keys format, which you can parse with ssh.ParseAuthorizedKey. (ssh.ParsePublicKey expects the binary SSH wire format, not a base64 authorized_keys line, which is why it fails with a short read.)

package main

import (
    "golang.org/x/crypto/ssh"
    "testing"
)

func TestPublicKeyParsing(t *testing.T) {
    key := "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDhZgLqiZYDKCWhyi2gUXIRwIPyMSyXZ6yrwsm3PYfIvFB60kVlNgqDpPVhWoH6eRfaQ1y/xbg4nClZmHEDTvLbTQ1ZoQzzjZ7zvM6aQ4nADmKcCYswEuU94axouVjsHNyMLfOkPXuGec0fChwQ2JDh/B9LCiSDxyhCOgHvETXGXsyBMKjn498iPjJ6snzk35dy5wPZRz41g3dLaygF+wYAT791u/JchHQL7OP7RoNgby+RM16SYZs1tgQVkfU//o+AyTarWYLVDpFU6HPPenE4xEXhbgqd7x3wSNPBsMvY8Zjcu3kdHtboJidyMtKeD8ghV/T24kME58TW15T8Eg8R"
    parsedKey, _, _, _, err := ssh.ParseAuthorizedKey([]byte(key))
    if err != nil {
        t.Fatalf("ERROR! %s", err)
    }
    t.Logf("parsed key type: %s", parsedKey.Type())
}
{ "pile_set_name": "StackExchange" }
Q: How to get a nice API Plot in Python? I have an algorithm which automatically plots data on the plot area of my GUI. These are the codes I use to do so:

self.figure = Figure(figsize=(5,4), dpi=100)
self.graph = self.figure.add_subplot(111)
self.canvas = FigureCanvasTkAgg(self.figure, master=self.root)
self.canvas.show()
self.canvas.get_tk_widget().pack(side=LEFT, fill=BOTH, expand=1)
self.canvas._tkcanvas.pack(fill=BOTH, expand=1)

And here is how I display the relevant points onto the plot area:

if eigenvalue > self.eigenvalueThreshold:
    if self.previousEigenvalue <= self.eigenvalueThreshold:
        main.graph.scatter(windowNumber, eigenvalue, c="red")
    else:
        main.graph.scatter(windowNumber, eigenvalue, c="yellow")
else:
    main.graph.scatter(windowNumber, eigenvalue, c="blue")
self.previousEigenvalue = eigenvalue
main.canvas.show()

I get something like this, where the zoom is automatically done as a function of the weight of the set of data points: I would like to know if there is any chance to plot my data like this: Do you have any idea?

A: For what you're wanting, it's easier to use fill_between, rather than fill. If you use fill, you'll have to worry about adding vertices to create a closed polygon. fill_between automatically fills the area between the data and a constant value (0, by default) or another curve. For example:

import matplotlib.pyplot as plt
import numpy as np

# Generate some data to plot...
x = np.arange(1000)
y = np.random.normal(0, 1, x.size).cumsum()
y[y < 0] *= -1

fig, ax = plt.subplots()

# Fill the region between your curve and 0 with red...
ax.fill_between(x, y, facecolor='red', edgecolor='none')

# Optional...
ax.grid(True, color='white', linewidth=2, linestyle='-')
ax.set(axisbelow=True, axis_bgcolor='lightgray')
ax.tick_params(direction='out')

plt.show()
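As for the automatic "zoom": that is matplotlib's autoscaling following the data. You can pin the view instead of letting it rescale — a minimal sketch (the data here is made up, and the headless Agg backend is only used so this runs without a display):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; drop this line in a GUI app
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
data = [abs((i % 50) - 25) for i in range(200)]  # invented sample data
ax.fill_between(range(len(data)), data, facecolor='red', edgecolor='none')

# Pin the view instead of letting autoscale "zoom" with the data:
ax.set_ylim(bottom=0)   # y-axis always starts at 0
ax.margins(x=0)         # no horizontal padding around the data

print(ax.get_ylim()[0], ax.get_xlim()[0])
```

Fixing the limits this way makes the filled area sit flush against the axes, like in the picture you want to reproduce.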
{ "pile_set_name": "StackExchange" }
Introduction {#Sec1}
============

Alzheimer's disease (AD) is a dysfunction of the central nervous system (CNS) and causes dementia, in which patients lose their memory^[@CR1]^. AD results in neuronal deterioration, giving rise to memory and cognitive impairment as well as disturbances in daily activities. As a result, the patient's thinking ability is disrupted, leading to psychiatric disorders^[@CR1]^. In the last stages of the disease, patients require 24-h care^[@CR2]^. Today, the cholinergic hypothesis constitutes the basis for the development of the most common therapeutics for AD treatment. According to this hypothesis, reduced production of the neurotransmitter acetylcholine leads to AD. Therefore, cholinesterase inhibitors (ChEIs) can be considered potential therapeutic agents for the treatment of AD^[@CR3]^. Donepezil HCl (DH) is one of the ChEIs developed for AD treatment^[@CR4]^. DH, as a specific and reversible ChEI, increases the acetylcholine concentration in cholinergic synapses^[@CR4]^. Despite its therapeutic effects, it has various side effects, such as nausea and diarrhea, after oral administration^[@CR5]^. Also, drug delivery in AD is limited due to the blood brain barrier (BBB), which restricts the brain penetration of 98 and 100% of small- and large-molecule drugs, respectively^[@CR6]^. To overcome these issues, hydrogel-based drug delivery systems and drug delivery through the nasal route seem to be appropriate. The nasal mucosa is characterized by high vascularization, a large absorptive surface area, and a rapid onset of action^[@CR4]^. Moreover, this route is an alternative, non-invasive, and painless technique for circumventing the BBB and delivering therapeutics to the CNS^[@CR4]^. Also, hydrogels are cross-linked 3-dimensional (3D) networks, biocompatible with biological systems, and able to absorb a high content of water^[@CR7]^.
Most hydrogels are bio- and muco-adhesive^[@CR8]^ and improve drug bioavailability more than conventional oral drug delivery systems by increasing the contact time between the drug and the absorption site^[@CR9]^. Therefore, hydrogels appear promising for brain drug delivery through the nasal route^[@CR10]^. In this study, various hydrogel formulations were designed, and after characterization, a thiolated chitosan hydrogel (TCH) was selected for incorporation with liposomal DH (LDH) for the intranasal delivery of DH as an anti-Alzheimer drug in rabbits. In this regard, various pharmacokinetic parameters, including the mean peak drug concentration (C~max~), time to reach C~max~ (T~max~), and area under the curve (AUC) of the drug, were evaluated. LDH incorporated into TCH could considerably contribute to the development of an efficacious system for DH brain delivery, due to its promising properties. This formulation is a novel approach, reported for the first time for the brain delivery of DH, which might result in the development of an innovative therapy for AD treatment.

Results and Discussion {#Sec2}
======================

Characterization of hydrogels {#Sec3}
-----------------------------

### Drug loading efficiency {#Sec4}

Drug loading efficiency is critical for polymer carriers, as low drug loading increases the drug cost, decreases therapeutic efficacy, and causes inappropriate release profiles, leading to the limited efficiency of drug delivery systems^[@CR11]^. Therefore, improving the drug loading efficiency is important to obtain efficient drug action. In the present study, the results showed high drug loading efficiency, confirming the potency of the method used for synthesizing DH-loaded hydrogels, in which drug entrapment efficiency for the hydrogels ranged from 79 to 95%.

### Gel fraction {#Sec5}

Gel fraction is the weight ratio of dry gel before and after swelling^[@CR12]^.
It indicates the covalent crosslinking of polymer chains^[@CR13]^ and is inversely associated with the toughness of a polymeric network^[@CR14]^. The poor mechanical properties of PVP limit its use for biomedical applications, and mixing it with other polymers enhances its mechanical characteristics and its usability as a biomedical material^[@CR15]^. In the present study, preparing the PVP hydrogels in aqueous polymer solution and cross-linking them using gamma rays was found to be a simple and effective procedure. Gel fraction data presented in Table [1](#MOESM1){ref-type="media"} (Supplementary Information) and Fig. [1](#Fig1){ref-type="fig"} indicated that the gel fraction generally decreased with increasing PEG concentration and decreasing radiation dose. This results from the fact that PEG functions not only as a plasticizer but also decreases the degree of cross-linking and hinders the gelation process between PVP chains, thereby reducing PVP cross-linking and preventing their physical interactions^[@CR16]^. As PEG is an alcohol, it can function as a radical scavenger, and as a result, it can act as an enhancer or a reducer of hydrogel gelation depending on the radiation dose. This effect can be reversed by increasing the radiation dose, which causes further cross-linking of network chains and increases the gel content^[@CR17]^. Based on these facts, the differences between the gel fraction values of the hydrogels prepared from PVP alone (3, 4 and 6%) and the hydrogels containing PEG 3% were statistically significant (P \< 0.05). These results were in agreement with the results of the studies by El-Mohdy *et al*.^[@CR17]^ and Kim *et al*.^[@CR18]^; however, in the present study, chitosan and TCH showed comparable gel fraction values (65 and 62% for chitosan and TCH, respectively).
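Both quantities used in this characterization — the gel fraction above and the swelling degree treated next — reduce to simple weight ratios. A minimal sketch of the commonly used definitions (the example weights are invented for illustration, not measurements from this study, and exact conventions vary slightly between papers):

```python
def gel_fraction(dry_weight_after: float, dry_weight_before: float) -> float:
    """Gel fraction (%): insoluble, cross-linked mass remaining after
    extraction/swelling, relative to the initial dry mass."""
    return 100.0 * dry_weight_after / dry_weight_before


def swelling_degree(swollen_weight: float, dry_weight: float) -> float:
    """Swelling degree (%): water uptake relative to the dry mass."""
    return 100.0 * (swollen_weight - dry_weight) / dry_weight


# Illustrative numbers only:
print(gel_fraction(0.62, 1.00))      # a 62% gel fraction, as reported for TCH
print(swelling_degree(1.21, 1.00))   # about 21% swelling, as for chitosan
```

With these definitions, a denser cross-linked network retains more insoluble mass (higher gel fraction) and takes up less water (lower swelling degree), which is the inverse relationship the text describes.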
The degree of polymer cross-linking is inversely proportional to the degree of swelling^[@CR19]^, and as the gel fraction increases, a lower swelling degree is achieved^[@CR20]^.

Figure 1 Effect of radiation dose on the hydrogel fraction in different formulations: (**A**) PVP 3% gels, (**B**) PVP 4% gels and (**C**) PVP 6% gels.

### Degree of swelling {#Sec6}

It has been found that cross-linking density is inversely associated with the swelling degree^[@CR21]^. Also, the extensibility and the elastic modulus decrease as the swelling degree increases^[@CR22]^. Moreover, a direct relationship between PEG content and the swelling degree of hydrogels has been reported, as PEG blocks are hydrophilic and drive swelling^[@CR23]^. The results of swelling degree in the present study, shown in Table [2](#MOESM1){ref-type="media"} (Supplementary Information) and Fig. [2](#Fig2){ref-type="fig"}, indicated that the swelling generally increased with increasing PEG concentration and decreasing radiation dose. In other words, the swelling degree was inversely proportional to the cross-linking density. Although these rules were not definite in the present study, the prepared hydrogels generally followed these principles, which was consistent with the results of previous studies^[@CR15],[@CR23]^. Also, the swelling degree of the thiolated chitosan hydrogel (19.8%) was less than that of the chitosan hydrogel (21%), indicating that TCH had higher extensibility and a higher elastic modulus than the chitosan hydrogel.

Figure 2 Effect of different radiation doses on the swelling degree in different formulations: (**A**) PVP 3% gels, (**B**) PVP 4% gels and (**C**) PVP 6% gels.

The most common technique used for prolonging the residence time of topical drugs at the application site is to enhance formulation viscosity using polymers. Therefore, in a polymeric network, rigidity and elasticity depend upon rheological properties^[@CR24]^.
### Evaluation of the hydrogels rheological properties {#Sec7} Rheological measurements afford a quantitative characterization of the visco-elastic characteristics of a substance under the defined conditions^[@CR25]^. These measurements provide valuable information about the behavior and probable behavior of various products, in terms of their viscosity and elasticity^[@CR26]^. Flow behavior is an indirect measurement of product consistency and quality and is associated with several parameters, such as molecular weight and its distribution. In the current study, the results of rheological measurements indicated an inverse relationship between viscosity and shear rate, indicating shear thinning behavior of the samples (Figs [3](#Fig3){ref-type="fig"} and [4](#Fig4){ref-type="fig"}). However, in the hydrogel prepared from PVP 6% and PEG 2%, there was an inverse phenomenon, in which the viscosity was increased by increasing the shear rate over time, known as rheopexy, as shown in Table [3](#MOESM1){ref-type="media"} (Supplementary Information) and Fig. [3](#Fig3){ref-type="fig"}. Decreasing the viscosity of formulations under shear stress in a time-dependent manner is known as thixotropy. Also, the results showed that the shear thinning behavior was more obvious in those hydrogels prepared from PVP 4% compared to those prepared from PVP 6%. Moreover, TCH compared to chitosan showed less viscosity, and it was observed that TCH elasticity had a correlation with the polymer-linked thiol groups as shown in Table [3](#MOESM1){ref-type="media"} (Supplementary Information) and Figs [3](#Fig3){ref-type="fig"} and [4](#Fig4){ref-type="fig"}^[@CR27]^. Therefore, TCH derivatives could be considered as appropriate excipients for the preparation of liquid and semiliquid formulations, which could be stable on the site of drug application, such as mucous membranes^[@CR24]^.Figure 3Flow properties of the various hydrogels.Figure 4Flow curves of the various hydrogels. 
### Mucoadhesion measurement of hydrogels {#Sec8} Contact time has a key role in bioadhesion as a sufficient contact time provides adequate hydration and swelling of the polymer and enhances the interpenetration of moisture and the formation of non-covalent interaction, resulting in the promotion of mucoadhesion^[@CR28]^. Among the different instrumental variables influencing the mucoadhesive properties of polymers, contact force and time, and the removal speed of the probe from the mucosal tissue are the most efficient factors. Chain flexibility is the most important physical mechanism of mucoadhesion, where flexible polymer chains intend to interpenetrate between polymer chains and mucus to a suitable depth for developing a robust adhesive bond^[@CR29]^. Also, diffusion theory explains the interpenetration of polymer and mucin (a glycoprotein in epithelial lining of respiratory tract) chains to an adequate depth to develop a semi-permanent adhesive bond. Adhesion force is enhanced with increasing the penetration degree of the polymeric chains^[@CR25]^. This penetration rate is determined by the factors of diffusion coefficient, mobility, flexibility and the essence of the mucoadhesive chains and contact time^[@CR7]^. According to the previous finding^[@CR30]^, the interpenetration depth needed to develop an efficient bioadhesive bond was in the range of 0.2--0.5 μm. In the present study, Tables [4](#MOESM1){ref-type="media"} and [5](#MOESM1){ref-type="media"} (Supplementary Information) and Fig. [5](#Fig5){ref-type="fig"} illustrate the impact of contact time on the detachment force. It was found that the detachment force was increased with increasing the contact time. This was in consistent with the results obtained by Wong *et al*.^[@CR31]^. The results, also, showed that TCH and the hydrogel prepared from PVP 6% and PEG 2% had the highest and lowest mucoadhesive intensity, respectively. 
Although the hydrogel prepared from PVP 4% and PEG 2% showed the highest force of mucoadhesion among PVP hydrogels, it had poor flexibility, while PVP 4% and PEG 3% hydrogel had less mucoadhesive properties but with better flexibility (Tables [4](#MOESM1){ref-type="media"} and [5](#MOESM1){ref-type="media"} (Supplementary Information), Fig. [5](#Fig5){ref-type="fig"}). Furthermore, it was found that, in all formulations, detachment force was decreased by increasing PVP concentration, while the force was increased with increasing PEG concentration. This finding was related to the PVP concentration as increasing PVP concentration causes the hydrogels to become more rigid, while PEG increases the elasticity and the detachment force. Cationic polymers, such as chitosan, have a good bioadhesive property and swelling in contact with nasal mucosa^[@CR32]^. TCH constitutes disulfide bonds with mucus glycoproteins through reduced thiol groups, which are available on the chitosan backbone. The presence of disulfide bonds inter- and intra- molecularly makes chitosan as a highly stable drug carrier^[@CR30]^. The formation of disulphide bonds between TCH and cysteine rich sub domains of mucus glycoproteins will augment the mucoadhesiveness properties of TCH^[@CR33]^. The chitosan cross-linking depends upon the accessibility of the cationic sites and the negatively charged species^[@CR34]^.Figure 5Effects of contact time on the adhesion force of different formulations: (**A**) PVP 3% gels; (**B**) PVP 4% gels; (**C**) PVP 6% gels; and (**D**) chitosan and thiolated chitosan gels. *In vitro* and *ex-vivo* drug release from hydrogels {#Sec9} ---------------------------------------------------- Drug delivery systems with controlled release properties are a new approach for the treatment of different diseases. In these systems, interactions between the drug, polymers and the used medium are the key factors which determine the release rate magnitude and release order^[@CR35]^. 
The ability of a drug delivery system to extend the drug release in a controlled manner is defined as controlled release. In this regard, brain diseases are especially good candidates for controlled release methods, due to the presence of the BBB^[@CR36]^. Figure [6](#Fig6){ref-type="fig"} and Table [1](#Tab1){ref-type="table"} show DH release from different hydrogel formulations. The formulation containing PVP 4% and PEG 3% showed the highest release rate (98.25%) of the drug among the PVP hydrogels. These hydrogels were prepared by cross-linking at the radiation dose of 20 kGy as the efficient dose. It was identified that increasing the PVP content results in a decrease in the drug release rate, owing to the enhancement of the cross-linking density. In addition, with the increase of PVP concentration, the network formation is enhanced, and the elasticity and flexibility are decreased (Fig. [6](#Fig6){ref-type="fig"}). In contrast, PEG incorporation into hydrogels reduces the cross-linking density and network formation, and consequently increases the elasticity and flexibility, resulting in an enhanced release rate of the drug. PEG is a hygroscopic molecule and absorbs water molecules from the release medium, increasing the dissolving medium for the drug, reducing the hydrogel viscosity and increasing the drug release. By increasing the polymer concentration, the viscosity of the hydrogel also increases, resulting in reduced drug release. The study by Ko *et al*.^[@CR37]^ showed the relationship between the drug release profile and the viscosity of the chitosan solution. Also, the current study showed that TCH had a higher drug release rate compared to chitosan and the hydrogels prepared from PVP 3% and PEG 3%. This is due to the fact that chitosan functions as a permeation enhancer, and this effect intensifies in the presence of thiol groups^[@CR38]^.
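The Korsmeyer-Peppas fitting reported in Table 1 (M~t~/M~∞~ = Kt^n^) is typically done by least-squares regression on the log-log form, log(M~t~/M~∞~) = log K + n·log t. A self-contained sketch with synthetic release data (the data is generated from known parameters so the fit should recover them; these are not the measured DH profiles from this study):

```python
import math

def fit_korsmeyer_peppas(times, fractions):
    """Least-squares fit of log(Mt/Minf) = log(K) + n*log(t).
    Returns (K, n, r), where r is the correlation coefficient."""
    xs = [math.log(t) for t in times]
    ys = [math.log(f) for f in fractions]
    m = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    syy = sum(y * y for y in ys)
    n = (m * sxy - sx * sy) / (m * sxx - sx * sx)   # slope -> release exponent n
    logk = (sy - n * sx) / m                        # intercept -> log K
    r = (m * sxy - sx * sy) / math.sqrt((m * sxx - sx**2) * (m * syy - sy**2))
    return math.exp(logk), n, r

# Synthetic data generated from K = 0.05, n = 0.9:
times = [1, 2, 4, 8, 16]
fracs = [0.05 * t ** 0.9 for t in times]
K, n, r = fit_korsmeyer_peppas(times, fracs)
print(round(K, 3), round(n, 3), round(r, 4))
```

The exponent n is what makes this model diagnostic: for the commonly cited thin-film case, n near 0.5 suggests Fickian diffusion while larger values indicate anomalous or relaxation-controlled transport, which is why n values around 0.9-0.99 (Table 1) point to a non-Fickian, controlled release pattern.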
Moreover, chitosan can enhance the paracellular absorption route, which plays a key role in the transport of hydrophilic therapeutics across the membrane^[@CR32]^. Table [1](#Tab1){ref-type="table"} summarizes the kinetic analysis of the cumulative release profiles of DH from the hydrogels. The PVP 6% and PVP 6% + PEG 3% formulations fit the Korsmeyer-Peppas mathematical model best (r \> 0.999). These findings were in agreement with the results of previous studies^[@CR39],[@CR40]^. In conclusion, these findings indicated that the prepared hydrogels had a controlled drug release pattern, making them suitable for biomedical applications.

Figure 6. DH release profile from different formulations: (**A**) PVP 3% gels; (**B**) PVP 4% gels; (**C**) PVP 6% gels; and (**D**) chitosan, thiolated chitosan, thiolated chitosan *ex vivo*, PVP 4% + PEG 3%, and PVP 4% + PEG 3% *ex vivo*.

Table 1. Release rate constants (K) and correlation coefficients (r) obtained after fitting various mathematical models^[@CR83]^ (zero-order: Q~t~ = Q~0~ + K~0~t; Higuchi: Q = K~H~t^1/2^; first-order: logC = logC~0~ − Kt/2.303; Korsmeyer-Peppas (K-P): M~t~/M~∞~ = Kt^n^; Hixson-Crowell (H-C): W~0~^1/3^ − W~t~^1/3^ = K~HC~t) into the release profiles of DH from the hydrogel formulations.

| Gel type | Zero-order K | r | Higuchi K | r | First-order K | r | K-P n | K-P K | r | H-C K | r |
|---|---|---|---|---|---|---|---|---|---|---|---|
| PVP 3% | 0.036 | 0.9830 | 0.550 | 0.9323 | 0.00 | 0.9811 | 0.92 | 0.006 | 0.9853 | 0.000 | 0.9817 |
| PVP 3% + PEG 1% | 0.30 | 0.9642 | 0.444 | 0.8763 | 0.000 | 0.9603 | 0.91 | 0.008 | 0.9960 | 0.00 | 0.9616 |
| PVP 3% + PEG 2% | 0.133 | 0.9836 | 5.828 | 0.9571 | 0.002 | 0.9674 | 0.90 | 0.056 | 0.9973 | 0.001 | 0.9729 |
| PVP 3% + PEG 3% | 0.158 | 0.9903 | 2.422 | 0.9271 | 0.002 | 0.9735 | 0.9 | 0.094 | 0.9970 | 0.001 | 0.9795 |
| PVP 4% | 0.021 | 0.8990 | 0.302 | 0.7858 | 0.000 | 0.8948 | 0.97 | 0.001 | 0.9952 | 0.00 | 0.8962 |
| PVP 4% + PEG 1% | 0.046 | 0.8052 | 0.663 | 0.6894 | 0.000 | 0.7938 | 0.99 | 0.000 | 0.9878 | 0.000 | 0.8962 |
| PVP 4% + PEG 2% | 0.122 | 0.9646 | 1.831 | 0.8739 | 0.001 | 0.9444 | 0.98 | 0.025 | 0.9979 | 0.000 | 0.9511 |
| PVP 4% + PEG 3% | 0.246 | 0.9883 | 3.729 | 0.9144 | 0.004 | 0.9508 | 0.97 | 0.115 | 0.9956 | 0.001 | 0.9636 |
| PVP 6% | 0.011 | 0.9867 | 0.171 | 0.9110 | 0.000 | 0.9857 | 0.96 | 0.002 | 0.9994 | 0.000 | 0.9861 |
| PVP 6% + PEG 1% | 0.009 | 0.8357 | 0.142 | 0.8523 | 0.000 | 0.8468 | 0.92 | 0.007 | 0.8605 | 0.000 | 0.8464 |
| PVP 6% + PEG 2% | 0.021 | 0.9735 | 0.314 | 0.8891 | 0.000 | 0.9713 | 0.95 | 0.003 | 0.9976 | 0.000 | 0.9717 |
| PVP 6% + PEG 3% | 0.028 | 0.9839 | 0.427 | 0.9041 | 0.000 | 0.9812 | 0.96 | 0.002 | 0.9993 | 0.001 | 0.9821 |
| Chitosan | 0.102 | 0.9445 | 1.519 | 0.8482 | 0.001 | 0.9263 | 0.92 | 0.007 | 0.9926 | 0.000 | 0.9323 |
| Thiolated chitosan | 0.323 | 0.9923 | 4.943 | 0.9265 | 0.010 | 0.9320 | 0.95 | 0.132 | 0.9991 | 0.002 | 0.9498 |

Histopathological effects of the hydrogels {#Sec10}
------------------------------------------

Drug delivery systems must be safe for human clinical use^[@CR41]^. In this regard, histopathological studies are commonly used for the toxicity assessment of formulations. The sheep olfactory mucosa, like that of other mammals, including humans, consists of the olfactory epithelium and the basal lamina propria. The olfactory epithelium itself consists of basal, receptor and supporting cells, while Bowman's glands are confined to the superficial part of the lamina propria^[@CR42]^. The findings of the present study showed that PVP hydrogels without additives led to only slight histological changes in the olfactory mucosa (Fig. [7](#Fig7){ref-type="fig"}). There were hypotrophy of the Bowman's glands and cellular infiltration in the connective tissue of the lamina propria surrounding the glands compared to the control group, which showed normal respiratory epithelium with preserved cilia. The olfactory epithelium appeared unaffected. The hydrogel containing PVP 4% and PEG 3% led to substantial histological changes in the olfactory mucosa, in which degeneration and sloughing of the epithelial surface were observed, with ulcers in some regions of the epithelium. Degeneration and cellular infiltration of the Bowman's glands, and dilation and congestion of the blood vessels, were also observed. Other changes included apoptotic nuclei in the connective tissue of the lamina propria and the Bowman's glands. In the case of the chitosan hydrogel (Fig.
[7C](#Fig7){ref-type="fig"}), marked cellular infiltration in the connective tissue of the lamina propria, apoptosis, and sloughing of the epithelial surface were observed, associated with stratification of the epithelium (hyperplasia) in some regions. However, TCH (Fig. [7B](#Fig7){ref-type="fig"}) did not cause significant histological changes; the olfactory mucosa appeared similar to that of the control group, with only minor alterations. In this case, a few degenerative changes in the epithelial surface and in the Bowman's glands were observed in some regions of the olfactory mucosa. The ability of mucoadhesive polymers to interact with the mucus layer makes them suitable for combination with liposomes, inhibiting liposome clearance and increasing the absorption of liposome-loaded drugs^[@CR43]^.

Figure 7. Histopathological photomicrographs of (**A**) the normal sheep nasal mucosa (control) (magnification ×20); (**B**) nasal mucosa after treatment with hydrogels without additives; hypotrophy of the Bowman's gland and cellular infiltration in the connective tissue of the lamina propria surrounding the gland are perceivable; (**C**) nasal mucosa after treatment with the PVP + PEG hydrogels; (**D**) histological characterization of nasal mucosa after using chitosan hydrogel; and (**E**) nasal mucosa after treatment with thiolated chitosan hydrogel (magnification ×10).

Characterization of liposomes {#Sec11}
-----------------------------

A liposome is a small vesicle whose composition is similar to that of the cell membrane^[@CR44]^. Reverse-phase rotary evaporation is a simple technique for producing unilamellar liposomes with high encapsulation^[@CR45]^. In the present investigation, liposomes were prepared by two methods, both based on the reverse-phase evaporation technique.
In the first method, a short sonication was used to prepare a water/oil emulsion from a two-phase system containing phospholipids in chloroform and phosphate-buffered saline (PBS) containing DH. The liposomes were formed by removing the residual solvent by continuous rotary evaporation under reduced pressure, while in the second method the preparation was subjected to strong vortexing to convert it into a suspension. The liposomal size influences the drug encapsulation efficiency^[@CR46]^. Various studies have shown that the reverse-phase evaporation technique yields vesicles with mean diameters of 200--500 nm^[@CR47],[@CR48]^. The size of the vesicles prepared by this method depends on the lipid composition and the solvent. In this study, the size of the prepared liposomes was 438.7 ± 28.3 nm. The SEM images (Fig. [8](#Fig8){ref-type="fig"}) showed that the LDH particles were large unilamellar vesicles with both oval and spherical shapes. In addition, the drug entrapment efficiency of the liposomes prepared by the first and second methods was 38% ± 0.7 and 62.5% ± 0.6, respectively. The second method is widely used to prepare liposomes, since it helps to increase the drug encapsulation efficiency to 77--79%^[@CR49]^.

Figure 8. (**A**) SEM image of liposomes prepared by the first reverse-phase evaporation method; (**B**) SEM image of liposomes prepared by the second reverse-phase method.

*In vitro* drug release from LDH incorporated into TCH {#Sec12}
------------------------------------------------------

DH is freely soluble in water and can produce effective complexation, owing to its favorable ionization^[@CR50]^.
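The entrapment efficiencies quoted above follow from the indirect dialysis measurement, in which drug missing from the receptor medium is counted as liposome-associated. A minimal sketch of that arithmetic, with illustrative free-drug values chosen to reproduce the reported 38% and 62.5% (they are not the study's raw measurements):

```python
def entrapment_efficiency(initial_mg, free_mg):
    """Indirect entrapment efficiency (%): drug not recovered in the
    receptor medium is assumed to be liposome-associated."""
    if initial_mg <= 0:
        raise ValueError("initial drug amount must be positive")
    return 100.0 * (initial_mg - free_mg) / initial_mg

# Illustrative numbers only: 5 mg DH loaded, free drug measured by dialysis.
ee_method_1 = entrapment_efficiency(5.0, 3.1)    # first method, ~38%
ee_method_2 = entrapment_efficiency(5.0, 1.875)  # second method, ~62.5%
print(round(ee_method_1, 1), round(ee_method_2, 1))
```

The same ratio underlies formula (4) in the Methods; only the measured free-drug amounts differ between preparations.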
The hydrogel composition (type of polymer, drug and additives), geometry (size and shape)^[@CR51]^, preparation method (*in-situ* drug loading versus post-loading)^[@CR52]^ and environmental conditions during drug release determine the release mechanism^[@CR51]^. The drug loading method and the physicochemical properties of the lipid membrane are the most important factors influencing the drug release rate from liposomes^[@CR53]^. In the current study, the drug release from LDH incorporated into TCH was lower than that from LDH as the positive control (Fig. [9](#Fig9){ref-type="fig"}). The higher release rate of DH from the plain liposomes compared to liposomes incorporated into TCH might be attributed to the retarding effect of the hydrogel^[@CR54]^. As Fig. [9](#Fig9){ref-type="fig"} shows, the drug release from liposomes incorporated into TCH was higher (94.7%) than that from most hydrogels, which might result from the polymer linkage. This suggests that some defects occurred in the liposome bilayers, due to the forces applied to the phospholipid head groups by intermolecular cross-linking^[@CR55]^. Nevertheless, the interactions between the thiolated polymer matrix and LDH do not affect the controlled and sustained release profile^[@CR55]^.
The release profile of DH from liposomes incorporated into TCH showed the best fit with the Korsmeyer-Peppas model (Table [2](#Tab2){ref-type="table"})^[@CR56]^.

Figure 9. The profile of DH release from various liposomal hydrogels.

Table 2. Release rate constants (K) and correlation coefficients (r) obtained after fitting various mathematical models (zero-order: F = K~0~t; Higuchi: F = K~H~t^0.5^; first-order: F = 100[1 − exp(−K~1~t)]; Korsmeyer-Peppas (K-P): F = K~KP~t^n^; Hixson-Crowell (H-C): F = 100[1 − (1 − K~HC~t)^3^]) into the release profile of LDH incorporated into TCH.

| Hydrogel type | Zero-order K | r | Higuchi K | r | First-order K | r | K-P K | r | H-C K | r |
|---|---|---|---|---|---|---|---|---|---|---|
| Thiolated chitosan | 0.322 | 0.9917 | 4.931 | 0.9259 | 0.008 | 0.9306 | 0.152 | 0.9917 | 0.9998 | 0.9988 |

Stability study {#Sec13}
---------------

Stability is one of the important factors in the design and development of formulations. The assessment of size distribution is important for studying the physical (colloidal) stability of a formulation under storage conditions and in a biological medium^[@CR57]^. The physical stability of liposomes can be studied by measuring their size, which increases owing to aggregation and/or fusion. The physical stability of liposomes can be increased by enhancing the magnitude of the liposome charge (positive or negative)^[@CR58]^. It has also been found that the surface charge density of liposomes influences their distribution in an *in vivo* environment^[@CR59]^. Drug release from liposomes and changes in liposome size can occur due to physical processes affecting their shelf life, such as aggregation/flocculation and fusion/coalescence; a colloidal dispersion is often thermodynamically unstable^[@CR60]^. Physically stable liposomal formulations preserve their size distribution and drug entrapment efficiency. Such stability is determined by different factors, such as the thermodynamics and colloidal properties of the system^[@CR54]^.
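Korsmeyer-Peppas constants of the kind reported above are typically obtained by linear regression on the log-transformed power law M~t~/M~∞~ = Kt^n^. A minimal sketch of that fit, verified on synthetic Fickian data rather than the study's measurements (restricting the fit to fractional release ≤ 0.6 is the usual convention for this model):

```python
import math

def fit_korsmeyer_peppas(t, frac_released):
    """Fit log(Mt/Minf) = log(K) + n*log(t) by ordinary least squares.
    Only points with 0 < fractional release <= 0.6 are used."""
    pts = [(math.log(ti), math.log(f)) for ti, f in zip(t, frac_released)
           if 0 < f <= 0.6]
    m = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    n = (m * sxy - sx * sy) / (m * sxx - sx * sx)   # release exponent
    log_k = (sy - n * sx) / m                        # intercept -> log(K)
    return math.exp(log_k), n

# Sanity check with synthetic Fickian data (K = 0.1, n = 0.5):
t = [1, 4, 9, 16, 25]
frac = [0.1 * ti ** 0.5 for ti in t]
K, n = fit_korsmeyer_peppas(t, frac)
print(round(K, 3), round(n, 3))  # 0.1 0.5
```

An exponent n ≈ 0.5 indicates Fickian diffusion, while 0.5 < n < 1 (as reported for these hydrogels) indicates anomalous, diffusion-plus-relaxation transport.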
The size of particles for nasal delivery should be above 10 µm, because smaller particles (\<10 µm) can be carried to the tracheobronchial region with the airstream, while larger particles deposit mostly in the nose. Therefore, the size range of 40--60 µm is appropriate for nasal delivery^[@CR61]^. The particles prepared in the present study ranged from 45--58 µm. The storage results for the various formulations are shown in Figs [10](#Fig10){ref-type="fig"}--[13](#Fig13){ref-type="fig"}. The liposome morphology was affected by temperature: it changed remarkably at 20 °C, while it remained approximately stable at 4 °C. Liposomes incorporated into TCH were considerably more stable than the unincorporated ones (Fig. [13](#Fig13){ref-type="fig"}).

Figure 10. SEM images of LDH incorporated into TCH during the stability study: (**A1**) liposomes stored at 4 °C for one week; (**A2**) at 4 °C for 2 weeks; (**A3**) at 4 °C for 3 weeks; and (**A4**) at 4 °C for 4 weeks.

Figure 11. SEM images of LDH incorporated into TCH during the stability study: (**A1**) liposomes stored at 20 °C for one week; (**A2**) at 20 °C for 2 weeks; (**A3**) at 20 °C for 3 weeks; and (**A4**) at 20 °C for 4 weeks.

Figure 12. SEM images of LDH during the stability study: (**B1**) liposomes stored at 4 °C for one week; (**B2**) at 4 °C for 2 weeks; (**B3**) at 4 °C for 3 weeks; and (**B4**) at 4 °C for 4 weeks.

Figure 13. SEM images of LDH during the stability study: (**B1**) liposomes stored at 20 °C for one week; (**B2**) at 20 °C for 2 weeks; (**B3**) at 20 °C for 3 weeks; and (**B4**) at 20 °C for 4 weeks.

Chromatographic system and conditions {#Sec14}
-------------------------------------

In the present study, the blank plasma (Fig.
[14A](#Fig14){ref-type="fig"}), in comparison with the spiked plasma samples, did not show significant interfering endogenous peaks at the retention times corresponding to DH (Fig. [14B1,C1](#Fig14){ref-type="fig"}) and the IS (Fig. [14B2,C2](#Fig14){ref-type="fig"}), indicating the specificity of the assay. The chromatographic behavior of the QC samples was similar to that of the actual plasma samples (Fig. [14B1 vs. B2,C1 vs. C2](#Fig14){ref-type="fig"}).

Figure 14. (**A**) Chromatogram of blank plasma; (**B**) UPLC-MS/MS chromatogram of extracted plasma spiked with 1 ng DH and 40 ng IS; (**C**) UPLC-MS/MS chromatogram of extracted DH 12 h after intranasal administration to rabbits, and multiple reaction monitoring (MRM) transitions of the IS.

The chromatograms obtained from the plasma sample containing DH and IS, collected 12 h after nasal administration of LDH incorporated into TCH to the rabbits, are shown in Fig. [14C](#Fig14){ref-type="fig"}. The peaks corresponding to DH and the IS were distinct, with retention times of approximately 1 min and 2.6 min, respectively. The chromatographic run time of 3.5 min was sufficient for sample analysis. To evaluate the linearity of the method, calibration curves were constructed using six series of plasma samples containing 1--50 ng/mL of DH. A linear relationship (Y = 0.013 + 0.251X) was obtained between the peak area ratio of DH to IS and the corresponding DH concentration, with a correlation coefficient (r) of \>0.992 for all tested standard calibration curves. The high correlation coefficient confirmed the linearity of the calibration curve. The intra- and inter-day precision values were \<16.5% and the accuracy was \<17.6% for the DH concentrations^[@CR62]^.

Pharmacokinetic study {#Sec15}
---------------------

After validating the proposed bioanalytical method, it was successfully used to measure the plasma DH concentrations for the pharmacokinetic study in New Zealand white rabbits.
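Unknown plasma concentrations are read off the reported calibration line Y = 0.013 + 0.251X by inverting it; a minimal sketch (the example peak-area ratio is hypothetical):

```python
def concentration_from_ratio(peak_area_ratio, intercept=0.013, slope=0.251):
    """Back-calculate DH plasma concentration (ng/mL) from the DH/IS
    peak-area ratio using the reported calibration line Y = 0.013 + 0.251*X."""
    return (peak_area_ratio - intercept) / slope

# A hypothetical measured ratio of 2.523 corresponds to 10 ng/mL on this line:
print(round(concentration_from_ratio(2.523), 1))  # 10.0
```

Only ratios falling within the validated 1--50 ng/mL range should be back-calculated this way; samples above the range would normally be diluted and re-assayed.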
Aside from economic reasons, rabbits are common laboratory animals because of their size, temperament and the availability of genetically homogeneous strains^[@CR63]^. In the current study, New Zealand white rabbits, as a well-controlled animal model, were used to evaluate the absorption potency of the formulations via the nasal route^[@CR64]^. The DH plasma concentration-time profiles (mean ± SD) in rabbits after oral administration of a 5 mg DH tablet and nasal administration of 5 mg of LDH incorporated into TCH are shown in Fig. [15](#Fig15){ref-type="fig"}. The pharmacokinetic parameters for the tablet and for LDH incorporated into TCH after oral and intranasal administration of DH are summarized in Table [3](#Tab3){ref-type="table"}. The maximum plasma concentration of DH was reached 1 h after intranasal administration of LDH incorporated into TCH and 1.7 h after oral tablet administration. C~max~ was 12 ± 1.3 ng/mL for intranasal administration of LDH incorporated into TCH and 8.2 ± 1.4 ng/mL for oral administration of the DH tablets, indicating that the liposomes incorporated into TCH increased C~max~ via the nasal route by 46%. Moreover, the intranasal formulation increased the AUC by 39% compared with the oral DH tablets. Overall, the DH measurements in blood and brain proved that intranasal DH delivery through LDH incorporated into TCH produced higher DH concentrations in the blood circulation and brain than the oral DH tablets. The mean brain content of DH was \>2-fold higher after intranasal administration of LDH incorporated into TCH compared with the oral tablets.
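Parameters of the kind reported here (C~max~, T~max~, AUC~(0-t)~) come from standard non-compartmental analysis of the concentration-time profile. A minimal sketch using a hypothetical profile, not the study's data, with the linear trapezoidal rule assumed for the AUC:

```python
def cmax_tmax(times, conc):
    """Peak concentration and the time at which it occurs."""
    cmax = max(conc)
    return cmax, times[conc.index(cmax)]

def auc_trapezoid(times, conc):
    """AUC(0-t) by the linear trapezoidal rule."""
    return sum((t2 - t1) * (c1 + c2) / 2.0
               for t1, t2, c1, c2 in zip(times, times[1:], conc, conc[1:]))

# Hypothetical profile loosely resembling the intranasal arm (ng/mL vs. h):
t = [0, 1, 2, 4, 6]
c = [0.0, 12.0, 9.0, 4.0, 2.0]
cmax, tmax = cmax_tmax(t, c)
print(cmax, tmax, auc_trapezoid(t, c))  # 12.0 1 35.5
```

The per-animal values in Table 3 would each be derived this way from that rabbit's sampled profile before averaging.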
This confirmed that intranasal LDH incorporated into TCH is superior for DH delivery to the rabbit brain compared with the oral tablets.

Figure 15. DH plasma concentration-time profiles in rabbits after intranasal administration of LDH incorporated into TCH (n = 6) and oral administration of a DH tablet (n = 6).

Table 3. Pharmacokinetic parameters for the tablet and intranasal LDH incorporated into TCH in rabbit plasma, and brain content.

| Parameter | Intranasal, rabbit 1 | 2 | 3 | 4 | Mean ± SD | Tablet, rabbit 1 | 2 | 3 | Mean ± SD |
|---|---|---|---|---|---|---|---|---|---|
| Dose (mg) | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 ± 0.0 | 0.4 | 0.4 | 0.4 | 0.4 ± 0.0 |
| AUC~(0-t)~ ((mg/l)·h) | 15.1 | 33.4 | 16.3 | 12.3 | 19.3 ± 2.1 | 16.4 | 11.4 | 13.9 | 13.9 ± 2.5 |
| C~max~ (ng/ml) | 15.2 | 15.4 | 10.9 | 6.3 | 12.0 ± 1.3 | 7.4 | 6.4 | 10.9 | 8.2 ± 1.4 |
| T~max~ (h) | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 ± 0.0 | 2.0 | 1.0 | 2.0 | 1.7 ± 0.2 |
| T~1/2~ (h) | 8.4 | 1.1 | 7.4 | 6.2 | 5.8 ± 0.7 | 8.4 | 9.5 | 5.3 | 7.7 ± 2.2 |
| MRT (h)^\*^ | 17.5 | 5.4 | 14.2 | 14.7 | 13.0 ± 1.4 | 15.9 | 20.7 | 12.6 | 16.4 ± 1.5 |
| CL/F ((L/h)/kg) | 3.1 | 3 | 3.7 | 4.5 | 3.6 ± 0.4 | 3.2 | 3.1 | 4.6 | 3.6 ± 0.7 |
| Brain content (ng/g) | 29.1 | 52.8 | 34.4 | 33.5 | 37.5 ± 3.5 | 4.7 | 16.6 | 33.1 | 18.1 ± 1.8 |

^\*^MRT: mean residence time.

Conclusion {#Sec16}
==========

In the current study, the synthesis of different DH hydrogels for intranasal DH delivery was reported. They were synthesized using gamma radiation and different natural and synthetic polymers. After *in vitro* characterization, TCH was selected for incorporation with LDH and subsequent pharmacokinetic evaluation in rabbits. The formulation was administered via the nasal route. The *in vivo* results showed that LDH incorporated into TCH significantly increased the blood concentration and brain content of DH compared with oral DH tablets. In conclusion, this study suggests further investigation of LDH incorporated into TCH in animal models of Alzheimer's disease.

Materials and Methods {#Sec17}
=====================

Materials {#Sec18}
---------

DH was provided by Jai Radhe Sales (Gujarat, India).
Chitosan (103.2 g/mol, degree of deacetylation 76.6%), 2-iminothiolane hydrochloride, 2-mercaptoethanol, triethanolamine (reagent grade), formic acid (ultra-performance liquid chromatography-MS/MS (UPLC-MS/MS) grade), clopidogrel (as the internal standard (IS)), ammonium formate, and acetonitrile were purchased from Sigma-Aldrich (St. Louis, MO, USA). Cholesterol (CHOL) and dipalmitoylphosphatidylcholine (DPPC) were purchased from Fisher Scientific (Fair Lawn, NJ, USA) and Avanti Polar Lipids Inc. (Birmingham, AL, USA), respectively. Polyvinylpyrrolidone (PVP), known as Luviskol® K-9, was purchased from BASF (Aktiengesellschaft, Ludwigshafen, Germany). Polyethylene glycol 600 (PEG600) was obtained from Merck Schuchardt (Hohenbrunn, Germany). Lactic acid, glutaraldehyde (50% solution) and chloroform were purchased from BDH Laboratory Supplies (Poole BH15 1TD, England). Monobasic potassium dihydrogen phosphate and cellophane membrane (Spectra/Por® dialysis tubing from regenerated cellulose, cut-off 12,000--14,000 Da) were purchased from Panreac Química SA (Barcelona, Spain) and Spectrum Laboratories (Houston, TX), respectively. All chemicals were of analytical grade and used as received.

*In vitro* study equipment {#Sec19}
--------------------------

Drug diffusion from the different formulations was evaluated using a vertical jacketed 15 mL Franz cell with a 12 mL receptor volume provided by PermeGear Inc. (3575 North Drive, Bethlehem, PA 18015, Ref. no. \#6G-01-00-20-15). A Rotavapor R-210 (Büchi Labortechnik AG, Flawil, Switzerland) was used for liposome preparation. The morphology of the formulations was evaluated using scanning electron microscopy (SEM) (JSM-6060; JEOL, Tokyo, Japan). The mean particle size of the formulations was determined by dynamic light scattering (DLS) using a 90 Plus particle size analyzer (laser nanoparticle size analyzer model no. 90 Plus, software version 3.74, Brookhaven Instruments, Holtsville, New York, USA).
A Milli-Q Plus system (Millipore, France) was used to prepare de-ionized water.

Preparation of donepezil HCl-loaded PVP hydrogels {#Sec20}
-------------------------------------------------

PVP hydrogels were synthesized based on the method described by Abd El-Mohdy *et al*.^[@CR17]^ with some modifications. Briefly, PVP solutions (3, 4 and 6% w/w), with PEG (1, 2 and 3% w/w) and without it, were prepared in 100 mL of deionized water, and the samples were stirred (5 min, 30 RPM). The solutions were transferred into glass bottles, and nitrogen was bubbled through the solutions to remove oxygen. Next, the bottles were sealed and irradiated at room temperature with gamma radiation from a Cobalt-60 source at doses of 15, 20, 25 and 30 kGy. The products were fully sterile, permanent hydrogels. The irradiation time was calculated from the dose rate of the radiation source (51.7 Gy/min). DH was homogeneously dispersed in the hydrogels after cross-linking, at a final drug concentration of 20 mg/g. The pH of the prepared hydrogels was adjusted to 6.4 ± 0.15 with triethanolamine. Blank hydrogels were prepared without adding the drug.

Preparation of donepezil-loaded chitosan hydrogel {#Sec21}
-------------------------------------------------

The chitosan hydrogel was prepared according to the study by Chen *et al*.^[@CR65]^. For this purpose, 2 g of chitosan was added to acetic acid solution (100 mL, 0.1 M) and stirred (room temperature, 150 RPM) to achieve a homogeneous solution. Chitosan contains -NH~2~ amino groups, which can interact with glutaraldehyde as a cross-linking agent^[@CR66]^. The cross-linking reaction between chitosan and glutaraldehyde was carried out at room temperature for 4 h. The pH of the hydrogel was adjusted to the nasal pH (5.5 ± 0.15). The drug was loaded into the hydrogel as described above.
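Given the stated source dose rate of 51.7 Gy/min, the exposure time needed for each target dose follows directly; a minimal sketch (for example, 20 kGy requires roughly 387 min):

```python
def irradiation_time_min(dose_kGy, dose_rate_Gy_per_min=51.7):
    """Exposure time (min) needed to deliver a target gamma dose at the
    stated Co-60 source dose rate of 51.7 Gy/min."""
    return dose_kGy * 1000.0 / dose_rate_Gy_per_min

for dose in (15, 20, 25, 30):
    print(dose, "kGy ->", round(irradiation_time_min(dose), 1), "min")
```

In practice the dose rate decays with the Co-60 source age (half-life ~5.27 years), so the rate would be re-measured periodically and the time recomputed.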
Preparation of donepezil HCl-loaded thiolated chitosan hydrogel {#Sec22}
---------------------------------------------------------------

TCH was prepared according to the method described by Andreas *et al*.^[@CR67]^ with some minor modifications. Briefly, a chitosan solution (1 g/700 mL of 1% acetic acid) was prepared, and 0.2 g of 2-iminothiolane HCl (Traut's reagent) was added. The pH was adjusted to the nasal pH (5.5 ± 0.15). Oxidation during the coupling reaction was inhibited using 2-mercaptoethanol at a final concentration of 3% (v/v). After 24-h incubation (room temperature, 200 RPM), the mixture was dialyzed against 5 mM HCl, then twice against 5 mM HCl containing 1% NaCl, then 5 mM HCl, and finally 1 mM HCl, and stored at 4 °C. The drug was incorporated as before.

Characterization of hydrogels {#Sec23}
-----------------------------

### Drug loading efficiency {#Sec24}

Drug loading efficiency was measured indirectly. For this purpose, the swollen hydrogels incubated with DH were separately removed from the drug solutions, washed with 10 mL of distilled water, and dried in an oven at 37 °C for 24 h. The drug concentration remaining in the solution was measured spectrophotometrically at 230 nm. The drug loading efficiency was then estimated using formula ([1](#Equa){ref-type=""}):

$$Drug\,loading\,efficiency=\frac{W_{dg}}{W_{g}}$$

where W~dg~ is the mass of the drug loaded into the hydrogel and W~g~ is the initial amount of drug.

### Gel fraction {#Sec25}

The dried gels were immersed in hot water for 24 h, and the water was replaced with fresh water every 4 h.
The gel fraction was determined by measuring the weight of the dry gel before extraction and the weight of the dry gel after extraction^[@CR15]^, using formula ([2](#Equb){ref-type=""}):

$$Gel\,fraction\,( \% )=\frac{W_{f}}{W_{o}}\times 100$$

where W~f~ and W~o~ are the weights of the dry gel after and before extraction, respectively.

### Degree of swelling {#Sec26}

The degree of swelling is defined as the water absorption capacity of the hydrogel. The weighed dried hydrogels were immersed in redistilled water and incubated for 24 h (room temperature) to reach the equilibrium swelling state. The superficial water on the swollen gel was then removed with tissue paper, and the gel was immediately weighed. The degree of swelling was calculated using formula ([3](#Equc){ref-type=""}):

$$Degree\,of\,swelling\,( \% )=\frac{W_{s}-W_{d}}{W_{d}}\times 100$$

where W~s~ is the weight of the swollen gel and W~d~ is the weight of the dry gel^[@CR68]^.

### Evaluation of the hydrogels' rheological properties {#Sec27}

The rheograms of the bioadhesive hydrogels were obtained at 37 °C using a rheometer equipped with a cone-and-plate measuring system. Samples were carefully placed on the plate so that shearing of the formulation did not occur, and the shear rate, shear stress and viscosity were monitored to investigate the flow profile, zero-shear-rate viscosity and thixotropy. The rheograms were obtained from at least three determinations^[@CR69]^.
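Formulas (1)-(3) above are simple mass ratios; a minimal sketch with illustrative weights (not the study's measurements):

```python
def loading_efficiency(initial_mg, remaining_mg):
    """Formula (1), indirect: fraction of the offered drug taken up
    by the gel (drug lost from the loading solution counts as loaded)."""
    return (initial_mg - remaining_mg) / initial_mg

def gel_fraction_percent(w_after, w_before):
    """Formula (2): insoluble (cross-linked) fraction of the dry gel,
    taken as dry weight after extraction over dry weight before."""
    return 100.0 * w_after / w_before

def degree_of_swelling_percent(w_swollen, w_dry):
    """Formula (3): equilibrium water uptake relative to dry gel mass."""
    return 100.0 * (w_swollen - w_dry) / w_dry

# Illustrative masses only:
print(loading_efficiency(20.0, 4.0))         # 0.8
print(round(gel_fraction_percent(0.92, 1.0)))  # 92
print(degree_of_swelling_percent(3.5, 0.5))  # 600.0
```

A higher radiation dose raises the cross-link density, which shows up in these measures as a larger gel fraction and a smaller degree of swelling.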
Mucoadhesion measurement of hydrogels {#Sec28}
-------------------------------------

The mucoadhesion strength of the hydrogels was measured to evaluate their capability to adhere to the mucus membrane. Nasal mucosal tissue was excised from the noses of freshly slaughtered sheep and stored on ice during transport to the laboratory. The nasal mucosa was immediately separated from the underlying cartilage by blunt stripping with a pair of tweezers (the time between tissue excision and mucosa separation should not exceed 30 min)^[@CR70]^. An Instron machine with a 5 kN load cell and a 1.2 cm diameter metallic cylindrical probe was used to evaluate the mucoadhesion properties. Mucosa was attached to the lower plate and to the upper end of the instrument probe using cyanoacrylate glue, and 200 mg of hydrogel sample was then placed on the lower mucosa. The upper probe holding the nasal mucosa was lowered onto the surface of the hydrogel until contact occurred with an initial force of 0.5 N. After a contact period of 120 s, the upper probe was withdrawn and moved vertically upward at a constant speed of 5 mm s^−1^ until failure occurred between the surfaces. The mucoadhesion studies were performed at room temperature and 50% relative humidity. The data were collected, and the calculations were carried out using Instron® Series IX automated material tester software, version 8.32.00. Each measurement was repeated at least 5 times^[@CR70]^. The effect of contact times of 30, 120, 300 and 600 s on the mucoadhesion properties of the various hydrogels was evaluated, with the Instron set to a probe speed of 5 mm s^−1^ and a contact force of 0.5 N. A sheep nasal mucosal membrane was used as the biological membrane^[@CR70]^.

*In vitro* drug release from hydrogels {#Sec29}
--------------------------------------

*In vitro* release studies were performed at 37 °C using the Franz diffusion cell.
Briefly, 0.5 g of medicated formulation containing 5 mg of DH was packed into each of three cell donor chambers, ensuring that no air bubbles formed between the formulation and the donor surface of the cellophane membrane. The receptor phase was filled with PBS (pH 5.5) and continuously stirred (100 RPM) during the experiment to ensure homogeneity. Samples were taken at different time intervals (0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5 and 5 h) and analyzed spectrophotometrically at 230 nm. The cumulative percentage of drug released was determined and plotted against time. All *in vitro* drug release studies were performed in triplicate^[@CR70],[@CR71]^.

*Ex-vivo* drug release from hydrogels {#Sec30}
-------------------------------------

*Ex-vivo* experiments were performed using excised sheep mucosal tissues. The nasal mucosa was prepared as described in the mucoadhesion measurement section. Samples were inserted into the Franz diffusion chambers with the apical side of the tissue facing the donor compartment. Franz diffusion cells (final volume 12 mL) were used to measure DH release from the hydrogel formulations. For this purpose, 5 mg of hydrogel containing 0.5 mg of DH was placed on the upper side of the nasal mucosa. The donor half-cell was then placed on top of the receptor half-cell and clamped. The donor and receiver compartments containing PBS (pH 5.5) were kept in intimate contact, and the temperature was maintained at 37 °C^[@CR72]^.

Histopathological effects of hydrogels {#Sec31}
--------------------------------------

The sheep nasal mucosa was prepared as described above. The mucosal tissues were incubated in the various hydrogels for 5 h at 25 °C and then fixed in 10% formalin solution for 48 h. Next, the tissues were embedded in paraffin wax, sectioned (7 µm), placed on glass slides and stained with hematoxylin and eosin (H&E).
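When aliquots withdrawn from the Franz-cell receptor are replaced with fresh buffer, the cumulative amount released must be corrected for drug removed in earlier samples. A minimal sketch of that correction, assuming the 12 mL receptor described above and a hypothetical 2 mL aliquot volume:

```python
def cumulative_release_mg(sample_conc_mg_per_ml, receptor_volume_ml=12.0,
                          aliquot_ml=2.0):
    """Cumulative drug released (mg) at each sampling time, correcting for
    drug withdrawn in earlier aliquots and replaced with fresh buffer."""
    released, removed_so_far = [], 0.0
    for c in sample_conc_mg_per_ml:
        # drug currently in the receptor + drug already carried away
        released.append(c * receptor_volume_ml + removed_so_far)
        removed_so_far += c * aliquot_ml
    return released

# Hypothetical receptor concentrations (mg/mL) at successive samplings:
print(cumulative_release_mg([0.01, 0.02, 0.03]))
```

Without this correction the later points systematically underestimate release, since each sampling dilutes the receptor medium.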
The sections were evaluated using a light microscope connected to a digital camera to detect any changes. Mucosal tissues incubated in isotonic PBS (5 h) served as the control for comparison^[@CR70]^.

Preparation of liposomes {#Sec32}
------------------------

Liposomes were synthesized using the reverse-phase evaporation technique. For this purpose, DPPC:CHOL (1.6:1 molar ratio) was dissolved in chloroform, and the organic solvent was then removed using a rotary evaporator to form a thin film^[@CR73]^. Next, 5 mg of DH was dissolved in PBS (pH 5.5), and the resulting solution was added to the thin film^[@CR74]^. LDH was also prepared by a method described by Turner *et al*.^[@CR63]^. Briefly, 150 mg of DPPC and 50 mg of CHOL (1.6:1 molar ratio) were dissolved in chloroform in a round flask. Next, 5 mg of DH was dissolved in PBS and injected into the lipid solution. The round flask was then capped with a glass stopper and sonicated for 5 min. The solvent was evaporated using a rotary evaporator at 45 °C under vacuum, and the preparation was then vigorously vortexed for 5 min. The preparation was evaporated again to ensure that the organic solvent was completely removed. Cycles of 10-min drying and 10-min vortexing were repeated three times^[@CR64]^.

Characterization of liposomes {#Sec33}
-----------------------------

The mean particle size and the morphology of the liposomes were determined using laser particle size analysis and SEM imaging, respectively^[@CR75]^. The drug entrapment efficiency was evaluated using a dialysis method. Briefly, the liposome suspension was transferred into a dialysis tube membrane (donor compartment). The dialysis bag was immersed in 200 mL of PBS (pH 5.5) and stirred (120 RPM, 4 h). The drug concentration in the receptor compartment was then determined by measuring the absorbance at 230 nm^[@CR71],[@CR76]^.
The drug entrapment efficiency (EE) was then calculated using formula ([4](#Equd){ref-type=""}):

$$EE\,( \% )=\frac{initial\,drug\,concentration-drug\,concentration\,in\,receptor\,compartment}{initial\,drug\,concentration}\times 100$$

Preparation of liposomal hydrogel {#Sec34}
---------------------------------

The liposomes were incorporated into the TCH by slow mechanical mixing of the liposomes and hydrogel with a spatula^[@CR77]^.

*In vitro* drug release from LDH incorporated into TCH {#Sec35}
------------------------------------------------------

The *in vitro* drug release study was performed using a cellophane dialysis membrane (cut-off 12--14 kDa). The membrane was hydrated with PBS (pH 5.5), the receptor medium, for 12 h. Next, 0.5 g of the liposomal hydrogel containing 0.5 mg of DH was suspended in 12 mL of PBS, transferred into the dialysis bag as the donor medium, and stirred (37 °C, 120 RPM). Two-milliliter aliquots were taken at fixed time intervals (0, 60, 120, 180, 240, 300 and 360 min) and instantly replaced with an equal volume of fresh buffer. All samples were analyzed for DH content by spectrophotometry at 230 nm. The experiment was carried out in triplicate^[@CR78]^.

Stability study {#Sec36}
---------------

Among the different hydrogels, TCH provided the most promising results in terms of mucoadhesion properties and histopathological effects; therefore, only TCH was considered in the subsequent experiments. LDH and LDH incorporated into TCH were stored at 4 °C and 20 °C for 28 days.
The mean size of the particles in the formulations was examined using SEM at different time intervals (7, 14, 21, and 28 days) to estimate the effect of the different storage temperatures on the physical stability of the liposomes^[@CR74]^. Chromatographic system and conditions {#Sec37} ------------------------------------- The analytical detection of DH was carried out using a Waters Tandem Quadrupole Mass Spectrometer (TQD^TM^) with a cooled autosampler and column oven, equipped with an electrospray ionization (ESI) interface. The ESI source was set in negative ionization mode, with the settings shown in Table [4](#Tab4){ref-type="table"}. An ACQUITY UPLC BEH C18 column (2.1 mm × 50 mm, 1.7 μm) was used for the drug separation with a mobile phase of 5 mM ammonium formate and 1% acetonitrile in water (pH 3) as solvent A and 0.1% formic acid in acetonitrile as solvent B, with gradient elution at a flow rate of 0.4 mL/min. The gradient began with 80% eluent A and changed linearly to 20% A within 3.5 min at 40 °C. Data acquisition was controlled with MassLynx V4.1 software, with automated data processing in QuanLynx (Waters Corp., Milford, USA)^[@CR79]^. Table 4: Settings for MS/MS detection of DH (ESI source and analyzer) — capillary voltage, 1 kV; cone voltage, 35 V; extractor, 1 V; radio frequency lens, 0.2 V; source temperature, 150 °C; desolvation temperature, 400 °C; cone gas flow, 50 L/h; desolvation gas flow, 800 L/h; collision energy, 27 eV; collision gas flow, 0.25 mL/min. Preparation of DH standards and quality control samples {#Sec38} ------------------------------------------------------- Stock standard solutions of DH and of the IS (clopidogrel) were prepared in methanol (0.5 mg/mL) and stored in 4-mL amber glass vials at −20 °C. Also, various working standard solutions of DH (0.01--100 µg/mL) and of the IS (8 µg/mL) were prepared in methanol and stored frozen in amber vials at −20 °C. 
In addition, plasma calibration standards were prepared in replicate (n = 6) at drug concentrations of 1, 5, 10, 20, and 50 ng/mL. Next, 40 µL of the DH working solutions and 40 µL of the IS were diluted with 200 µL of blank rabbit plasma. Low-, medium- and high-concentration quality control (QC) samples at drug concentrations of 1, 10, and 50 ng/mL with 100 ng/mL of IS were prepared as well^[@CR79]^. Pharmacokinetics study {#Sec39} ---------------------- Seven New Zealand white rabbits with a mean weight of 3 ± 0.5 kg, obtained from the KSU Laboratory Animal Facility, were used in this study in two groups (n = 3 and n = 4). All the methods and protocols of the animal studies were in accordance with the relevant institutional guidelines and regulations and were approved by the Animal Experimentation Ethics Committee, King Saud University (Riyadh, Saudi Arabia). The animals were housed and fasted for 12 h before the study and had free access to water during the experiment. A cannula was inserted into the marginal ear vein for blood sampling and flushed with heparin-normal saline solution. The first group received LDH incorporated into TCH through the nasal route, and the second group received an oral tablet of commercial DH. The LDH-in-TCH formulation was administered with a wide-orifice nasal dropper: 5 g of liposomal hydrogel containing 5 mg of DH was instilled into the nostrils of the rabbits while they were in the supine position. The rabbit was kept in this position for 1 min after administration. A total dose of 5 mg of DH was thus given to the rabbits; for the oral group, a 5 mg tablet was dispersed in acacia and administered through the oral route by dropper^[@CR80]^. One-milliliter blood samples were collected from the cannulated marginal vein into heparinized vacuum glass tubes before dose administration (time point zero, reference level) and at time intervals of 1, 2, 4, and 6 h after administration. 
The cannula was flushed with heparin in normal saline solution after taking each sample. The plasma was separated by centrifugation at 1000 RPM for 10 min and stored at −20 °C until the time of analysis^[@CR80]^. To precipitate the proteins of the collected plasma samples, 200 µL of aliquoted samples (or calibration standard or QC sample) and 40 µL of IS working solutions were transferred into 1.5-mL Eppendorf tubes, and the mixtures were vortexed for 10 s. Next, 1 mL of acetonitrile was added, and the mixture was vortexed for 1 min, followed by centrifugation at 20,000 RPM for 15 min at 4 °C. The supernatant was transferred into a 1.5-mL Eppendorf tube and evaporated under a gentle stream of nitrogen. The residue was reconstituted in 100 µL of mobile phase, vortexed for 1 min, centrifuged at 4000 RPM for 4 min, and transferred into a plastic autosampler vial with pre-slit septum (Waters, USA). Two microliters of the resultant were then injected into the UPLC MS/MS system^[@CR79]^. To evaluate the brain concentration of DH, the rabbits' brains were removed, weighed, and homogenized to liquefy the brain tissue. Next, 0.2 µg of the resultant was subjected to the determination of the DH brain concentration with the UPLC MS/MS system, following the above-mentioned protocol^[@CR81],[@CR82]^. 
Data and statistical analysis {#Sec40} ----------------------------- The *in vitro* results were shown as the mean ± standard deviation (SD) of triplicate (n = 3), while the *in vivo* results were expressed as mean ± SD; the standard curve (in triplicate) was calculated by unweighted linear regression using formula [5](#Eque){ref-type=""}:$$\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$Y=a+bX$$\end{document}$$where Y is the ratio of the area under peak (AUP) of the drug to that of the IS, a is the intercept, b is the slope, and X is the DH concentration. The pharmacokinetic parameters were calculated using model-independent methods, and the terminal elimination rate constant (λ~n~) was estimated using linear regression analysis of the terminal portion of the log-linear blood concentration--time profile of DH. Also, the terminal elimination half-life (t~1/2~) was calculated from the terminal elimination rate constant using formula [6](#Equf){ref-type=""}^[@CR79]^:$$\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$${t}_{1/2}=\frac{0.693}{{\lambda }_{n}}$$\end{document}$$ The C~max~ and T~max~ were obtained directly from the individual blood levels. The AUCs (µg mL^−1^ h) were measured by the linear trapezoidal rule and extrapolated to time infinity by the addition of C~Last~/λ~n~, where C~Last~ is the concentration of the last measured plasma sample. 
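The model-independent calculations just described — formula (5), formula (6), and the trapezoidal AUC with extrapolation to infinity — can be sketched as follows. The concentration-time data below are hypothetical, included purely to show the arithmetic; they are not results of this study:

```python
import math

# Hedged sketch of the model-independent PK calculations; the
# concentration-time data are hypothetical, not the study's results.
times = [1.0, 2.0, 4.0, 6.0]          # h (sampling times as in the protocol)
conc  = [40.0, 30.0, 15.0, 7.5]       # ng/mL (illustrative)

def linreg(xs, ys):
    """Unweighted least squares, Y = a + b*X (formula 5)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Terminal elimination rate constant from the log-linear tail (last 3 points)
_, slope = linreg(times[-3:], [math.log(c) for c in conc[-3:]])
lam_n = -slope
t_half = 0.693 / lam_n                 # formula (6)

c_max = max(conc)                      # read directly off the profile
t_max = times[conc.index(c_max)]

# Linear trapezoidal AUC, extrapolated to infinity with C_Last / lam_n
auc = sum((t2 - t1) * (c1 + c2) / 2
          for t1, t2, c1, c2 in zip(times, times[1:], conc, conc[1:]))
auc_inf = auc + conc[-1] / lam_n

print(round(t_half, 2), round(auc_inf, 1))   # → 2.0 124.1
```

With these toy data the concentration halves every 2 h, so the fitted t~1/2~ comes out at 2 h, as expected.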
The apparent body clearance (Cl/F) was measured using formula [7](#Equg){ref-type=""}^[@CR79]^:$$\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\frac{Cl}{F}=\frac{Dose}{AUC}$$\end{document}$$ The concentration differences at each day were examined using Student's t-test, and one-way analysis of variance (ANOVA) was applied to evaluate the reproducibility of the assay using IBM SPSS 20 Statistics. The level of confidence was 95%^[@CR79]^. Supplementary information ========================= {#Sec41} **Supplementary information** accompanies this paper at 10.1038/s41598-019-46032-y. **Publisher's note:** Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. The authors are thankful to colleagues of the Department of Pharmaceutical Science, College of Pharmacy at King Saud University for their invaluable help in performing the study. S.A.H. performed the experimental works, S.E.A. analyzed the data and wrote the main manuscript text; all the work was supervised by M.A.R., M.M.E.K. and I.A.A., and all authors reviewed the manuscript. The authors declare no competing interests.
//
//  XCFPopEvents.h
//  XCFApp
//
//  Created by callmejoejoe on 16/4/4.
//  Copyright © 2016 Joey. All rights reserved.
//

#import <Foundation/Foundation.h>

@class XCFPopEvent;

@interface XCFPopEvents : NSObject

/** Number of navigation items */
@property (nonatomic, assign) NSInteger count;

/** Navigation data */
@property (nonatomic, strong) NSArray<XCFPopEvent *> *events;

@end
--- abstract: 'The Relativistic Random Phase Approximation (RRPA) is derived from the Time-dependent Relativistic Mean Field (TD RMF) theory in the limit of small amplitude oscillations. In the [*no-sea*]{} approximation of the RMF theory, the RRPA configuration space includes not only the usual particle-hole $ph$-states, but also $\alpha h$-configurations, i.e. pairs formed from occupied states in the Fermi sea and empty negative-energy states in the Dirac sea. The contribution of the negative energy states to the RRPA matrices is examined in a schematic model, and the large effect of Dirac sea states on isoscalar strength distributions is illustrated for the giant monopole resonance in $^{116}$Sn. It is shown that, because the matrix elements of the time-like component of the vector meson fields which couple the $\alpha h$-configurations with the $ph$-configurations are strongly reduced with respect to the corresponding matrix elements of the isoscalar scalar meson field, the inclusion of states with unperturbed energies more than 1.2 GeV below the Fermi energy has a pronounced effect on giant resonances with excitation energies in the MeV region. The influence of nuclear magnetism, i.e. the effect of the spatial components of the vector fields, is examined, and the difference between the non-relativistic and relativistic RPA predictions for the nuclear matter compression modulus is explained.' address: | 1. Physics Department, TU Munich, D-85748 Garching, Germany\ 2. China Institute of Atomic Energy, Beijing, P.R. of China\ 3. Institut de Physique Nucléaire, IN2P3-CNRS, F-91406 Orsay Cedex, France\ author: - | P. Ring$^{1}$, Zhong-yu Ma$^{2}$[^1], Nguyen Van Giai$^3$, D. Vretenar$^{1}$[^2],\ A. 
Wandelt$^{1}$, and Li-gang Cao$^{2}$ date: 'January 25, 2001' title: 'The time-dependent relativistic mean-field theory and the random phase approximation' --- Introduction ============ Models based on the Relativistic Mean Field (RMF) approximation provide a microscopic self-consistent description of nuclear structure phenomena (for reviews, see Refs. [@SW.86; @Rei.89; @Ser.92; @Rin.96]). In this framework classical equations of motion are derived self-consistently from a fully relativistic Lagrangian. Vacuum polarization effects, as well as Fock exchange terms, are usually not taken into account explicitly. This framework, however, is based on an effective theory and the parameters of effective interactions are determined from a set of experimental data. In adjusting the parameters of the effective Lagrangian, a large part of vacuum polarization effects and effects of exchange terms are already taken into account. In fact, both contributions have been treated explicitly in nuclear matter and in some finite spherical nuclei [@HS.83; @BMG.87; @BFG.93; @HS.84; @Was.91], and it has been shown that these contributions are not small. However, by renormalizing the parameters of the effective Lagrangian, virtually identical results for ground-state properties have been obtained without the inclusion of vacuum polarization and exchange terms. Essential for a quantitative description of properties of complex nuclei are the non-linear terms in the meson sector [@BB.77], which in a simple way include an effective density dependence of the meson coupling parameters. The relativistic mean-field models have been mostly applied in the description of ground-state properties of nuclei all over the periodic table. In several cases this framework has also been very successfully applied to excited states. For example, the cranked relativistic mean-field model [@KR.89] describes a large variety of phenomena in rotational bands of superdeformed nuclei [@AKR.96; @ALR.98; @AKR.99]. 
Another example is the Time-dependent RMF model which has been used to describe the dynamics of giant resonances in nuclei [@VBR.95; @PVR.96; @VLB.97]. From the time evolution of multipole moments of the single-particle density, the excitation energies of giant resonances can be determined. Excellent agreement with experimental values of isoscalar and isovector giant monopole, giant quadrupole and the isovector giant dipole resonances has been obtained. The disadvantage of the time-dependent approach is that in some cases the large computational effort prevents an accurate description of excited states as, for instance, the dynamics of low-lying collective excitations. The Relativistic Random Phase Approximation (RRPA) represents the small amplitude limit of the time-dependent relativistic mean-field theory. Some of the earliest applications of the RRPA [@Fur.85; @HG.89; @BC.88; @DF.90; @HP.90] to finite nuclei include the description of low-lying negative parity excitations in $^{16}$O [@Fur.85], and studies of isoscalar giant resonances in light and medium nuclei [@HG.89]. These RRPA calculations, however, were based on the simplest, linear $\sigma - \omega$ relativistic mean field model. Only recently have non-linear meson self-interaction terms been included in RRPA calculations [@MGT.97; @MTG.97]. However, in these calculations the RRPA configuration space included only ordinary particle-hole pairs. This seems a reasonable approximation, since the states formed from occupied positive-energy states in the Fermi sea and empty negative energy states in the Dirac sea have unperturbed energies more than 1.2 GeV below the Fermi level. It turned out, however, that excitation energies of isoscalar resonances calculated in this way were very different from those obtained in the TD RMF approach with the same effective interactions [@GMT.99; @VRL.99]. 
Since it is well known that in the non-relativistic framework the RPA corresponds to the small amplitude limit of time-dependent Hartree-Fock (TDHF) (see, e.g., Ref. [@RS.80]), these discrepancies remained an open puzzle for a couple of years. Only recently it has been shown [@VWR.00; @MGW.00] that, in order to reproduce results of time-dependent relativistic mean-field calculations for giant resonances, the RRPA configuration space must contain negative-energy states from the Dirac sea. In principle, if the Dirac sea were fully occupied, these configurations would be forbidden by the Pauli principle. However, in the [*no-sea*]{} approximation the negative-energy states do not contribute to the nucleon densities, i.e. these states are not occupied. It is, thus, possible to form $\alpha h$ pairs ($\alpha$ empty state in the Dirac sea, $h$ occupied state in the Fermi sea) and include them in the RRPA configuration space. Although formally possible, and also necessary in order to preserve symmetries, this procedure raises some serious conceptual problems because the $\alpha h$ configurations have negative unperturbed excitation energies. This means that the energy surface is no longer positive definite. The static solutions of the RMF equations correspond to saddle points on the energy surface, rather than to minima as in non-relativistic Hartree-Fock calculations. It is not, therefore, a priori clear that small amplitude oscillations around these stationary solutions will be stable. Furthermore, it is not obvious why configurations with unperturbed energies more than 1.2 GeV below the Fermi level have such a pronounced effect on the excitation energies of giant resonances in the MeV region. In non-relativistic calculations, for instance, it has been noted [@RS.74] that $ph$-configurations with very large excitation energies affect the position of the spurious mode, but they have no effect on the excitation energies of giant resonances. 
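The magnitude of this effect can be made plausible with a deliberately over-simplified two-level illustration (this is not the Brown-Bolsterli model introduced in Sec. IV, and all numbers are purely illustrative): coupling a $ph$ configuration at roughly $+20$ MeV to a single $\alpha h$ configuration at roughly $-1200$ MeV through a matrix element $v$ shifts the low-lying eigenvalue by approximately $v^2/(\varepsilon_{ph}-\varepsilon_{\alpha h})$, so matrix elements of several tens of MeV, as arise for the scalar field, are enough to move a resonance by MeV amounts. In the full RRPA the non-Hermitian matrix structure also matters, so the sign and size of the actual shift differ; only the order-of-magnitude argument is intended here.

```python
import math

# Toy illustration (not the full RRPA): a ph configuration at +20 MeV
# coupled to an alpha-h configuration at about -1200 MeV.  All numbers
# are illustrative; only the order of magnitude of the shift matters.
e_ph, e_ah = 20.0, -1200.0

def shifted_ph_energy(v):
    """Upper eigenvalue of the symmetric 2x2 matrix [[e_ph, v], [v, e_ah]]."""
    tr, det = e_ph + e_ah, e_ph * e_ah - v * v
    return (tr + math.sqrt(tr * tr - 4.0 * det)) / 2.0

for v in (10.0, 50.0, 100.0):
    shift = shifted_ph_energy(v) - e_ph
    # Level repulsion displaces the ph mode by roughly v**2/(e_ph - e_ah)
    print(v, shift, v * v / (e_ph - e_ah))
```

For $v = 50$ MeV the exact and perturbative shifts agree to within a few keV, both of order 2 MeV, despite the 1.2 GeV energy denominator.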
The purpose of the present investigation is to clarify some of these problems and to understand in a better way the relation between the RRPA and the TD RMF in the [*no-sea*]{} approximation. The paper is organized as follows: the time-dependent RMF model is analyzed in Sec. II. In Sec. III the RRPA is derived from the TD RMF equations in the limit of small amplitude motion. In particular, we discuss the importance of $\alpha h$-pairs in the RRPA configuration space. In Sec. IV we introduce a relativistic extension of the Brown-Bolsterli model and solve it in the linear response approximation. It is shown how large matrix elements, which couple the $\alpha h$-sector with the $ph$-configurations, can arise in the RRPA matrices. The large effect of Dirac sea states on isoscalar strength distributions is illustrated in Sec. V. The results are summarized in Sec. VI. The time-dependent relativistic mean-field model ================================================ In quantum hadrodynamics the nucleus is described as a system of Dirac nucleons which interact through the exchange of virtual mesons and photons. The model is based on the one-boson exchange description of the nucleon-nucleon interaction. The Lagrangian density reads [@SW.86] $$\begin{aligned} {\cal L} &=&\bar{\psi}\left( i\gamma \cdot \partial -m\right) \psi ~+~\frac{1}{2}\partial \sigma \cdot \partial \sigma -\frac{1}{2}m_{\sigma }^{2}\sigma ^{2} \nonumber \\ &&-~\frac{1}{4}\Omega _{\mu \nu }\Omega ^{\mu \nu }+\frac{1}{2}m_{\omega }^{2}\omega ^{2}~-~\frac{1}{4}{\vec{{\rm R}}}_{\mu \nu }{\vec{{\rm R}}}^{\mu \nu }+\frac{1}{2}m_{\rho }^{2}\vec{\rho}^{\,2}~-~\frac{1}{4}{\rm F}_{\mu \nu }{\rm F}^{\mu \nu } \nonumber \\ &&-~\bar{\psi}[g_{\sigma }\sigma ~+~g_{\omega }\gamma \cdot \omega ~+~g_{\rho }\gamma \cdot \vec{\rho}\vec{\tau} ~+~e\gamma \cdot A \frac{(1-\tau_{3})}{2}]\psi~. 
\label{E2.1}\end{aligned}$$ Vectors in isospin space are denoted by arrows, and bold-faced symbols will indicate vectors in ordinary three-dimensional space; a dot denotes the scalar product in Minkowski space ($\gamma \cdot \omega =\gamma^\mu\omega_\mu =\gamma _{0}\omega _{0}-{{\mbox{\boldmath $\gamma$}}}{{\mbox{\boldmath $\omega$}}}$). The Dirac spinor $\psi$ denotes the nucleon with mass $m$. $m_\sigma$, $m_\omega$, and $m_\rho$ are the masses of the $\sigma$-meson, the $\omega$-meson, and the $\rho$-meson, and $g_\sigma$, $g_\omega$, and $g_\rho$ are the corresponding coupling constants for the mesons to the nucleon, and $e^{2}/\hbar c =1/137.036$. In Eq.(\[E2.1\]), $\Omega ^{\mu \nu }$, $\vec{R}^{\mu \nu }$, and $F^{\mu \nu }$ denote the field tensors of the vector fields $\omega $, $\rho $, and of the photon, respectively: $$\begin{aligned} \Omega ^{\mu \nu } &=&\partial ^{\mu }\omega ^{\nu }-\partial ^{\nu }\omega ^{\mu }~, \nonumber \\ \vec{R}^{\mu \nu } &=&\partial ^{\mu }\vec{\rho}^{\,\nu }-\partial ^{\nu }\vec{\rho}^{\,\mu }~, \nonumber \\ F^{\mu \nu } &=&\partial ^{\mu }A^{\nu }-\partial ^{\nu }A^{\mu }~. \label{E2.2}\end{aligned}$$ If the bare masses $m$, $m_{\omega }$, and $m_{\rho }$ are used for the nucleons and the $\omega $ and $\rho $ mesons, there are only four free model parameters: $m_{\sigma }$, $g_{\sigma }$, $g_{\omega }$ and $g_{\rho }$. Their values can be adjusted to experimental data of just a few spherical nuclei. This simple model, however, is not flexible enough for a quantitative description of properties of complex nuclear systems. An effective density dependence has been introduced [@BB.77] by replacing the quadratic $\sigma $-potential $\frac{1}{2}m^2_{\sigma }\sigma ^{2}$ with a quartic potential $U(\sigma )$ $$U(\sigma )~=~\frac{1}{2}m_{\sigma }^{2}\sigma ^{2}+\frac{1}{3}g_{2}\sigma ^{3}+\frac{1}{4}g_{3}\sigma ^{4}~. 
\label{E2.3}$$ The potential includes the nonlinear $\sigma $ self-interaction, with two additional parameters $g_{2}$ and $g_{3}$. The corresponding Klein-Gordon equation becomes nonlinear, with a $\sigma $-dependent mass $m^2_{\sigma }(\sigma )=$ $m^2_{\sigma }+g_{2}\sigma +g_{3}\sigma ^{2}$. More details on the relativistic mean-field formalism can be found in Refs. [@SW.86; @Rei.89; @Ser.92; @Rin.96]. From the Lagrangian density the set of coupled equations of motion is derived. The Dirac equation for the nucleons $$\begin{aligned} i\partial _{t}\psi _{i} &=&\hat{h}(t)\psi _{i} \nonumber \\ &=&\left\{{{\mbox{\boldmath $\alpha$}}}\lbrack-i{{\mbox{\boldmath $\nabla$}}}- {\bf V(r},t{\bf )]+}V({\bf r},t)+ {\bf \beta }\big(m-S({\bf r},t)\big)\right\} \psi _{i}. \label{E2.4}\end{aligned}$$ If one neglects retardation effects for the meson fields, the time-dependent mean-field potentials $$\begin{aligned} S({\bf r},t) &=&-g_{\sigma }\sigma ({\bf r},t)~, \nonumber \\ V_{\mu }({\bf r},t) &=&g_{\omega }\omega _{\mu }({\bf r},t)+g_{\rho }\vec{\tau}\vec{\rho}_{\mu }({\bf r},t)+eA_{\mu }({\bf r},t)\frac{(1-\tau _{3})}{2}~, \label{E2.5}\end{aligned}$$ are calculated at each step in time from the solution of the stationary Klein-Gordon equations $$\begin{aligned} \left[ -\Delta +m_{\sigma }^{2}\right] \,\sigma ({\bf r,}t) &=&-g_{\sigma }\,\rho _{s}({\bf r,}t)-g_{2}\,\sigma ^{2}({\bf r,}t)-g_{3}\,\sigma ^{3}({\bf r,}t)~, \nonumber \\ \left[ -\Delta +m_{\omega }^{2}\right] \,\omega _{\mu }({\bf r,}t) &=&g_{\omega }\,j_{\mu }({\bf r,}t)~, \nonumber \\ \left[ -\Delta +m_{\rho }^{2}\right] \,\vec{\rho}_{\mu }({\bf r,}t) &=&g_{\rho }\,\,\vec{j}_{\mu }({\bf r},t)~, \nonumber \\ -\Delta \,A_{\mu }({\bf r,}t) &=&e\,j_{c\mu }({\bf r},t)~. \label{E2.6}\end{aligned}$$ This approximation is justified by the large masses in the meson propagators. Retardation effects can be neglected because of the short range of the corresponding meson exchange forces. 
In the mean-field approximation only the motion of the nucleons is quantized, the meson degrees of freedom are described by classical fields which are defined by the nucleon densities and currents. The single-particle spinors $\psi_i~(i=1,2,...,A)$ form the A-particle Slater determinant $|\Phi(t)\rangle$. The nucleons move independently in the classical meson fields, i.e. residual two-body correlations are not included, and the many-nucleon wave function is a Slater determinant at all times. The sources of the fields in the Klein-Gordon equations are the nucleon densities and currents calculated in the [*no-sea*]{} approximation $$\begin{aligned} \rho _{s}({\bf r},t) &=&\sum\limits_{i=1}^{A}\bar{\psi}_{i}^{{}}({\bf r},t)\psi _{i}^{{}}({\bf r},t)~, \nonumber \\ j_{\mu }({\bf r},t) &=&\sum\limits_{i=1}^{A}\bar{\psi}_{i}^{{}}({\bf r},t)\gamma _{\mu }\psi _{i}^{{}}({\bf r},t)~, \nonumber \\ \vec{j}_{\mu }({\bf r},t) &=&\sum\limits_{i=1}^{A}\bar{\psi}_{i}^{{}}({\bf r},t)\vec{\tau}\gamma _{\mu }\psi _{i}^{{}}({\bf r},t)~, \nonumber \\ j_{c\mu }({\bf r},t) &=&\sum\limits_{i=1}^{Z}\bar{\psi}_{i}^{{}}({\bf r},t)\gamma _{\mu }\psi _{i}^{{}}({\bf r},t)~. \label{E2.7}\end{aligned}$$ where the summation is over all occupied states in the Fermi sea. In the [*no-sea*]{} approximation the negative-energy states do not contribute to the densities and currents, i.e. vacuum polarization is explicitly neglected. However, as already discussed in the Introduction, this is an effective theory with the parameters of the Lagrangian determined from a set of experimental data. In adjusting the parameters of the effective Lagrangian, a large part of vacuum polarization effects is therefore already taken into account. It should be emphasized that the [*no-sea*]{} approximation is essential for practical applications of the relativistic mean-field model. The stationary solutions of the relativistic mean-field equations describe the ground-state of a nucleus. 
They correspond to stationary points on the relativistic energy surface. The Dirac sea, i.e. the negative energy eigenvectors of the Dirac hamiltonian, is different for different nuclei. This means that it depends on the specific solution of the set of non-linear RMF equations. The Dirac spinors which describe the ground-state of a finite nucleus (positive energy states) can be expanded, for instance, in terms of vacuum solutions, which form a complete set of plane wave functions in spinor space. This set is only complete, however, if in addition to the positive energy states, it also contains the states with negative energies, in this case the Dirac sea of the vacuum. Positive energy solutions of the RMF equations in a finite nucleus automatically contain vacuum components with negative energy. In the same way, solutions which describe excited states, as for instance states with different angular momenta which are solutions of the cranked RMF equations, contain negative energy components which correspond to the ground-state solution. This is also true for the solutions of the time-dependent problem. Although for the stationary solutions the negative-energy states do not contribute to the densities in the [*no-sea*]{} approximation, their contribution is implicitly included in the time-dependent calculation. The coupled system of RMF equations describes the time-evolution of A nucleons in the effective mean-field potential. Starting from the self-consistent solution which describes the ground-state of the nucleus, initial conditions can be defined which correspond, for instance, to excitations of giant resonances in experiments with electromagnetic or hadron probes. For example, the one-body proton and neutron densities can be initially deformed and/or given some initial velocities. The resulting mean-field dynamics can be described by the time-evolution of the collective variables. 
In coordinate space for example, these will be the multipole moments of the density distributions. At each time $t$, the Dirac spinors $\psi _{i}(t)$ can be expanded in terms of the complete set of solutions of the stationary Dirac equation $\psi _{k}^{(0)}$ $$\psi _{i}({\bf r},t)=\sum_{k}c_{k}(t)\psi _{k}^{(0)}({\bf r})\text{e}^{-i\varepsilon _{k}t}+\sum_{\alpha}c_{\alpha }(t)\psi _{\alpha }^{(0)}({\bf r})\text{e}^{-i\varepsilon _{\alpha }t}~, \label{E2.9}$$ where the index $k$ runs over all positive energy eigen-solutions $\varepsilon _{k}>0$ (hole states $h$ in the Fermi sea, and particle states $p$ above the Fermi sea), and the index $\alpha $ denotes eigen-solutions with negative energy $\varepsilon _{\alpha }<0$. We follow the time evolution of $A$ Dirac spinors which at time $t=0$ form the Fermi sea of the stationary solution. This means that at each time we have a [*local*]{} Fermi sea of $A$ time-dependent spinors which, of course, contain components of negative-energy solutions of the stationary Dirac equation. One could also start with the infinitely many negative energy solutions $\psi _{\alpha}({\bf r},t=0)$ ($\varepsilon_\alpha <0$), and propagate them in time with the same hamiltonian $\hat{h}(t)$. Since the time-evolution operator is unitary [@RS.80] $$i\partial _{t}\left\langle \psi _{i}|\psi _{a}\right\rangle =\left\langle \psi _{i}|h^{\dagger }-h|\psi _{a}\right\rangle =0, \label{E2.9a}$$ the states which form the [*local*]{} Dirac sea are orthogonal to the [*local*]{} Fermi sea at each time. This is the meaning of the [*no-sea*]{} approximation in the time-dependent problem. For small-amplitude oscillations around the stationary solution, the coefficients $c_{\alpha }(t)$ of the negative energy components in (\[E2.9\]) are, of course, small. We will first consider linear relativistic mean-field models ($g_{2}=g_{3}=0$ in (\[E2.3\])). In the instantaneous approximation, i.e. 
neglecting the time derivatives $\partial_t^2$ in the Klein-Gordon equations, the solutions for the mean-fields are calculated from $$\begin{aligned} \sigma ({\bf r},t) &=&g_{\sigma }\int D_{\sigma }({\bf r,r}^{\prime })\rho _{s}({\bf r}^{\prime },t)d^{3}r~, \nonumber \\ \omega _{\mu }({\bf r},t) &=&g_{\omega }\int D_{\omega }({\bf r,r}^{\prime })j_{\mu }({\bf r}^{\prime },t)d^{3}r~, \nonumber \\ \vec{\rho}_{\mu }({\bf r},t) &=&g_{\rho }\int D_{\rho }({\bf r,r}^{\prime })\vec{j}_{\mu }({\bf r}^{\prime },t)d^{3}r~, \nonumber \\ A_{\mu }({\bf r},t) &=&e\int D_{photon}({\bf r,r}^{\prime })j_{c\mu }({\bf r}^{\prime },t)d^{3}r~. \label{E2.10}\end{aligned}$$ The propagators have the Yukawa form $$D_{\phi }({\bf r},{\bf r}^{\prime })\,=\pm \frac{1}{4\pi }\frac{\text{e}^{-m_{\phi }|{\bf r}-{\bf r}^{\prime }|}}{|{\bf r}-{\bf r}^{\prime }|}~, \label{E2.11}$$ where $\phi $ denotes the mesons $\sigma $, $\omega $, $\rho $, and the photon. The plus (minus) sign is for vector (scalar) fields. In the non-linear case an analytic solution of the Klein-Gordon equation is, of course, no longer possible. The corresponding meson field is a non-linear functional of the density and currents. The relativistic single-particle density matrix reads $$\hat{\rho}({\bf r},{\bf r}^{\prime },t)=\sum\limits_{i=1}^{A}|\psi _{i}^{{}}({\bf r},t)\rangle \langle \psi _{i}^{{}}({\bf r}^{\prime },t)|~. 
\label{E2.12}$$ If the Dirac spinor is written in terms of large and small components $$|\psi _{i}^{{}}({\bf r},t)\rangle =\left( \begin{array}{c} \,\,\,\,f_{i}({\bf r},t) \\ ig_{i}({\bf r},t) \end{array} \right), \label{E2.13}$$ the density matrix takes the form $$\rho ({\bf r},{\bf r}^{\prime },t)=\left( \begin{array}{cc} \,\,\,\sum\limits_{i=1}^{A}f_{i}^{{}}({\bf r},t)f_{i}^{\dagger }({\bf r}^{\prime },t) & -i\sum\limits_{i=1}^{A}f_{i}^{{}}({\bf r},t)g_{i}^{\dagger }({\bf r}^{\prime },t) \\ i\sum\limits_{i=1}^{A}g_{i}^{{}}({\bf r},t)f_{i}^{\dagger }({\bf r}^{\prime },t) & \,\,\,\,\,\,\sum\limits_{i=1}^{A}g_{i}^{{}}({\bf r},t)g_{i}^{\dagger }({\bf r}^{\prime },t) \end{array} \right) ~. \label{E2.14}$$ Further, a relativistic two-body interaction is defined $$\hat{V}=\int d^{3}r_{1}d^{3}r_{2}\hat{\psi}^{\dagger }({\bf r}_{1})\hat{\psi}^{\dagger }({\bf r}_{2})V({\bf r}_{1},{\bf r}_{2})\hat{\psi}({\bf r}_{1})\hat{\psi}({\bf r}_{2})~, \label{E2.15}$$ where $\hat{\psi}^{\dagger }$and $\hat{\psi}$ are the Dirac field creation and annihilation operators, and $$V({\bf r}_{1},{\bf r}_{2})=D_{\sigma }({\bf r}_{1},{\bf r}_{2})\,\beta ^{(1)}\beta ^{(2)}+D_{\omega }^{{}}({\bf r}_{1},{\bf r}_{2}) \left( 1-{{\mbox{\boldmath $\alpha$}}}^{(1)}{{\mbox{\boldmath $\alpha$}}}^{(2)}\right) ~. \label{E2.16}$$ In order to simplify the notation, we omit the $\rho $-meson and the photon, though they are, of course, included in actual applications of the relativistic mean-field model. Their contribution to the matrix elements of $V({\bf r}_{1},{\bf r}_{2})$ is, however, much smaller than that of the $\sigma$ and $\omega$ mesons. 
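As a small numerical aside (a sketch, not part of the model itself), the Yukawa form (\[E2.11\]) can be checked against the static Klein-Gordon equation: writing $u(r)=4\pi r D_\phi(r)=\text{e}^{-m_\phi r}$, the radial equation away from the origin reduces to $u''=m_\phi^2 u$, which a finite-difference evaluation confirms. The mass and step size below are arbitrary illustrative values:

```python
import math

# Hedged numerical check that the Yukawa propagator solves the static
# Klein-Gordon equation away from the origin: for u(r) = exp(-m*r),
# i.e. 4*pi*r*D(r), the radial equation reduces to u'' = m**2 * u.
m, h = 2.5, 1e-4   # illustrative meson mass (fm^-1) and step size (fm)

def u(r):
    return math.exp(-m * r)

for r in (0.5, 1.0, 2.0):
    u2 = (u(r + h) - 2.0 * u(r) + u(r - h)) / h**2   # finite-difference u''
    print(r, u2 / u(r))   # should stay close to m**2 = 6.25
```

The ratio $u''/u$ reproduces $m^2$ to within the $O(h^2)$ truncation error of the central-difference formula, independent of $r$.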
By introducing an arbitrary complete spinor basis (the indices $k,l,\dots $ denote both positive and negative energy states), the two-body interaction operator can be written in the form $$\hat{V}=\frac{1}{2}\sum_{kk^{\prime }ll^{\prime }}V_{klk^{\prime }l^{\prime }}\hat{\psi} _{k}^{\dagger }\hat{\psi} _{l}^{\dagger } \hat{\psi} _{l^{\prime }}^{{}}\hat{\psi} _{k^{\prime }}^{{}}~. \label{E2.17}$$ The single-particle equation of motion corresponds to the time-dependent relativistic Hartree problem $$i\partial _{t}\psi _{i}=\hat{h}(\hat{\rho})\psi _{i}~, \label{E2.18}$$ with the Dirac Hamiltonian $$\hat{h}(\hat{\rho})= {{\mbox{\boldmath $\alpha$}}}\,{\bf p} + \beta(m+\,\Sigma(\hat{\rho})), \label{E2.19}$$ and the mass operator $$\Sigma _{kl}(\hat{\rho})=\sum_{k^{\prime }l^{\prime }}V_{kl^{\prime }lk^{\prime }}\rho _{k^{\prime }l^{\prime }}~. \label{E2.20}$$ The corresponding equation of motion for the density operator reads $$i\partial _{t}\hat{\rho}=\left[ \hat{h}(\hat{\rho}),\hat{\rho}\right] ~, \label{E2.21}$$ in full analogy with the non-relativistic Hartree-Fock problem (see, e.g., Ref. [@RS.80]). In expressing the TD RMF equations (\[E2.4\]-\[E2.6\]) in terms of a relativistic two-body interaction, we have eliminated the meson degrees of freedom by using the Yukawa form (\[E2.11\]) of the meson propagators. This applies, of course, only to Lagrangians that do not contain non-linear meson self-interactions. The non-linear couplings are, however, essential for a realistic description of nuclear properties. Formally, also in this case the Klein-Gordon equations can be solved at each step in time, and the resulting meson fields are non-linear functionals of the densities and currents. The Dirac operator has still the form of Eq. (\[E2.19\]), but the mass-operator $\Sigma _{kl}(\hat{\rho})$ becomes a much more complicated functional of the single-particle density. 
The numerical solution of the full time-dependent problem with non-linear meson self-interactions does not present particular difficulties (see Refs. [@VBR.95; @PVR.96; @VLB.97]). It is much more difficult, however, to eliminate the meson degrees of freedom and to derive a relativistic two-body interaction in the general case of large amplitude motion. This has only been done in the small amplitude limit [@MGT.97]. The $\sigma $-field and the scalar density $\,\rho _{s}$ are expanded in the neighborhood of the stationary ground-state solutions $$\begin{aligned} \sigma ({\bf r,}t{\bf )} &=&{\bf \,}\sigma ^{(0)}({\bf r)+}\delta \sigma ({\bf r,}t{\bf )}~, \label{E2.22} \\ \,\rho _{s}({\bf r,}t{\bf )} &=&{\bf \,}\rho _{s}^{(0)}({\bf r)+}\delta \rho _{s}({\bf r,}t{\bf )}~.\end{aligned}$$ The corresponding Klein-Gordon equation (\[E2.6\]) for the $\sigma $-field is solved by linearization, i.e. up to terms linear in $\delta \sigma$ we obtain $$\left[ -\Delta +m_{\sigma }^{2}({\bf r})\right] \delta \,\sigma ({\bf r,}t)=-g_{\sigma }\delta \,\rho _{s}({\bf r,}t)~, \label{E2.23}$$ with $$m_{\sigma }^{2}({\bf r})=\left. \frac{\partial ^{2}U}{\partial \sigma ^{2}}\right| _{\sigma =\sigma ^{(0)}({\bf r})}~. \label{E2.24}$$ The $\sigma $-meson propagator is defined by the equation $$\left[ -\Delta +m_{\sigma }^{2}({\bf r})\right] D_{\sigma }({\bf r},{\bf r}^{\prime })=-\delta ({\bf r-r}^{\prime })~, \label{E2.25}$$ and it has been determined numerically in the RRPA calculations of Refs. [@MGT.97; @MTG.97; @GMT.99]. In the following only the small amplitude limit will be studied, and therefore we do not need to worry about the more general problem of large amplitude motion.
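The linearized Klein-Gordon equation (\[E2.23\]) and the propagator defined by Eq. (\[E2.25\]) can be illustrated with a one-dimensional finite-difference sketch; the grid, the profile $m_{\sigma }^{2}(r)$ and the source $\delta \rho _{s}$ below are illustrative choices, not realistic RMF input:

```python
import numpy as np

# Finite-difference sketch of (-Delta + m_sigma^2(r)) delta_sigma = -g delta_rho_s
# on a 1D grid with Dirichlet boundaries; all profiles are illustrative toys.
n, L = 200, 20.0
dx = L / n
x = np.linspace(dx, L, n)

m2 = 0.5 + 0.2 * np.exp(-((x - L / 2) ** 2))   # position-dependent m_sigma^2(r)
g_sigma = 1.0
rho_s = np.exp(-((x - L / 2) ** 2))            # localized density fluctuation

# -Delta as a tridiagonal second-difference matrix
lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
       + np.diag(np.ones(n - 1), -1)) / dx**2
A = -lap + np.diag(m2)

# the propagator D_sigma is minus the inverse of the operator, as in Eq. (E2.25)
D = -np.linalg.inv(A)
delta_sigma = D @ (g_sigma * rho_s)            # solves A delta_sigma = -g rho_s

# check: the solution satisfies the linearized Klein-Gordon equation
print(np.linalg.norm(A @ delta_sigma + g_sigma * rho_s))   # ~0
```

In the realistic spherical calculations the same inversion is carried out for each multipole of the radial Helmholtz operator.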
The small amplitude limit of TD RMF and the relativistic RPA ============================================================ In this section we study the response of the density matrix $\hat{\rho}(t)$ to an external one-body field $$\hat{F}(t)=\hat{F}\text{e}^{-i\omega t}+h.c.~, \label{E3.1}$$ which oscillates with a small amplitude. Assuming that in the single-particle space this field is represented by the operator $$\hat{f}(t) = \sum_{kl} \, f_{kl}(t) \; \hat{a}^{\dagger}_k \hat{a}^{}_l ,$$ the equation of motion for the density operator is $$i\partial _{t}\hat{\rho}=\left[ \hat{h}(\hat{\rho})+\hat{f}(t),\hat{\rho}\right] ~. \label{E3.2}$$ In the linear approximation the density matrix is expanded $$\hat{\rho}(t)=\hat{\rho}^{(0)}+\delta \hat{\rho}(t)~, \label{E3.3}$$ where $\hat{\rho}^{(0)}$ is the stationary ground-state density. From the definition of the density matrix (\[E2.12\]), it follows that $\hat{\rho}\left( t\right) $ is a projector at all times, i.e. $\hat{\rho}\left( t\right) ^{2}=$ $\hat{\rho}\left( t\right) $. In particular, this means that the eigenvalues of $\hat{\rho}^{(0)}$ are 0 and 1. In the non-relativistic case particle states above the Fermi level correspond to the eigenvalue 0, and hole states in the Fermi sea correspond to the eigenvalue 1. In the relativistic case, one also has to take into account states from the Dirac sea. In the [*no-sea*]{} approximation these states are not occupied, i.e. they correspond to the eigenvalue 0 of the density matrix. We will work in the basis which diagonalizes $\hat{\rho}^{(0)}$ $$\rho _{kl}^{(0)}=\delta _{kl}\rho _{k}^{(0)}=\left\{ \begin{array}{ll} 0 & \text{for unoccupied states above the Fermi level (index }p\text{)} \\ 1 & \text{for occupied states in the Fermi sea (index }h\text{)\quad } \\ 0 & \text{for unoccupied states in the Dirac sea (index }\alpha \text{)} \end{array} \right. 
\label{E3.4}$$ Since $\hat{\rho}(t)$ is a projector at all times, in linear order $$\hat{\rho}^{(0)}\delta \hat{\rho}+\delta \hat{\rho}\hat{\rho}^{(0)}=\delta \hat{\rho}~. \label{E3.5}$$ This means that the non-vanishing matrix elements of $\delta \hat{\rho}$ are: $\delta \rho _{ph}$, $\delta \rho_{hp}$, $\delta \rho _{\alpha h}$, and $\delta \rho _{h\alpha }$. These are determined by the solution of the TD RMF equation (\[E3.2\]). In the linear approximation the equation of motion reduces to $$i\partial _{t}\delta \hat{\rho}=\left[ \hat{h}^{(0)},\delta \hat{\rho}\right] +\left[ \frac{\partial \hat{h}}{\partial \rho }\delta \rho ,\hat{\rho}^{(0)}\right] +\left[ \hat{f},\hat{\rho}^{(0)}\right] ~, \label{E3.6}$$ where $$\frac{\partial \hat{h}}{\partial \rho }\delta \rho =\sum_{ph}\frac{\partial \hat{h}}{\partial \rho _{ph}}\delta \rho _{ph}+\frac{\partial \hat{h}}{\partial \rho _{hp}}\delta \rho _{hp}+\sum_{\alpha h}\frac{\partial \hat{h}}{\partial \rho _{\alpha h}}\delta \rho _{\alpha h}+\frac{\partial \hat{h}}{\partial \rho _{h\alpha }}\delta \rho _{h\alpha }~. \label{E3.7}$$ In the small amplitude limit $\delta \rho$ will, of course, also display a harmonic time dependence e$^{-i\omega t}$. 
Taking into account the fact that $\hat{h}_{kl}^{(0)}=\delta _{kl}\epsilon _{k}$ is diagonal in the stationary basis, we obtain $$\begin{aligned} (\omega -\epsilon _{p}+\epsilon _{h})\delta \rho _{ph} &=&f_{ph}+\sum_{p^{\prime }h^{\prime }}V_{ph^{\prime }hp^{\prime }}\delta \rho _{p^{\prime }h^{\prime }}+V_{pp^{\prime }hh^{\prime }}\delta \rho _{h^{\prime }p^{\prime }}+\sum_{\alpha^{\prime }h^{\prime }}V_{ph^{\prime }h\alpha^{\prime }}\delta \rho _{\alpha^{\prime }h^{\prime }}+ V_{p\alpha^{\prime }hh^{\prime }} \delta \rho_{h^{\prime }\alpha^{\prime }} \nonumber \\ (\omega -\epsilon_{\alpha }+\epsilon _{h})\delta \rho _{\alpha h} &=&f_{\alpha h}+\sum_{p^{\prime }h^{\prime }}V_{\alpha h^{\prime }hp^{\prime }}\delta \rho _{p^{\prime }h^{\prime }}+V_{\alpha p^{\prime }hh^{\prime }}\delta \rho _{h^{\prime }p^{\prime }}+\sum_{\alpha ^{\prime }h^{\prime }}V_{\alpha h^{\prime }h\alpha ^{\prime }}\delta \rho _{\alpha ^{\prime }h^{\prime }}+V_{\alpha \alpha ^{\prime }hh^{\prime }}\delta \rho _{h^{\prime }\alpha ^{\prime }} \nonumber \\ (\omega -\epsilon _{h}+\epsilon _{p})\delta \rho _{hp} &=&f_{hp}+\sum_{p^{\prime }h^{\prime }}V_{hh^{\prime }pp^{\prime }}\delta \rho _{p^{\prime }h^{\prime }}+V_{hp^{\prime }ph^{\prime }}\delta \rho _{h^{\prime }p^{\prime }}+\sum_{\alpha ^{\prime }h^{\prime }}V_{hh^{\prime }p\alpha ^{\prime }}\delta \rho _{\alpha ^{\prime }h^{\prime }}+V_{h\alpha ^{\prime }ph^{\prime }}\delta \rho _{h^{\prime }\alpha ^{\prime }} \nonumber \\ (\omega -\epsilon _{h}+\epsilon_{\alpha })\delta \rho _{h\alpha } &=&f_{h\alpha }+\sum_{p^{\prime }h^{\prime }}V_{hh^{\prime } \alpha p^{\prime}}\delta \rho _{p^{\prime }h^{\prime }}+ V_{hp^{\prime }\alpha h^{\prime}} \delta \rho _{h^{\prime }p^{\prime }}+ \sum_{\alpha ^{\prime }h^{\prime}} V_{hh^{\prime }\alpha \alpha ^{\prime }} \delta \rho _{\alpha ^{\prime}h^{\prime }}+ V_{h\alpha ^{\prime }\alpha h^{\prime }} \delta \rho_{h^{\prime }\alpha ^{\prime }} \label{E3.8}\end{aligned}$$ or, in matrix form $$\left[ \omega 
\left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right) -\left( \begin{array}{cc} A & B \\ B^{\ast } & A^{\ast } \end{array} \right) \right] \left( \begin{array}{c} X \\ Y \end{array} \right) =\left( \begin{array}{c} F \\ \bar{F} \end{array} \right) ~, \label{E3.9}$$ The RRPA matrices $A$ and $B$ read $$\begin{aligned} A &=&\left( \begin{array}{cc} (\epsilon _{p}-\epsilon _{h})\delta _{pp^{\prime }}\delta _{hh^{\prime }} & \\ & (\epsilon _{\alpha }-\epsilon _{h})\delta _{\alpha \alpha ^{\prime }}\delta _{hh^{\prime }} \end{array} \right) +\left( \begin{array}{cc} V_{ph^{\prime }hp^{\prime }} & V_{ph^{\prime }h\alpha ^{\prime }} \\ V_{\alpha h^{\prime }hp^{\prime }} & V_{\alpha h^{\prime }h\alpha ^{\prime }} \end{array} \right) \label{E3.10} \\ B &=&\left( \begin{array}{cc} V_{pp^{\prime }hh^{\prime }} & V_{p\alpha ^{\prime }hh^{\prime }} \\ V_{\alpha p^{\prime }hh^{\prime }} & V_{\alpha \alpha ^{\prime }hh^{\prime }} \end{array} \right) \label{E3.11}\end{aligned}$$ and the amplitudes $X$ and $Y$ are defined $$X=\left( \begin{array}{c} \delta \rho _{ph} \\ \delta \rho _{\alpha h} \end{array} \right) ,\quad Y=\left( \begin{array}{c} \delta \rho _{hp} \\ \delta \rho _{h\alpha } \end{array} \right) ~. \label{E3.12}$$ The vectors which represent the external field contain the matrix elements $$F=\left( \begin{array}{c} f_{ph} \\ f_{\alpha h} \end{array} \right) ,\quad \bar{F}=\left( \begin{array}{c} f_{hp} \\ f_{h\alpha } \end{array} \right) ~. \label{E3.13}$$ In conventional linear response theory (see, e.g., Ref. [@RS.80]) the polarization function $\Pi _{pqp^{\prime }q^{\prime }}(\omega )$ is defined by the response of the density matrix to an external field with a harmonic time dependence $$\delta \rho _{pq}=\sum_{p^{\prime }q^{\prime }}\Pi _{pqp^{\prime }q^{\prime }}(\omega )\,f_{p^{\prime }q^{\prime }}~. 
\label{E3.14}$$ Its spectral representation reads $$\Pi _{pqp^{\prime }q^{\prime }}(\omega )=\sum_{\mu }\frac{\langle 0|\psi _{q^{{}}}^{\dagger }\psi _{p^{{}}}^{{}}|\mu \rangle \langle \mu |\psi _{p^{\prime }}^{\dagger }\psi _{q^{\prime }}^{{}}|0\rangle }{\omega -E_{\mu }+E_{0}+i\eta }-\frac{\langle 0|\psi _{p^{\prime }}^{\dagger }\psi _{q^{\prime }}^{{}}|\mu \rangle \langle \mu |\psi _{q^{{}}}^{\dagger }\psi _{p^{{}}}^{{}}|0\rangle }{\omega +E_{\mu }-E_{0}+i\eta }~, \label{E3.15}$$ where the index $\mu $ runs over all excited states $|\mu \rangle $ with energy $E_{\mu }$. In the RPA approximation the polarization function is obtained by inverting the matrix $$\Pi (\omega )=\left[ \left( \begin{array}{cc} \omega +i\eta & 0 \\ 0 & -\omega -i\eta \end{array} \right) -\left( \begin{array}{cc} A & B \\ B^{\ast } & A^{\ast } \end{array} \right) \right] ^{-1}~. \label{E3.16}$$ $\Pi (\omega )$ is the solution of the linearized Bethe-Salpeter equation $$\Pi (\omega )=\Pi ^{0}(\omega )+\Pi ^{0}(\omega )V\,\Pi (\omega )~, \label{E3.17}$$ where the free polarization function is given by $$\Pi _{klk^{\prime }l^{\prime }}^{0}(\omega )=\frac{\rho _{l}^{(0)}-\rho _{k}^{(0)}}{\omega -\epsilon _{k}+\epsilon _{l}+i\eta }\delta _{kk^{\prime }}\delta _{ll^{\prime }}~. \label{E3.18}$$ The eigenmodes of the system are determined by the RPA equation $$\left( \begin{array}{cc} A & B \\ -B^{\ast } & -A^{\ast } \end{array} \right) \left( \begin{array}{c} X \\ Y \end{array} \right) _{\mu }=\left( \begin{array}{c} X \\ Y \end{array} \right) _{\mu }\Omega _{\mu }~. \label{E3.19}$$ In principle, this is a non-Hermitian eigenvalue problem. In the non-relativistic case, however, it can be reduced to a Hermitian problem of half dimension, if the RPA matrices are real and if $(A+B)$ is positive definite. In this case one can also show that the eigenvalues $\Omega _{\mu }^{2}$ are positive, i.e., the RPA eigenfrequencies $\Omega _{\mu }$ are real (see [@RS.80]).
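That the matrix inversion (\[E3.16\]) indeed solves the linearized Bethe-Salpeter equation (\[E3.17\]) is easily checked numerically; the sketch below uses a single $ph$ configuration with a schematic residual interaction (all numbers are illustrative):

```python
import numpy as np

# One ph pair with a schematic residual interaction v. Check that inverting
# the RPA matrix, Eq. (E3.16), solves Pi = Pi0 + Pi0 V Pi, Eq. (E3.17).
eps, v, eta = 10.0, 2.0, 1e-3        # ph energy, coupling, smearing (toy values)
omega = 12.0

I2 = np.diag([1.0, -1.0])
AB = np.array([[eps + v, v], [v, eps + v]], dtype=complex)   # [[A, B], [B*, A*]]
Pi = np.linalg.inv((omega + 1j * eta) * I2 - AB)             # Eq. (E3.16)

Pi0 = np.diag([1.0 / (omega - eps + 1j * eta),
               -1.0 / (omega + eps + 1j * eta)])             # free polarization
V = v * np.ones((2, 2))                                      # residual interaction

residual = Pi - (Pi0 + Pi0 @ V @ Pi)                          # Eq. (E3.17)
print(np.linalg.norm(residual))    # ~0
```

The identity follows from $\Pi ^{-1}=\left(\Pi ^{0}\right)^{-1}-V$, which holds configuration by configuration.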
The relativistic case is much more complicated. From Eq. (\[E3.10\]) we notice that the matrix $(A+B)$ is not positive definite. The $\alpha h$ configurations have large negative diagonal matrix elements $\epsilon _{\alpha h}=\epsilon _{\alpha}-\epsilon _{h}\le -1.2$ GeV, and the RRPA equation can no longer be reduced to a Hermitian problem of half dimension. In this case it is also not clear whether the eigenfrequencies are necessarily real, because the stability matrix $${\cal S}=\left( \begin{array}{cc} A & B \\ B^{\ast } & A^{\ast } \end{array} \right) \label{E3.20}$$ is no longer positive definite. Rather than minima, the solutions of the RMF equations are saddle points [@Providencia] in the multi-dimensional energy surface, and the Thouless theorem [@Th.61], which states that a positive definite stability matrix ${\cal S}$ leads to a stable RPA equation with real frequencies, does not apply. However, the opposite is not true: if the stability matrix is not positive definite, it does not automatically follow that the eigenvalues of the corresponding RPA matrix are not real. In fact, cases like this occur also in the non-relativistic RPA in the neighborhood of phase transitions, where the interaction $V$ is very large and attractive. The positive energies $\varepsilon_p - \varepsilon_h$ on the diagonal of the stability matrix are not large enough, as compared to the matrix elements of $V$, to guarantee positive eigenvalues of ${\cal S}$. In the relativistic case the energies on the diagonal $\varepsilon_\alpha - \varepsilon_h$ are negative. Even for small matrix elements of $V$ the stability matrix ${\cal S}$ will have negative eigenvalues. However, as long as the diagonal part dominates, i.e. as long as we are not in a neighborhood of a phase transition, the RRPA eigenfrequencies are real.
This can be easily demonstrated if, instead of the RPA amplitudes $X$ and $Y$, we define the generalized coordinates $Q$ and momenta $P$ $$Q=\frac{1}{\sqrt{2}}(X-Y^{\ast }),\;\;\;\;\;\;\;P=\frac{i}{\sqrt{2}}(X+Y^{\ast })~. \label{E3.21}$$ In the small amplitude limit the time-dependent mean field equations take the form of classical Hamiltonian equations (for details see Ref. [@RS.80], Chapt. 12) for the Hamiltonian function $${\cal H}(P,Q)=\frac{1}{2}\left( \begin{array}{cc} P^{\ast } & -P \end{array} \right) {\cal M}^{-1}\left( \begin{array}{c} P \\ -P^{\ast } \end{array} \right) +\frac{1}{2}\left( \begin{array}{cc} Q^{\ast } & Q \end{array} \right) {\cal S}\left( \begin{array}{c} Q \\ Q^{\ast } \end{array} \right), \label{E3.22}$$ with the inertia tensor $${\cal M}=\left( \begin{array}{cc} A & -B \\ -B^{\ast } & A^{\ast } \end{array} \right) ^{-1}~. \label{E3.23}$$ The large negative diagonal matrix elements are also present in the inertia tensor. If the off-diagonal matrix elements are not too large, a negative inertia and a negative curvature will again result in real frequencies. In all applications of RRPA we have found real frequencies, though in none of these cases was the stability matrix ${\cal S}$ positive definite. This also explains why the time-dependent RMF equations have stable solutions which describe oscillations with real frequencies around the static solution, although the static solution itself corresponds to a saddle point. The solution of the RPA equations in configuration space is much more complicated in the relativistic case. Firstly, because in addition to the usual $ph$-states, the configuration space includes a large number of $\alpha h$-states. A further complication arises because the full non-Hermitian RPA matrix has to be diagonalized, even in cases when the matrix elements are real. The usual method [@RS.80], which reduces the dimension of the RPA equations by half, does not apply.
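The statement above, that an indefinite stability matrix need not spoil the reality of the RRPA frequencies as long as the diagonal part dominates, can be verified with a toy example. The configuration energies below mimic one $ph$ pair ($\approx 10$ MeV) and one $\alpha h$ pair ($\approx -1200$ MeV); the coupling matrix elements are illustrative numbers of ours:

```python
import numpy as np

# Toy RRPA matrix: one ph configuration (eps_ph = 10 MeV) and one alpha-h
# configuration (eps_ah = -1200 MeV); the couplings are illustrative.
A = np.array([[10.0, 1.0], [1.0, -1200.0]])
B = np.array([[0.5, 0.3], [0.3, 0.2]])

# the stability matrix S = [[A, B], [B, A]], Eq. (E3.20), is NOT positive definite
S = np.block([[A, B], [B, A]])
print(np.min(np.linalg.eigvalsh(S)))          # clearly negative (alpha-h block)

# ... yet the non-Hermitian RPA matrix, Eq. (E3.19), still has real frequencies
M = np.block([[A, B], [-B, -A]])
freqs = np.linalg.eigvals(M)
print(np.max(np.abs(freqs.imag)))             # ~0: all eigenfrequencies real
```

Here the full non-Hermitian matrix must be diagonalized with a general eigensolver, exactly as in the realistic configuration-space calculations.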
Summarizing the results of this section, we have shown that the relativistic RPA represents the small amplitude limit of the time-dependent RMF theory. However, because the RMF theory is based on the [*no-sea*]{} approximation, the RRPA configuration space includes not only the usual $ph$-states, but also $\alpha h$-configurations, i.e. pairs formed from occupied states in the Fermi sea and empty negative-energy states in the Dirac sea. At each time $t\neq 0$ the occupied positive energy states can have non-vanishing overlap with both positive and negative energy solutions calculated at $t=0$. If the density matrix $\hat{\rho}(t)$ is represented in the basis which diagonalizes the static solution $\hat{\rho}^{(0)}$, it contains not only the usual components $\delta \hat{\rho}_{ph}$ with a particle above the Fermi level and a hole in the Fermi sea, but also components $\delta \hat{\rho}_{\alpha h}$ with a particle in the Dirac sea and a hole in the Fermi sea. One of the important advantages of using the time-dependent variational approach is that it conserves symmetries. It is well known from non-relativistic time-dependent mean field theory that symmetries are connected with zero energy solutions of the RPA, i.e. the Goldstone modes, and it is one of the advantages of RPA that it restores the symmetries broken by the mean field. This has already been realized in the early studies of symmetry conservation in RRPA, and it has been emphasized by Dawson and Furnstahl in Ref. [@DF.90] that it is essential to include the $\alpha h$ configuration space in order to bring the Goldstone modes to zero energy. However, it was not anticipated that negative energy states in the RRPA configuration space could have dramatic effects on the excitation energies of giant resonances, as we will show in the following sections.
It is not obvious that basis states with unperturbed energies more than 1.2 GeV below the Fermi energy can have a large influence on giant resonances with excitation energies in the MeV region. In the following section we will study a simple model which provides a deeper insight into this problem. A separable model ================= The model studied in this section represents a relativistic extension of the Brown and Bolsterli model [@BB.59], which has played an essential role in the understanding of the microscopic picture of collective excitations. The single-particle basis consists of 4 states, each of them $\Omega $-fold degenerate ($\nu =1,\ldots ,\Omega $). The first two states (1 and 2) correspond to particle levels with the free mass $m$, while the states 3 and 4 correspond to the negative energy levels with free mass $-m$. The model Hamiltonian reads $$H=H_{0}-\lambda _{s}^{{}}S^{\dagger }S+\lambda _{v}^{{}}V^{\dagger }V~, \label{E4.1}$$ where $H_0$ is the Hamiltonian which describes free Dirac particles $$H_{0}=\sum_{i=1}^{A} \Big({{\mbox{\boldmath $\alpha$}}}\,{\bf p}_{i }+ \beta m_{i }+ \frac{1}{2} \sigma \varepsilon_{i }\Big)~, \label{E4.2}$$ with $$\alpha =\left( \begin{array}{cccc} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{array} \right) ,\quad \beta =\left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{array} \right) ,\quad \sigma =\left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \end{array} \right) ~. \label{E4.3}$$ In Eq.(\[E4.2\]) $m_i = m$ is the free mass of particle $i$, $p_i = p$ denotes its momentum, and $\varepsilon_{i} = \varepsilon_{0} \ll m$ induces a small splitting between the levels 1 and 2 (and, of course, between 3 and 4).
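As a quick consistency check of the model algebra, the matrices (\[E4.3\]) satisfy $\alpha ^{2}=\beta ^{2}=1$, $\{\alpha ,\beta \}=0$, and $\sigma $ commutes with both $\alpha $ and $\beta $, so that the splitting term does not mix the levels. A short numerical sketch:

```python
import numpy as np

# The 4x4 matrices of the schematic model, Eq. (E4.3); check the Dirac algebra.
alpha = np.array([[0, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, 0]])
beta  = np.diag([1, 1, -1, -1])
sigma = np.diag([1, -1, 1, -1])

I4 = np.eye(4)
print(np.array_equal(alpha @ alpha, I4))              # alpha^2 = 1
print(np.array_equal(beta @ beta, I4))                # beta^2  = 1
print(np.array_equal(alpha @ beta + beta @ alpha, np.zeros((4, 4))))  # {alpha, beta} = 0
print(np.array_equal(sigma @ alpha, alpha @ sigma))   # sigma commutes with alpha
```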
The interaction consists of an attractive scalar field $S$ and a repulsive vector field $V$ $$S=\sum_{i=1}^{A}\left( \begin{array}{cccc} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 \\ 0 & 0 & -1 & 0 \end{array} \right) _{i },\quad \quad V=\sum_{i=1}^{A}\left( \begin{array}{cccc} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{array} \right) _{i}~, \label{E4.4}$$ with the strength parameters $\lambda _{s}$ and $\lambda _{v}$. In the formalism of second quantization the operators $H_{0}$, $S$, and $V$ take the forms $$\begin{aligned} H_{0} &=&p\sum_{\nu }c_{1\nu }^{\dagger }c_{3\nu }^{{}}+c_{2\nu }^{\dagger }c_{4\nu }^{{}}+h.c. \nonumber \\ && + m\sum_{\nu }c_{1\nu }^{\dagger }c_{1\nu }^{{}}+c_{2\nu }^{\dagger }c_{2\nu }^{{}}-c_{3\nu }^{\dagger }c_{3\nu }^{{}}-c_{4\nu }^{\dagger }c_{4\nu }^{{}} \nonumber \\ && + \frac{\varepsilon _{0}}{2}\sum_{\nu }c_{1\nu }^{\dagger }c_{1\nu }^{{}}-c_{2\nu }^{\dagger }c_{2\nu }^{{}}+c_{3\nu }^{\dagger }c_{3\nu }^{{}}-c_{4\nu }^{\dagger }c_{4\nu }^{{}}~, \label{E4.5} \\ S &=&\sum_{\nu }c_{1\nu }^{\dagger }c_{2\nu }^{{}}-c_{3\nu }^{\dagger }c_{4\nu }^{{}}+h.c.~, \label{E4.6} \\ V &=&\sum_{\nu }c_{1\nu }^{\dagger }c_{2\nu }^{{}}+c_{3\nu }^{\dagger }c_{4\nu }^{{}}+h.c. 
\label{E4.7}\end{aligned}$$ At the mean field level the diagonalization of the Dirac operator $$H_{0}=\left( \begin{array}{cccc} m+\frac{\varepsilon _{0}}{2} & 0 & p & 0 \\ 0 & m-\frac{\varepsilon _{0}}{2} & 0 & p \\ p & 0 & -m+\frac{\varepsilon _{0}}{2} & 0 \\ 0 & p & 0 & -m-\frac{\varepsilon _{0}}{2} \end{array} \right) ~, \label{E4.10}$$ results in the eigenvalues $$\epsilon _{p}=E+\frac{\varepsilon _{0}}{2},\quad \quad \epsilon _{h}=E-\frac{\varepsilon _{0}}{2},\quad \quad \epsilon _{\alpha }=-E+\frac{\varepsilon _{0}}{2},\quad \quad \epsilon _{\alpha ^{\prime }}=-E-\frac{\varepsilon _{0}}{2}, \label{E4.12}$$ and the corresponding eigenvectors are $$\psi _{p}=\left( \begin{array}{c} f \\ 0 \\ g \\ 0 \end{array} \right) ,\,\,\,\,\psi _{h}=\left( \begin{array}{c} 0 \\ f \\ 0 \\ g \end{array} \right) ,\,\,\,\,\,\psi _{\alpha }=\left( \begin{array}{c} -g \\ 0 \\ f \\ 0 \end{array} \right) ,\,\,\,\,\,\,\psi _{\alpha ^{\prime }}=\left( \begin{array}{c} 0 \\ -g \\ 0 \\ f \end{array} \right) ~, \label{E4.13}$$ respectively. We use the notation $$E=\sqrt{p^{2}+m^{2}},\,\quad \quad \quad f=\cos \frac{\phi }{2},\,\quad \quad \quad g=\sin \frac{\phi }{2}~, \label{E4.14}$$ with $$\tan \frac{\phi }{2}=\frac{p}{m+E}~. \label{E4.16}$$ A realistic choice of single particle energies is $$\begin{aligned} \epsilon _{ph} &=&\epsilon _{p}-\epsilon _{h}=\varepsilon _{0}\approx 10\text{ MeV}~, \nonumber \\ \epsilon _{\alpha h} &=&\epsilon _{\alpha }-\epsilon _{h}=-2E+\varepsilon _{0}\simeq -2\text{ GeV}~, \nonumber \\ \epsilon _{\alpha ^{\prime }h} &=&\epsilon _{\alpha ^{\prime }}-\epsilon _{h}=-2E\simeq -2\text{ GeV}~. \label{E4.18}\end{aligned}$$ In realistic calculations the ratio between large and small components of the Dirac spinors is approximately $f/g\approx 30$, i.e., $\phi \simeq 33^{\circ}$.
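The spectrum (\[E4.12\]) can be verified by diagonalizing the matrix (\[E4.10\]) numerically; the values of $m$, $p$ and $\varepsilon _{0}$ below are illustrative:

```python
import numpy as np

# Diagonalize the single-particle Dirac operator of Eq. (E4.10) and check
# the spectrum of Eq. (E4.12); m, p, eps0 are illustrative numbers (in MeV).
m, p, eps0 = 939.0, 260.0, 10.0
E = np.sqrt(p**2 + m**2)

H0 = np.array([[m + eps0 / 2, 0, p, 0],
               [0, m - eps0 / 2, 0, p],
               [p, 0, -m + eps0 / 2, 0],
               [0, p, 0, -m - eps0 / 2]])

eigs = np.sort(np.linalg.eigvalsh(H0))
expected = np.sort([E + eps0 / 2, E - eps0 / 2, -E + eps0 / 2, -E - eps0 / 2])
print(np.max(np.abs(eigs - expected)))   # ~0
```

The matrix decouples into two $2\times 2$ blocks, one for each eigenvalue of $\sigma$, which is why the spectrum has the simple closed form (\[E4.12\]).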
In the basis (\[E4.13\]) the matrices of the operators $S$ and $V$ are $$S=\sum_{i}\left( \begin{array}{cccc} 0 & \cos \phi & 0 & -\sin \phi \\ \cos \phi & 0 & -\sin \phi & 0 \\ 0 & -\sin \phi & 0 & -\cos \phi \\ -\sin \phi & 0 & -\cos \phi & 0 \end{array} \right) _{i}~, \label{E4.19}$$ $$V=\sum_{i}\left( \begin{array}{cccc} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{array} \right) _{i}~. \label{E4.20}$$ We notice the essential matrix elements $$\begin{array}{ll} S_{ph}=\cos \phi & V_{ph}=1 \\ S_{\alpha h}=-\sin \phi & V_{\alpha h}=0 \\ S_{\alpha ^{\prime }h}=0 & V_{\alpha ^{\prime }h}=0~. \end{array} \label{E4.21}$$ In analogy to the Brown-Bolsterli model, the unperturbed polarization function is $$\Pi _{FF^{\prime }}^{0}(\omega )=\frac{2\varepsilon _{0}^{{}}\Omega F_{ph}^{{}}F_{ph}^{\prime }}{\omega _{{}}^{2}-\varepsilon _{0}^{2}+i\eta }+\frac{2\varepsilon _{\alpha h}^{{}}\Omega F_{\alpha h}^{{}}F_{\alpha h}^{\prime }}{\omega _{{}}^{2}-\varepsilon _{\alpha h}^{2}+i\eta }~, \label{E4.22}$$ where the operators $F, F^{\prime } \in \{S, V\}$. In particular, $$\begin{aligned} \Pi _{SS}^{0}(\omega ) &=&\frac{2\varepsilon _{0}^{{}}\Omega \cos ^{2}\phi }{\omega _{{}}^{2}-\varepsilon _{0}^{2}+i\eta }-\frac{2E\Omega \sin ^{2}\phi }{\omega _{{}}^{2}-E_{{}}^{2}+i\eta }~, \label{E4.23} \\ \Pi _{VV}^{0}(\omega ) &=&\frac{2\varepsilon _{0}^{{}}\Omega }{\omega _{{}}^{2}-\varepsilon _{0}^{2}+i\eta }~, \label{E4.24} \\ \Pi _{SV}^{0}(\omega ) &=&\Pi _{VS}^{0}(\omega )=\frac{2\varepsilon _{0}^{{}}\Omega \cos \phi }{\omega _{{}}^{2}-\varepsilon _{0}^{2}+i\eta }~. 
\label{E4.25}\end{aligned}$$ The RRPA frequencies are the roots of the determinant $$\det \left( 1-\left( \begin{array}{cc} -\Pi _{SS}^{0}(\omega )\lambda _{s} & \Pi _{SV}^{0}(\omega )\lambda _{v} \\ -\Pi _{SV}^{0}(\omega )\lambda _{s} & \Pi _{VV}^{0}(\omega )\lambda _{v} \end{array} \right) \right) =0~. \label{E4.26}$$ The essential difference with respect to the non-relativistic Brown-Bolsterli model is the additional term $2E\Omega \sin ^{2}\phi /(\omega ^{2}-E^{2}+i\eta )$ in the scalar polarization $\Pi _{SS}^{0}(\omega )$. Without this term (i.e. $\phi =0$), the eigenfrequencies would be determined by $$\omega _{{}}^{2}=\varepsilon _{0}^{2}-2\varepsilon _{0}^{{}}\Omega (\lambda _{s}-\lambda _{v})~, \label{E4.28}$$ with the usual cancellation of scalar and vector interactions. With the additional term, states with unperturbed energies at $\simeq -2E$ are included in the RPA configuration space. The interaction between these states and the $ph$-states is determined by the matrix elements of the scalar interaction $$v_{\alpha h^{\prime }hp}=-\lambda _{s}\cos \phi \sin \phi ~. \label{E4.29}$$ These matrix elements are not reduced by a similar term coming from the vector interaction. In our simplified model this vector-induced term vanishes as a consequence of the relativistic structure of the equations: while for a state from the Fermi sea the large component is the upper component of the spinor, a state from the Dirac sea has a large lower component of the spinor. Due to the $\gamma$-matrix structure of the vertex, the matrix elements of the vector interaction vanish. In realistic calculations these matrix elements do not vanish identically, but they are reduced by an order of magnitude as compared to the corresponding scalar terms. At excitation energies in the MeV region, $\omega \ll E$ in the denominator of the second term of Eq.(\[E4.23\]), and we obtain an energy-independent term $(2\Omega \lambda_{s}\sin ^{2}\phi )/E$ in the dispersion relation.
The eigenfrequencies are now determined by $$\omega _{{}}^{2}=\varepsilon _{0}^{2}-\varepsilon _{0}^{{}}2\Omega (\lambda _{s}-\lambda _{v})+\varepsilon _{0}^{{}}2\Omega \lambda _{s}\sin ^{2}\phi \frac{E+2\Omega \lambda _{v}}{E+2\Omega \lambda _{s}\sin ^{2}\phi }~, \label{E4.30}$$ i.e. we find an additional repulsion for the collective state. An illustrative case: the giant monopole resonance ================================================== The RPA equations can be solved either by diagonalizing the RPA matrix in configuration space (see Eq. (\[E3.19\])), or by solving the Bethe-Salpeter equation (\[E3.17\]) for the response function in momentum space [@HG.89; @MGT.97]. In both cases, of course, one first has to determine the single-nucleon spinors and the mean fields which correspond to the stationary solution for the ground state. The Dirac-Hartree equations and the equations for the meson fields are solved self-consistently in the mean-field approximation. The eigenvalue problem is solved, for instance, by diagonalization in a spherically symmetric harmonic oscillator basis [@GRT.90]. From the spectrum of single-nucleon states the RPA configuration space is built: particle-hole ($ph$) and antiparticle-hole ($\alpha h$) pairs which obey the selection rules for angular momentum, parity and isospin. The number of basis states is also determined by two cut-off parameters: the maximal $ph$-energy ($\epsilon _{m}-\epsilon _{i}<E_{\max }$) and the minimal $\alpha h$-energy ($\epsilon _{\alpha}-\epsilon _{i}>E_{\min }$). With this basis the RPA matrix is calculated for the same effective interaction that determines the ground state, or the free polarization function $\Pi ^{(0)}$ is calculated in the response function method. Both methods require that the single-particle continuum is discretized. In order to smooth out the RPA strength function, the discrete strength distribution is folded by a Lorentzian of width $\Gamma $.
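The Lorentzian folding described above amounts to replacing each discrete RPA peak by a normalized Lorentzian of width $\Gamma $. A short sketch (the discrete energies and strengths are illustrative placeholders, not the $^{116}$Sn results):

```python
import numpy as np

# Fold a discrete strength distribution with Lorentzians of width Gamma;
# the discrete peak energies and strengths below are illustrative placeholders.
omega_mu = np.array([12.0, 16.0, 22.0])   # discrete RPA peak energies (MeV)
b_mu     = np.array([0.2, 1.0, 0.3])      # corresponding strengths

gamma = 2.0                               # Lorentzian width Gamma (MeV)
E = np.linspace(0.0, 40.0, 801)           # energy grid (MeV)

# each discrete peak is replaced by a normalized Lorentzian
S = np.sum(b_mu[:, None] * (gamma / (2.0 * np.pi))
           / ((E[None, :] - omega_mu[:, None]) ** 2 + gamma**2 / 4.0), axis=0)

total = np.sum(S) * (E[1] - E[0])         # integrated folded strength
print(E[np.argmax(S)])                    # peak sits at the strongest state
print(total)                              # close to b_mu.sum(), up to the tails
```

The folding conserves the total strength up to the Lorentzian tails that leak outside the energy window.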
In the response function method the folding is automatic if a finite parameter $i\Gamma $ is used in the denominators of Eqs. (\[E3.15\]-\[E3.18\]) instead of the infinitesimal parameter $i\eta $. We have verified that identical results are obtained with both methods. The large effect of Dirac sea states on isoscalar strength distributions is illustrated in Fig. 1, where we display the isoscalar monopole RRPA strength in $^{116}$Sn calculated with the NL3 effective interaction [@LKR.97]; the width of the Lorentzian is $\Gamma $ = 2 MeV. Recent experimental data are available for the isoscalar giant monopole resonance in $^{116}$Sn [@YCL.99]. The solid curve represents the full RRPA strength, and it displays a pronounced peak at 16 MeV, in excellent agreement with the measured value of 15.9 MeV [@YCL.99]. Giant monopole resonances in spherical nuclei are in best agreement with experimental data when calculated with effective Lagrangians with a nuclear matter compression modulus in the range 250-270 MeV [@VLB.97; @VWR.00; @MGW.00]. The nuclear matter incompressibility of the NL3 effective interaction is 272 MeV. The long-dashed curve in Fig. 1 corresponds to the case with no ${\alpha}h$ pairs in the RRPA configuration space. We notice that, without the contribution from Dirac sea states, the strength distribution is shifted to lower energy: the position of the peak moves from $\approx 16$ MeV to below 10 MeV if ${\alpha}h$ pairs are not included in the RRPA basis. Quantitatively similar results are also obtained with other effective interactions. In Fig. 1 we have also separated the contributions of vector and scalar mesons to the ${\alpha}h$ matrix elements. The dash-dot-dot (dash-dot) curve corresponds to calculations in which only vector mesons (scalar mesons) were included in the coupling between the Fermi sea and Dirac sea states. Both interactions were included in the positive energy particle-hole matrix elements.
The resulting strength distributions nicely illustrate the dominant contribution of the isoscalar scalar sigma meson to the ${\alpha}h$ matrix elements, in complete agreement with the result obtained in the previous section for the schematic Brown-Bolsterli model. It is also interesting to examine the effect of the coupling via the spatial components of the vector meson fields, i.e. the term $-{{\mbox{\boldmath $\alpha$}}}^{(1)}{{\mbox{\boldmath $\alpha$}}}^{(2)}$ in the interaction of Eq. (\[E2.16\]). In time-dependent calculations this coupling results from the nucleon currents. In Fig. 2 we display the isoscalar monopole RRPA strength in $^{116}$Sn calculated as follows: a) full RRPA (solid curve); b) without the matrix elements of the spatial components of the vector meson fields (dot-dashed curve); c) without the contribution of the Dirac sea to the matrix elements of the spatial components of the vector meson fields (dashed curve); and d) the free Hartree response function. The currents do not contribute to the static polarizability and to the $M_{-1}$ moment. At finite frequencies, however, their contribution is attractive and it lowers the ISGMR energy by $\approx 2$ MeV. Therefore, if the contribution of the spatial components of the vector fields is neglected, a better agreement with experimental values would be obtained with a lower nuclear matter incompressibility: $K_{\infty }\simeq 230$ MeV. Incidentally, this lower value for the nuclear matter incompressibility is the one advocated by non-relativistic RPA calculations [@BBD.95; @CGB.00]. It has already been noted in time-dependent RMF calculations [@VLB.97], as well as in recent relativistic RPA studies [@MGW.00], that effective interactions which reproduce the ISGMR excitation energies in finite nuclei have a somewhat higher nuclear matter incompressibility than the corresponding non-relativistic Skyrme or Gogny interactions.
Here we point to a possible solution to this puzzle: the current terms in the matrix elements of the particle-hole interaction (\[E3.10\],\[E3.11\]) are given by $$\langle p||j_{1}(kr)[{{\mbox{\boldmath $\alpha$}}}Y_{1}({\bf \hat{r}})]_{J=0}||h\rangle =\langle p||j_{1}(kr)\left( \begin{array}{cc} 0 & [{{\mbox{\boldmath $\sigma$}}}Y_{1}]_{J=0} \\ \lbrack {{\mbox{\boldmath $\sigma$}}}Y_{1}]_{J=0} & 0 \end{array} \right) ||h\rangle~, \label{E5.1}$$ which is a typical relativistic term because it couples large and small components of a Dirac spinor. Since they change parity, terms of the type $[{\bf \sigma }Y_{1}]_{J=0}$ cannot contribute to the giant monopole resonance in a non-relativistic calculation. Conclusions =========== In the last couple of years, several discrepancies have been reported between the results obtained with the relativistic random phase approximation and the time-dependent relativistic mean-field theory, when applied to the description of small amplitude collective motion in atomic nuclei. In order to resolve this puzzle, in the present work we have derived the RRPA from the TDRMF equations in the limit of small amplitude motion. The relativistic single-particle density matrix $\hat{\rho}(t)$ has been expanded in terms of the stationary solutions of the ground state. We have shown that the [*no-sea*]{} approximation, which is essential for practical application of the RMF theory in finite nuclei, leads to a fundamental difference between the relativistic and non-relativistic approaches. While in the non-relativistic case the time-dependent variation of the density $\delta \hat{\rho}(t)=\hat{\rho}(t)-\hat{\rho}^{(0)}$ has only $ph$-matrix elements (particle ($p$) above the Fermi surface, hole ($h$) in the Fermi sea), in the relativistic case $\delta \hat{\rho}$ contains also $\alpha h$-matrix elements, where $\alpha$ denotes unoccupied states in the Dirac sea.
The fact that states in the Dirac sea can be occupied is a direct consequence of the [*no-sea*]{} approximation. In constructing the matrix $\delta \hat{\rho}$ one has to take into account that a complete basis of single particle states contains both positive and negative energy solutions of the Dirac equation. Already in Ref. [@DF.90] it has been shown that an RRPA calculation, consistent with the mean-field model in the [*no-sea*]{} approximation, necessitates configuration spaces that include both particle-hole pairs and pairs formed from occupied states and negative-energy states. The contributions from configurations built from occupied positive-energy states and negative-energy states are essential for current conservation and the decoupling of the spurious state. What is less obvious, however, is that the inclusion of negative-energy single particle states in the RRPA configuration space has such a dramatic effect on the calculated excitation energies of isoscalar giant resonances. In a schematic model we have shown that, due to the relativistic structure of the RPA equations, the matrix elements of the time-like component of the vector meson fields which couple the $\alpha h$-configurations with the $ph$-configurations vanish. In realistic calculations these matrix elements do not vanish identically, but they are strongly reduced with respect to the corresponding matrix elements of the isoscalar scalar meson field. As a result, the well known cancellation between the contributions of the $\sigma $ and $\omega $ fields, which, for instance, leads to the ground-state solution, does not take place, and we find large matrix elements coupling the $\alpha h$-sector with the $ph$-configurations. In addition, the number of $\alpha h$-configurations which can couple to the $ph$-configurations in the neighborhood of the Fermi surface is much larger than the number of $ph$-configurations. 
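This last point can be made quantitative in a deliberately schematic toy model (all numbers below are invented for illustration and do not come from the actual RRPA matrix, which is in fact non-Hermitian): a single ph-like state at energy $e_{ph}$ coupled with equal strength $v$ to $n$ degenerate, far-removed $\alpha h$-like configurations at energy $e_{\alpha h}$.

```python
from math import sqrt

# Schematic illustration only: one ph-like state at e_ph (MeV) coupled
# with strength v to n_ah degenerate, far-removed alpha-h-like states
# at e_ah.  All numerical values are invented for illustration.
def ph_like_energy(n_ah, e_ph=15.0, e_ah=-1200.0, v=50.0):
    """Return the ph-dominated eigenvalue of the toy coupling matrix.

    The n_ah degenerate states couple to the ph state only through
    their symmetric combination, so the full (n_ah+1) x (n_ah+1)
    problem reduces to a 2x2 matrix with effective coupling
    sqrt(n_ah)*v -- the same collectivity mechanism as in the
    Brown-Bolsterli model.
    """
    if n_ah == 0:
        return e_ph
    mean = 0.5 * (e_ph + e_ah)
    halfgap = 0.5 * (e_ph - e_ah)
    # upper eigenvalue of [[e_ph, sqrt(n)*v], [sqrt(n)*v, e_ah]]
    return mean + sqrt(halfgap ** 2 + n_ah * v ** 2)

shifts = [ph_like_energy(n) - 15.0 for n in (0, 2, 10)]
# the displacement grows monotonically with the number of coupled
# alpha-h configurations (roughly n*v**2/(e_ph - e_ah) for small n)
```

Perturbatively the displacement is $\approx n\,v^{2}/(e_{ph}-e_{\alpha h})$; in the full RRPA the size and sign of the effect are of course set by the actual matrix elements, but the toy model mirrors the counting argument above: a larger $\alpha h$ space means a larger collective shift.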
The larger configuration space can increase the effect by enhancing the collectivity of the $\alpha h$-sector. The large effect of Dirac sea states on isoscalar strength distributions has been illustrated for the giant monopole resonance in $^{116}$Sn. We have also shown that currents cannot be neglected in the calculation of giant resonances. Of course they do not occur in the static case, i.e. in the calculation of the static polarizability or the $M_{-1}$ moment. At finite frequencies, however, time reversal invariance is broken and the spatial components of the vector meson fields play an important role. This effect is known as [*nuclear magnetism*]{}. It is a genuine relativistic effect, because the matrix elements couple the large and small components of a Dirac spinor. Since the spatial components of the vector fields have the form $-{{\mbox{\boldmath $\alpha$}}}^{(1)}{{\mbox{\boldmath $\alpha$}}}^{(2)}$ $D_{\omega }({\bf r}_{1},{\bf r}_{2})$, they result in an attractive contribution which lowers the value of the calculated excitation energies of giant resonances. This explains the difference between the non-relativistic and relativistic RPA results for the isoscalar giant monopole resonances in spherical nuclei, and the corresponding predictions for the nuclear matter compression modulus.

[**ACKNOWLEDGMENTS**]{}

P.R. acknowledges the support and the hospitality extended to him during his stay at the IPN-Orsay, where a large part of this work was completed. This work has been supported in part by the Bundesministerium für Bildung und Forschung under the project 06 TM 979 and by the Deutsche Forschungsgemeinschaft. It was also partially supported by the National Natural Science Foundation of China under grant Nos. 19847002 and 19835010-10075080, and by the Major State Basic Research Development Program under contract No. G200077407.

B.D. Serot and J.D. Walecka, Adv. Nucl. Phys. [**16**]{}, 1 (1986).
P.-G. Reinhard, Rep. Progr. Phys. [**52**]{}, 439 (1989).
B.D. Serot, Rep. Prog. Phys. 
[**55**]{}, 1855 (1992).
P. Ring, Progr. Part. Nucl. Phys. [**37**]{}, 193 (1996).
C.J. Horowitz and B.D. Serot, Nucl. Phys. [**A399**]{}, 529 (1983).
A. Bouyssy, J.F. Mathiot, N. Van Giai, and S. Marcos, Phys. Rev. [**C36**]{}, 380 (1987).
P. Bernados, V.N. Fomenko, N. Van Giai, M.L. Quelle, S. Marcos, R. Niembro, and L.N. Savushkin, Phys. Rev. [**C48**]{}, 2665 (1993).
C.J. Horowitz and B.D. Serot, Phys. Lett. [**140B**]{}, 181 (1984).
D.A. Wasson, Nucl. Phys. [**A535**]{}, 456 (1991).
J. Boguta and A. R. Bodmer, Nucl. Phys. [**A292**]{}, 413 (1977).
W. Koepf and P. Ring, Nucl. Phys. [**A493**]{}, 61 (1989).
A.V. Afanasjev, J. König, and P. Ring, Nucl. Phys. [**A608**]{}, 107 (1996).
A.V. Afanasjev, G.A. Lalazissis, and P. Ring, Nucl. Phys. [**A634**]{}, 395 (1998).
A.V. Afanasjev, J. König, and P. Ring, Phys. Rev. [**C60**]{}, 051303 (1999).
D. Vretenar, H. Berghammer, and P. Ring, Nucl. Phys. [**A581**]{}, 679 (1995).
B. Podobnik, D. Vretenar, and P. Ring, Z. Phys. [**A354**]{}, 375 (1996).
D. Vretenar, G.A. Lalazissis, R. Behnsch, W. Pöschl, and P. Ring, Nucl. Phys. [**A621**]{}, 853 (1997).
R.J. Furnstahl, Phys. Lett. [**152B**]{}, 313 (1985).
M. L’Huillier and N. Van Giai, Phys. Rev. [**C39**]{}, 2022 (1989).
P.G. Blunden and M. McCorquodale, Phys. Rev. [**C38**]{}, 1861 (1988).
J.F. Dawson and R.J. Furnstahl, Phys. Rev. [**C42**]{}, 2009 (1990).
C.J. Horowitz and J. Piekarewicz, Nucl. Phys. [**A511**]{}, 461 (1990).
Z.Y. Ma, N. Van Giai, H. Toki, and M. L’Huillier, Phys. Rev. [**C42**]{}, 2385 (1997).
Z.Y. Ma, H. Toki, and N. Van Giai, Nucl. Phys. [**A627**]{}, 1 (1997).
N. Van Giai, Z.Y. Ma, H. Toki, and B.Q. Chen, Nucl. Phys. [**A649**]{}, 37c (1999).
D. Vretenar, P. Ring, G.A. Lalazissis, and N. Paar, Nucl. Phys. [**A649**]{}, 29c (1999).
P. Ring and P. Schuck, [*The Nuclear Many-Body Problem*]{}, Springer Verlag, New York 1980.
D. Vretenar, A. Wandelt, and P. Ring, Phys. Lett. [**B487**]{}, 334 (2000).
Z.Y. Ma, N. Van Giai, A. Wandelt, D. 
Vretenar, and P. Ring, Nucl. Phys. [**A 685**]{} (2001) (in press).
P. Ring and J. Speth, Nucl. Phys. [**A235**]{}, 315 (1974).
G.E. Brown and M. Bolsterli, Phys. Rev. Lett. [**3**]{}, 472 (1959).
T. Kohmura, Y. Miyama, T. Nagai, S. Ohnaka, J. da Providencia, and T. Kodama, Phys. Lett. [**B 226**]{}, 207 (1989).
D.J. Thouless, Nucl. Phys. [**22**]{}, 78 (1961).
Y.K. Gambhir, P. Ring, and A. Thimet, Ann. Phys. (N.Y.) [**198**]{}, 132 (1990).
D.H. Youngblood, H.L. Clark, and Y.W. Lui, Phys. Rev. Lett. [**82**]{}, 691 (1999).
G.A. Lalazissis, J. König, and P. Ring, Phys. Rev. [**C 55**]{}, 540 (1997).
J.P. Blaizot, J.F. Berger, J. Dechargé, and M. Girod, Nucl. Phys. [**A 591**]{}, 435 (1995).
G. Colò, N. Van Giai, P.F. Bortignon, and M.R. Quaglia, Phys. Lett. [**B 485**]{}, 362 (2000).

![ISGMR strength distributions in $^{116}$Sn calculated with the NL3 effective interaction. The solid and long-dashed curves are the RRPA strengths with and without the inclusion of Dirac sea states, respectively. The dash-dot-dot (dash-dot) curve corresponds to calculations in which only vector mesons (scalar mesons) are included in the coupling between the Fermi sea and Dirac sea states.[]{data-label="Fig.1"}](ringetal_fig1.eps)

![Effects of nuclear magnetism on the IS GMR strength distribution in $^{116}$Sn. Solid curve: full RRPA calculation; dash-dotted curve: without the matrix elements of the spatial components of the vector meson fields; dashed curve: without the contribution of the Dirac sea to the matrix elements of the spatial components of the vector meson fields. The free Hartree response is also displayed.[]{data-label="Fig.2"}](ringetal_fig2.eps)

[^1]: also Institute of Theoretical Physics, Beijing, P.R. of China

[^2]: On leave from University of Zagreb, Croatia
{ "pile_set_name": "ArXiv" }
How concussions are changing high school football. HIDALGO COUNTY, TEXAS (KVEO NEWSCENTER 23) — Things are different for high school football coaches in the Rio Grande Valley. "It's changed dramatically in regards to awareness," said Sharyland head coach Ron Adame. "As coaches, being aware that concussions oftentimes can lead to long-term damage. Those days, the real common word that was used was just you got your bell rung. You just go shake it off or give him some smelling salts. Those days are gone." The University Interscholastic League, the governing body for high school sports in Texas, began mandating in 2011 that all coaches take a two-hour training course on how to spot concussion symptoms. Any player exhibiting dilated pupils, light sensitivity or dizziness after a tough hit must leave the game. Any player guilty of targeting another player's head is automatically kicked out. Players who suffer a concussion can't return to practice until a doctor clears it. Even then, they must ease into full practice and contact. All those changes are a 180 from the mindset of coaches like Mission's Mario Peña, who has been walking sidelines for 35 years. "Back in the day, it was different," Peña said. "Different because one, we didn't have trainers. Back then your coaches were the trainers… everything that has changed has been directed at the safety of the football player." As recently as the 1990s, coaches made players earn water breaks during practice. And practices would last well after dusk. Nowadays coaches are restricted to eight hours of practice a week and only 90 minutes of full, game-speed contact. Mission's athletic department had 23 concussions in 2011, the great majority of them in football. Sharyland has had two concussions this year, none of them at the varsity level. The gridiron is a tough place to play. 
Toughness and durability have always been celebrated, which leads to another challenge: players consider themselves to be warriors who ignore and play through pain. Coaches want to win ball games. And each must go against what they've been conditioned to believe... for the sake of safety. "That's number one, educate our kids on the symptoms, don't try to be macho, don't try to be, I don't want to leave the team, so I don't want to tell the coach anything," Peña said. "One common practice is to take their helmet away, to hide it from them," Adame said. "Otherwise the tendency is for some of these individuals to try to want to get back into the game. You can't get in and play if you don't have your helmet." Schools have clearly defined roles for coaches and trainers when a player is hurt, adding another hurdle to returning to games for the sake of safety. "Ultimately, if there's an injury, I refer that kid to our trainer. I have a standing rule between me and my trainer. Don't tell me how to coach and I won't tell you how to do your job." The concussion problem swelled up because nobody understood the risks. Empirical scientific evidence has changed that. These days, it's all about minimizing the damage from America's most popular sport. "Football is and always will be a contact sport; some would call it violent," Adame said. "Nowadays, participants seem to be getting bigger and stronger and faster… I wouldn't try to dissuade them from playing, as long as they know the risks involved in playing the sport."
{ "pile_set_name": "Pile-CC" }
Internet of things and Big Data as potential solutions to the problems in waste electrical and electronic equipment management: An exploratory study. Management of Waste Electrical and Electronic Equipment (WEEE) is a vital part of solid waste management, yet some difficult issues still require attention. This paper investigates the potential of applying the Internet of Things (IoT) and Big Data as solutions to WEEE management problems. The massive data generated during the production, consumption and disposal of Electrical and Electronic Equipment (EEE) fit the characteristics of Big Data. Using state-of-the-art communication technologies, the IoT derives the WEEE "Big Data" from the life cycle of EEE, and Big Data technologies process the WEEE "Big Data" to support decision making in WEEE management. A framework for implementing the IoT and Big Data technologies is proposed, and its multiple layers are illustrated. Case studies with potential application scenarios of the framework are presented and discussed. As an unprecedented exploration, the combined application of the IoT and Big Data technologies in WEEE management brings a series of opportunities as well as new challenges. This study provides insights and visions for stakeholders in solving WEEE management problems in the context of IoT and Big Data.
{ "pile_set_name": "PubMed Abstracts" }
Q: Lower bound for $\ln x$ using Lagrange's mean value theorem or Rolle's theorem I have to prove this inequality: $$ \ln x>\frac{2(x-1)}{x+1} \hspace{15pt}, \hspace{15pt}\text{where}\hspace{5pt}x>1 $$ using either Lagrange's mean value theorem or Rolle's theorem. Can someone give me a little hint? A: Define $$f(x):=\log x-2\frac{x-1}{x+1}=\log x-2\left(1-\frac{2}{x+1}\right)\,\,,\,x>1\Longrightarrow$$ $$ f'(x)=\frac{1}{x}-\frac{4}{(x+1)^2}> 0\Longleftrightarrow\frac{(x-1)^2}{x(x+1)^2}>0$$ and the last inequality is true for any $\,x>1\,$, so $\,f\,$ is strictly increasing on $\,(1,\infty)\,$, and thus $$\forall\,\,x>1\;\;,\;\;f(x)>f(1)=0$$ ... and without Lagrange or Rolle...! Added: Ok, with Lagrange's MVT (sigh!): Being $\,f(x)\,$ the same function as above, we get that for any $\,x>1\,$ there exists $\,c\in(1,x)\,$ s.t.: $$\frac{\log x-2\frac{x-1}{x+1}}{x-1}=\frac{f(x)-f(1)}{x-1}=f'(c)>0$$ the demonstration of the rightmost inequality being the same as in the first part, and since $\,x-1>0\,$ we get at once $\,f(x)-f(1)>0\,$, which is the wanted inequality.
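As a quick numerical sanity check of the claimed inequality (not a proof, just sampling a few points $x>1$):

```python
from math import log

# f(x) = ln(x) - 2*(x - 1)/(x + 1); the claim is f(x) > 0 for all x > 1
def f(x):
    return log(x) - 2.0 * (x - 1.0) / (x + 1.0)

samples = [1.01, 1.5, 2.0, 10.0, 1e6]
assert all(f(x) > 0 for x in samples)
# and f(1) = 0, the boundary value used in the proof
assert abs(f(1.0)) < 1e-12
```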
{ "pile_set_name": "StackExchange" }
Q: Angular /w ngrx - consecutive API calls I'm implementing ngrx in an Angular 4 app. The code structure of the redux-related part is based on the example app from the ngrx repo (https://github.com/ngrx/example-app). Now I'm wondering how to implement something like this: I've got a form for some kind of entity. On submit I send a POST request to the API with just the name of that entity. In the response I get the id of the newly created entity. Immediately after that I want to send a second request with the rest of the form values and with the id I've just received. Where and how should I put that second request? A: How to implement consecutive API calls depends on how cohesive the calls should be. What I mean by that is whether you view these two calls as a single 'transaction' where both requests have to succeed for you to successfully change your state. Obviously, if the first request fails, the second request can't be started because it depends on data from the first request. But... What should happen when the first request succeeds and the second request fails? Can your app continue its work with only the id from the first request and without the second request, or will it end up in an inconsistent state? I am going to cover two scenarios:

Scenario 1: When either of the requests fails, you treat it as if the whole 'transaction' has failed and therefore don't care which request has failed.

Scenario 2: When request 1 fails, request 2 will not be executed. When request 2 fails, request 1 will still be viewed as successful.

Scenario 1

Since both requests have to succeed, you can view both requests as if they were only one request. 
In this case I suggest hiding the consecutive calls inside a service (this approach is not specific to ngrx/redux, it is just plain RxJs):

@Injectable()
export class PostService {
  private API_URL1 = 'http://your.api.com/resource1';
  private API_URL2 = 'http://your.api.com/resource2';

  constructor(private http: Http) { }

  postCombined(formValues: { name: string, age: number }): Observable<any> {
    return this.http.post(this.API_URL1, { name: formValues.name })
      .map(res => res.json())
      .switchMap(post1result => this.http.post(this.API_URL2, {
          /* access to post1result and formValues */
          id: post1result.id,
          age: formValues.age,
          timestamp: new Date()
        })
        .map(res => res.json())
        .mergeMap(post2result => Observable.of({
          /* access to post1result and post2result */
          id: post1result.id,
          name: post1result.name,
          age: post2result.age,
          timestamp: post2result.timestamp
        }))
      );
  }
}

Now you can use the postCombined method in an effect like any other service method, as showcased in the ngrx-example-app. If either request fails, the service will throw an error which you can catch and handle in your effect. If both requests succeed, you will get back the data that is defined inside of mergeMap. As you can see, it is possible to return merged data from both request-responses.

Scenario 2

With this approach you can distinguish the results of the two requests and react differently if either one fails. I suggest breaking the two calls into independent actions so you can reduce each case independently. 
First, the service now has two independent methods (nothing special here):

post.service.ts

@Injectable()
export class PostService {
  private API_URL1 = 'http://your.api.com/resource1';
  private API_URL2 = 'http://your.api.com/resource2';

  constructor(private http: Http) { }

  post1(formValues: { name: string }): Observable<{ id: number }> {
    return this.http.post(this.API_URL1, formValues).map(res => res.json());
  }

  post2(receivedId: number, formValues: { age: number }): Observable<any> {
    return this.http.post(this.API_URL2, {
        id: receivedId,
        age: formValues.age,
        timestamp: new Date()
      })
      .map(res => res.json());
  }
}

Next, define request-, success- and failure-actions for both requests:

post.actions.ts

import { Action } from '@ngrx/store';

export const POST1_REQUEST = 'POST1_REQUEST';
export const POST1_SUCCESS = 'POST1_SUCCESS';
export const POST1_FAILURE = 'POST1_FAILURE';

export const POST2_REQUEST = 'POST2_REQUEST';
export const POST2_SUCCESS = 'POST2_SUCCESS';
export const POST2_FAILURE = 'POST2_FAILURE';

export class Post1RequestAction implements Action {
  readonly type = POST1_REQUEST;
  constructor(public payload: { name: string, age: number }) { }
}

export class Post1SuccessAction implements Action {
  readonly type = POST1_SUCCESS;
  constructor(public payload: { id: number }) { }
}

export class Post1FailureAction implements Action {
  readonly type = POST1_FAILURE;
  constructor(public error: any) { }
}

export class Post2RequestAction implements Action {
  readonly type = POST2_REQUEST;
  constructor(public payload: { id: number, name: string, age: number }) { }
}

export class Post2SuccessAction implements Action {
  readonly type = POST2_SUCCESS;
  constructor(public payload: any) { }
}

export class Post2FailureAction implements Action {
  readonly type = POST2_FAILURE;
  constructor(public error: any) { }
}

export type Actions = Post1RequestAction
  | Post1SuccessAction
  | Post1FailureAction
  | Post2RequestAction
  | Post2SuccessAction
  | Post2FailureAction

And now we can define two effects 
that will run when the request-actions are dispatched and in turn will dispatch either success- or failure-actions depending on the outcome of the service call:

post.effects.ts

import { PostService } from '../services/post.service';
import * as post from '../actions/post';

@Injectable()
export class PostEffects {

  @Effect()
  post1$: Observable<Action> = this.actions$
    .ofType(post.POST1_REQUEST)
    .map(toPayload)
    .switchMap(formValues => this.postService.post1(formValues)
      .mergeMap(post1Result => Observable.from([
        /*
         * dispatch an action that signals that
         * the first request was successful
         */
        new post.Post1SuccessAction(post1Result),
        /*
         * dispatch an action that triggers the second effect;
         * as payload we deliver the id we received from the first call
         * and any other values the second request needs
         */
        new post.Post2RequestAction({
          id: post1Result.id,
          name: formValues.name,
          age: formValues.age
        })
      ]))
      .catch(err => Observable.of(new post.Post1FailureAction(err)))
    );

  @Effect()
  post2$: Observable<Action> = this.actions$
    /*
     * this effect will only run if the first was successful,
     * since it depends on the id being returned from the first request
     */
    .ofType(post.POST2_REQUEST)
    .map(toPayload)
    .switchMap(formValuesAndId => this.postService.post2(
        /* we have access to the id of the first request */
        formValuesAndId.id,
        /* the rest of the form values we need for the second request */
        { age: formValuesAndId.age }
      )
      .map(post2Result => new post.Post2SuccessAction(post2Result))
      .catch(err => Observable.of(new post.Post2FailureAction(err)))
    );

  constructor(private actions$: Actions, private postService: PostService) { }
}

Notice the mergeMap in combination with Observable.from([..]) in the first effect. It allows you to dispatch a Post1SuccessAction that can be reduced (by a reducer) as well as a Post2RequestAction that will trigger the second effect to run. In case the first request fails, the second request will not run, since the Post2RequestAction is not dispatched. 
As you can see, setting up actions and effects this way allows you to react to a failed request independently from the other request. To start the first request, all you have to do is dispatch a Post1RequestAction when you submit the form, for example: this.store.dispatch(new post.Post1RequestAction({ name: 'Bob', age: 45 }))
{ "pile_set_name": "StackExchange" }
Development of serum antibody to toxic shock toxin among individuals with toxic shock syndrome in Wisconsin. The presence of Staphylococcus aureus producing toxic shock toxin (TST) and the absence of antibody to TST (anti-TST) in acute-phase sera are markers for toxic shock syndrome (TSS). We used radioimmunoassay methods to examine 133 acute-phase and 277 convalescent-phase serum specimens from 181 patients with TSS for anti-TST. Among confirmed menstrual cases, nine (9.5%) of 95 patients had demonstrable anti-TST in acute-phase sera obtained during the first seven days of illness; patients with probable or non-menstrual TSS had a higher prevalence of anti-TST in acute-phase sera. Five (33.3%) of 15 individuals with confirmed menstrual TSS developed anti-TST as early as seven to nine days after TSS onset; 32 (62.7%) of 51 patients had demonstrable anti-TST in sera obtained more than one year after their episode of TSS. This study demonstrates a gradual rate and low magnitude of development of anti-TST after TSS and supports the diagnostic usefulness of measuring anti-TST levels in sera from patients suspected of having TSS.
{ "pile_set_name": "PubMed Abstracts" }
sing order. -40, -5, -2, 4 Sort 10, -16, 16 in ascending order. -16, 10, 16 Sort -0.3, 163, 14/3, -5. -5, -0.3, 14/3, 163 Sort -43, 5, 43, -0.3. -43, -0.3, 5, 43 Put 0.2, 12, -1.8 in descending order. 12, 0.2, -1.8 Put -26, 0, -2 in decreasing order. 0, -2, -26 Sort -1, 3, 5, 0. -1, 0, 3, 5 Put 5, 201, -6 in descending order. 201, 5, -6 Put -3, 5, 0, 1, 148 in ascending order. -3, 0, 1, 5, 148 Put -28, -1, -5, 5 in increasing order. -28, -5, -1, 5 Sort -9, 3, -4, -2 in descending order. 3, -2, -4, -9 Put -8, 6, 3, 27 in decreasing order. 27, 6, 3, -8 Put 7, -10, -3 in descending order. 7, -3, -10 Sort -36, 2/5, -5, 5 in increasing order. -36, -5, 2/5, 5 Sort -12, -1, 6, 0 in decreasing order. 6, 0, -1, -12 Sort 2, 1, 1/2, -3. -3, 1/2, 1, 2 Put 2, 1, 0, 4, 1042 in ascending order. 0, 1, 2, 4, 1042 Put -8, -4, -5 in descending order. -4, -5, -8 Put -15, 1, -5, 4 in decreasing order. 4, 1, -5, -15 Sort 5, -10, 71, 3, 1 in decreasing order. 71, 5, 3, 1, -10 Sort 5, 4, 0.2, -3 in ascending order. -3, 0.2, 4, 5 Sort 0.5, -1, -1/70, -0.4, -29 in increasing order. -29, -1, -0.4, -1/70, 0.5 Sort -5, -3, 0.62, 3. -5, -3, 0.62, 3 Sort 2, -0.9, 3, -4/7, -0.15 in ascending order. -0.9, -4/7, -0.15, 2, 3 Sort 0.03479, -2/7, -0.2 in decreasing order. 0.03479, -0.2, -2/7 Put -9, 8, -5, 0 in ascending order. -9, -5, 0, 8 Sort 4, -5, -4, -40. -40, -5, -4, 4 Put 4, 5, -162, -11 in descending order. 5, 4, -11, -162 Put -26, -3/7, -0.08, -0.4 in increasing order. -26, -3/7, -0.4, -0.08 Sort -4, 2, 3, -78 in descending order. 3, 2, -4, -78 Sort -5, -710, 4 in descending order. 4, -5, -710 Sort 10, 7, 4, -3 in decreasing order. 10, 7, 4, -3 Sort -2, 4, -66. -66, -2, 4 Sort 6, 0.5, 122/5 in decreasing order. 122/5, 6, 0.5 Sort -4, 20, -3/5, 1/6, 2 in descending order. 20, 2, 1/6, -3/5, -4 Sort -2, -228, 4, -3 in increasing order. -228, -3, -2, 4 Sort 0.66, -3, -0.5 in decreasing order. 0.66, -0.5, -3 Sort 2, -7, -5, 0, 3 in descending order. 
3, 2, 0, -5, -7 Put 2, -9.6, -1, 1/4 in increasing order. -9.6, -1, 1/4, 2 Put -0.4, 51/5, 0.2 in decreasing order. 51/5, 0.2, -0.4 Sort -2, 3, 5, 14 in decreasing order. 14, 5, 3, -2 Put 3, -18, -2/3, 0.3 in ascending order. -18, -2/3, 0.3, 3 Sort 5, 1, 53, 0.1 in decreasing order. 53, 5, 1, 0.1 Sort 3, 0, -3, 230 in increasing order. -3, 0, 3, 230 Put -0.3, 20/209, -5 in increasing order. -5, -0.3, 20/209 Sort 5, -5, 14. -5, 5, 14 Put 4, 73, -4, 0 in descending order. 73, 4, 0, -4 Sort -2.207, 1/3, 2/3. -2.207, 1/3, 2/3 Put 0.4, -1/17, 5/6, 3/2, -4 in increasing order. -4, -1/17, 0.4, 5/6, 3/2 Sort 2/3, -0.2, 2, 4.3, -1 in ascending order. -1, -0.2, 2/3, 2, 4.3 Put 13/5, 1/7, -2, -2/5, 2/5 in descending order. 13/5, 2/5, 1/7, -2/5, -2 Sort -4, 2/3, -2/5, -1, -7 in increasing order. -7, -4, -1, -2/5, 2/3 Sort -3, 3, -5, -108 in ascending order. -108, -5, -3, 3 Put 6, -15, 5 in decreasing order. 6, 5, -15 Put 2, -4, -1, 67 in decreasing order. 67, 2, -1, -4 Sort 0.3, 0.004, 2/3, -1/5, -5. -5, -1/5, 0.004, 0.3, 2/3 Sort 1/7, -2, -15. -15, -2, 1/7 Put 5, 8, -5, -4 in ascending order. -5, -4, 5, 8 Sort -2, -12, -4, -1 in descending order. -1, -2, -4, -12 Sort 3, 73, -35 in increasing order. -35, 3, 73 Sort -12, -5, -6, 4 in ascending order. -12, -6, -5, 4 Sort -2/3, -0.1, 41, -1 in descending order. 41, -0.1, -2/3, -1 Put 5, -5, 34 in descending order. 34, 5, -5 Sort 1, -92, -9. -92, -9, 1 Sort 1, -233, -2/29 in increasing order. -233, -2/29, 1 Sort -0.1423, -0.3, 0.2, -7 in ascending order. -7, -0.3, -0.1423, 0.2 Put -2, 11.4, -5, 3 in decreasing order. 11.4, 3, -2, -5 Put -2, 3, 25, 5 in increasing order. -2, 3, 5, 25 Put -25, -3, 2 in descending order. 2, -3, -25 Sort -16, 2, -3, -1 in ascending order. -16, -3, -1, 2 Sort 0, 3, -14, 5, -2 in decreasing order. 5, 3, 0, -2, -14 Put 4, -1279, -1 in increasing order. -1279, -1, 4 Sort 70, 1, -4 in increasing order. -4, 1, 70 Sort -0.5, -4, 3/16, -0.2, 5 in decreasing order. 
5, 3/16, -0.2, -0.5, -4 Sort -2, -64, 4, 29, 5. -64, -2, 4, 5, 29 Sort 4, 20, 2, -4 in increasing order. -4, 2, 4, 20 Sort 15, 30, 4 in decreasing order. 30, 15, 4 Put 1, 109, -4, -2 in descending order. 109, 1, -2, -4 Sort 3, -3, 0.69, -1. -3, -1, 0.69, 3 Put -6, -2/3, 116 in descending order. 116, -2/3, -6 Put -2, -24, -1/5, 5 in decreasing order. 5, -1/5, -2, -24 Sort 0, -5, 12 in decreasing order. 12, 0, -5 Sort 3, 0, 12, 2, -2 in ascending order. -2, 0, 2, 3, 12 Sort -2, 9, 5 in decreasing order. 9, 5, -2 Put -2, -8, 2, 4, -6 in ascending order. -8, -6, -2, 2, 4 Sort 0, -7, 2, 312, 5. -7, 0, 2, 5, 312 Sort -23, 2, -9 in decreasing order. 2, -9, -23 Sort 5, -5, -44 in decreasing order. 5, -5, -44 Sort 329, -1, 3 in descending order. 329, 3, -1 Sort 4, -35, -5, 5 in decreasing order. 5, 4, -5, -35 Sort -34, -6, 0 in descending order. 0, -6, -34 Sort 2/23, 95, 2/7. 2/23, 2/7, 95 Put 1, 9, 3, -5, 5 in descending order. 9, 5, 3, 1, -5 Sort 2, 7, 12, 4 in descending order. 12, 7, 4, 2 Put -3, -0.4, 0.4, -54 in decreasing order. 0.4, -0.4, -3, -54 Sort -3, -5, 416 in decreasing order. 416, -3, -5 Sort -3, 105, 2. -3, 2, 105 Put 51, -18, -1/9 in descending order. 51, -1/9, -18 Sort -5, -2, -1/15 in decreasing order. -1/15, -2, -5 Sort 1, -4, 68 in increasing order. -4, 1, 68 Put 1, -179, -4 in descending order. 1, -4, -179 Sort 0, 2, 17, -8. -8, 0, 2, 17 Put -1/6, 24, 0.31, -0.1 in ascending order. -1/6, -0.1, 0.31, 24 Sort -2, -3, 0 in increasing order. -3, -2, 0 Sort -4, -5, 5, 103. -5, -4, 5, 103 Sort 2, -2/7, -6/7, -6/5 in ascending order. -6/5, -6/7, -2/7, 2 Put -3, 6, 1, -18 in ascending order. -18, -3, 1, 6 Put 0.2, 6, -2, -1 in increasing order. -2, -1, 0.2, 6 Sort 210, -2, 2 in descending order. 210, 2, -2 Put -4, 26, -35 in descending order. 26, -4, -35 Sort -2, 5, 36, -4. -4, -2, 5, 36 Put 17, 0, -0.1, 70 in descending order. 70, 17, 0, -0.1 Put -5, -14, -1/3, 1 in increasing order. -14, -5, -1/3, 1 Sort 7, 2, -112 in decreasing order. 
7, 2, -112 Sort 2/69, 4, 0.2 in ascending order. 2/69, 0.2, 4 Put 11, 15, -5, -3 in ascending order. -5, -3, 11, 15 Sort -2/7, -0.1, -1 in decreasing order. -0.1, -2/7, -1 Sort 3, -6, -4, 13. -6, -4, 3, 13 Sort 4, -5, -4, 25, -3 in descending order. 25, 4, -3, -4, -5 Sort -5/8, -2, 0. -2, -5/8, 0 Sort -1, 227, 12, -2 in descending order. 227, 12, -1, -2 Put 2/17, -3, 0, 1/4 in descending order. 1/4, 2/17, 0, -3 Sort -0.3, -279, 2/21. -279, -0.3, 2/21 Put -17, 5, 1 in ascending order. -17, 1, 5 Put 15, 3/8, -4, -2 in descending order. 15, 3/8, -2, -4 Sort 8, 3, 31, 4. 3, 4, 8, 31 Put -0.021, 1/6, 1/4 in increasing order. -0.021, 1/6, 1/4 Sort -0.1, -7/20, 1, -0.12 in decreasing order. 1, -0.1, -0.12, -7/20 Put 95, 1/5, -1 in ascending order. -1, 1/5, 95 Put 0.5, -2, -0.23, -1/4 in descending order. 0.5, -0.23, -1/4, -2 Put -1/5, -1023, -1 in descending order. -1/5, -1, -1023 Put 0, -1, 16 in decreasing order. 16, 0, -1 Sort -0.12, -3, -42 in increasing order. -42, -3, -0.12 Put -0.4, -0.2, 0.05, 1, 4 in increasing order. -0.4, -0.2, 0.05, 1, 4 Sort 0, -1.5, -2/145 in increasing order. -1.5, -2/145, 0 Sort -2/5, 0.4, 22/3, -3 in increasing order. -3, -2/5, 0.4, 22/3 Put 0, 1, 5, 4 in ascending order. 0, 1, 4, 5 Sort -5, 5, -2700 in decreasing order. 5, -5, -2700 Sort -3, 1601, -2, -1. -3, -2, -1, 1601 Sort -2, 3, 90. -2, 3, 90 Put -10, -4, 63 in decreasing order. 63, -4, -10 Put 0, 1, -3, -361 in decreasing order. 1, 0, -3, -361 Sort -0.2, 2, -558.7 in ascending order. -558.7, -0.2, 2 Put -4513, 2, -3 in descending order. 2, -3, -4513 Put 0, -5, -0.37, -0.5 in increasing order. -5, -0.5, -0.37, 0 Sort -5971, 0, 4. -5971, 0, 4 Put 633, 4, -5, 0 in decreasing order. 633, 4, 0, -5 Sort -0.3, -2/33, -1/6. -0.3, -1/6, -2/33 Put -24, -2, 4, -3 in decreasing order. 4, -2, -3, -24 Sort 2/7, 0.4, 1/23, 0 in ascending order. 0, 1/23, 2/7, 0.4 Sort -3, 774, 2, -2 in descending order. 774, 2, -2, -3 Put -2/307, -0.5, 9 in increasing order. 
-0.5, -2/307, 9 Sort 5, 3, -4, -3, -7 in increasing order. -7, -4, -3, 3, 5 Sort 53, -3, 4 in ascending order. -3, 4, 53 Sort 19, 3, -97 in increasing order. -97, 3, 19 Put 1, -2, -2/5, -4/9 in ascending order. -2, -4/9, -2/5, 1 Sort -5, 3, 0, -4. -5, -4, 0, 3 Sort 3, -12.7, 10, -2, 4/5. -12.7, -2, 4/5, 3, 10 Sort 1, -1, -21 in decreasing order. 1,
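All of these drills are plain comparison sorts. As an aside (not part of the original exercise set), mixed integers, decimals and fractions like the ones above can be sorted exactly in Python by converting every value to a Fraction before comparing:

```python
from fractions import Fraction

def sort_values(values, descending=False):
    """Sort numbers given as ints, floats or strings like '14/3' exactly."""
    # Fraction(str(v)) parses '163', '-0.3' and '14/3' alike, so the
    # mixed representations compare exactly, with no float rounding.
    return sorted(values, key=lambda v: Fraction(str(v)), reverse=descending)

print(sort_values([-0.3, 163, "14/3", -5]))           # [-5, -0.3, '14/3', 163]
print(sort_values([12, 0.2, -1.8], descending=True))  # [12, 0.2, -1.8]
```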
{ "pile_set_name": "DM Mathematics" }
Q: how to call data from activity to another without the button - only display the data inside the database I want to display the data from CompanyDetail in another activity, hiding the buttons that CompanyDetail has. The second activity should show the CompanyDetail data (without the buttons) when the user clicks a ListView item. How can I code it? I'm new to Android Studio. This is my code for CompanyDetail:

public class CompanyDetail extends Activity implements View.OnClickListener {

    Button cbtnsave, cbtndelete, cbtnclose;
    EditText etcomname, etcomstand, etcomrep;
    EditText etcomcont, etcommail;
    private int _Company_Id = 0;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_company_detail);
        cbtnsave = (Button) findViewById(R.id.cbtnSave);
        cbtndelete = (Button) findViewById(R.id.cbtnDelete);
        cbtnclose = (Button) findViewById(R.id.cbtnClose);
        etcomname = (EditText) findViewById(R.id.eTcomname);
        etcomstand = (EditText) findViewById(R.id.eTcomstand);
        etcomrep = (EditText) findViewById(R.id.eTcomrep);
        etcomcont = (EditText) findViewById(R.id.eTcomcont);
        etcommail = (EditText) findViewById(R.id.eTcommail);
        _Company_Id = 0;
        Intent intentc = getIntent();
        _Company_Id = intentc.getIntExtra("company_Id", 0);
        CompanyCrud ccrud = new CompanyCrud(this);
        Company company = new Company();
        company = ccrud.getCompanyById(_Company_Id);
        etcomname.setText(company.cname);
        etcomstand.setText(company.cstanding);
        etcomrep.setText(company.crepres);
        etcomcont.setText(company.ccontact);
        etcommail.setText(company.cemail);
    }

    public void onClick(View view) {
        if (view == findViewById(R.id.cbtnSave)) {
            CompanyCrud ccrud = new CompanyCrud(this);
            Company company = new Company();
            company.cname = etcomname.getText().toString();
            company.cstanding = etcomstand.getText().toString();
            company.crepres = etcomrep.getText().toString();
            company.ccontact = etcomcont.getText().toString();
company.cemail=etcommail.getText().toString(); if (_Company_Id==0){ _Company_Id=ccrud.insert(company); Toast.makeText(this, "New Company Created", Toast.LENGTH_SHORT).show(); } else { ccrud.update(company); Toast.makeText(this, "Company Updated", Toast.LENGTH_SHORT).show(); } }else if (view == findViewById(R.id.cbtnDelete)){ CompanyCrud ccrud = new CompanyCrud(this); ccrud.delete(_Company_Id); Toast.makeText(this, "Company Deleted", Toast.LENGTH_SHORT).show(); finish(); }else if ( view == findViewById(R.id.cbtnClose)){ finish(); } } @Override public boolean onCreateOptionsMenu(Menu menu) { // Inflate the menu; this adds items to the action bar if it is present. getMenuInflater().inflate(R.menu.menu_company_detail, menu); return true; } @Override public boolean onOptionsItemSelected(MenuItem item) { // Handle action bar item clicks here. The action bar will // automatically handle clicks on the Home/Up button, so long // as you specify a parent activity in AndroidManifest.xml. int id = item.getItemId(); //noinspection SimplifiableIfStatement if (id == R.id.action_settings) { return true; } return super.onOptionsItemSelected(item); } } now this is my code for Crud operation of CompanyDetail public class CompanyCrud { private DBHelper dbHelper; public CompanyCrud(Context context) { dbHelper = new DBHelper(context);} public int insert(Company company){ SQLiteDatabase db = dbHelper.getWritableDatabase(); ContentValues values = new ContentValues(); values.put(Company.KEY_cname, company.cname); values.put(Company.KEY_cstanding, company.cstanding); values.put(Company.KEY_crepres, company.crepres); values.put(Company.KEY_ccontact, company.ccontact); values.put(Company.KEY_cemail, company.cemail); long company_Id = db.insert(Company.TABLE, null, values); db.close(); return (int) company_Id; } public void delete(int company_Id) { SQLiteDatabase db = dbHelper.getWritableDatabase(); db.delete(Company.TABLE,Company.KEY_ID + "=?", new String[]{ String.valueOf(company_Id) }); 
db.close(); } public void update(Company company) { SQLiteDatabase db = dbHelper.getWritableDatabase(); ContentValues values = new ContentValues(); values.put(Company.KEY_cname, company.cname); values.put(Company.KEY_cstanding, company.cstanding); values.put(Company.KEY_crepres, company.crepres); values.put(Company.KEY_ccontact, company.ccontact); values.put(Company.KEY_cemail, company.cemail); db.update(Company.TABLE, values, Company.KEY_ID + "= ?", new String[]{String.valueOf(company.company_ID)}); db.close(); } public ArrayList<HashMap<String, String>> getCompanytList () { SQLiteDatabase db = dbHelper.getReadableDatabase(); String selectQuery = "SELECT " + Company.KEY_ID + "," + Company.KEY_cname + "," + Company.KEY_crepres + "," + Company.KEY_ccontact + "," + Company.KEY_cemail + " FROM " + Company.TABLE; ArrayList<HashMap<String, String>>companyList = new ArrayList<HashMap<String, String>>(); Cursor cursor= db.rawQuery(selectQuery, null); if (cursor.moveToFirst()){ do{ HashMap<String , String > company = new HashMap<String, String >(); company.put("cid", cursor.getString(cursor.getColumnIndex(Company.KEY_ID))); company.put("cname", cursor.getString(cursor.getColumnIndex(Company.KEY_cname))); companyList.add(company); } while (cursor.moveToNext()); }cursor.close(); db.close(); return companyList; } public Company getCompanyById(int cid) { SQLiteDatabase db = dbHelper.getReadableDatabase(); String selectQuery = "SELECT " + Company.KEY_ID + "," + Company.KEY_cname + "," + Company.KEY_crepres + "," + Company.KEY_ccontact + "," + Company.KEY_cemail + " FROM " + Company.TABLE + " WHERE " + Company.KEY_ID + "=?"; int ictrc =0; Company company = new Company(); Cursor cursor = db.rawQuery(selectQuery, new String[]{ String.valueOf(cid)}); if (cursor.moveToFirst()){ do { company.company_ID=cursor.getInt(cursor.getColumnIndex(Company.KEY_ID)); company.cstanding=cursor.getString(cursor.getColumnIndex(Company.KEY_cstanding)); 
company.crepres=cursor.getString(cursor.getColumnIndex(Company.KEY_crepres)); company.ccontact= cursor.getString(cursor.getColumnIndex(Company.KEY_ccontact)); company.cemail = cursor.getString(cursor.getColumnIndex(Company.KEY_cemail)); }while (cursor.moveToNext()); } cursor.close(); db.close(); return company; } } and this is the class file where i want to display the data and i have nothing on it cause i dont know what i am going to do.. i look for answer using bundle... putextras.. getextras.. i dnt know how to do it and where am i going to put it.. public class DisplayData extends ActionBarActivity { @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_display_data); } @Override public boolean onCreateOptionsMenu(Menu menu) { // Inflate the menu; this adds items to the action bar if it is present. getMenuInflater().inflate(R.menu.menu_display_data, menu); return true; } @Override public boolean onOptionsItemSelected(MenuItem item) { // Handle action bar item clicks here. The action bar will // automatically handle clicks on the Home/Up button, so long // as you specify a parent activity in AndroidManifest.xml. int id = item.getItemId(); //noinspection SimplifiableIfStatement if (id == R.id.action_settings) { return true; } return super.onOptionsItemSelected(item); } } A: Uninstall and reinstall your app This is because, once your app is deployed in the emulator or testing device, and after that if you make any changes to the structure of the database, it wont be reflected and you get an error. when you reinstall it, the database is created as a new one and thus no error..!!! UPDATE @DerGolem mentioned a valuable comment above If you modified the database since it's original creation, then be sure you have an onUpgrade() method. And that you set the DATABASE_VERSION constant to a higher number. Thanks @DerGolem for pointing out that as I was really unaware about that fact..
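For reference, the pattern the question actually asks about — passing only the row id to the second activity with putExtra, then loading the record into read-only views — can be sketched roughly as below. This is only a sketch, not the asker's code: the list-activity class name, the `companyList` field, and the `tvName`/`tvContact` TextView ids are assumptions; the `CompanyCrud` and `DisplayData` names come from the question.

```java
// In the ListView activity (hypothetical class name): send only the clicked id.
listView.setOnItemClickListener(new AdapterView.OnItemClickListener() {
    @Override
    public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
        // companyList is assumed to hold the maps returned by getCompanytList().
        HashMap<String, String> item = companyList.get(position);
        Intent intent = new Intent(CompanyListActivity.this, DisplayData.class);
        intent.putExtra("company_Id", Integer.parseInt(item.get("cid")));
        startActivity(intent);
    }
});

// In DisplayData.onCreate(): read the id back and fill read-only TextViews,
// so no Save/Delete/Close buttons are needed in activity_display_data.xml.
int companyId = getIntent().getIntExtra("company_Id", 0);
Company company = new CompanyCrud(this).getCompanyById(companyId);
((TextView) findViewById(R.id.tvName)).setText(company.cname);       // assumed id
((TextView) findViewById(R.id.tvContact)).setText(company.ccontact); // assumed id
```

The key idea is that the second activity does not receive the whole object, only the primary key; it re-queries the database itself, so its layout can use plain TextViews instead of the EditTexts and buttons of CompanyDetail.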