https://esl.hohoweiya.xyz/07-Model-Assessment-and-Selection/7.5-Estimates-of-In-Sample-Prediction-Error/index.html
7.5 Estimates of In-Sample Prediction Error. weiya's note: 1. Efron, B. (1986). How biased is the apparent error rate of a prediction rule?, Journal of the American Statistical Association 81: 461–70.
http://math.stackexchange.com/questions/218748/symbol-for-wlog
# Symbol for WLOG

Does anybody know a commonly used symbol for WLOG (without loss of generality)? I'm not comfortable with typing the whole thing every time, and the abbreviation is just a compromise. If there is one for QED, why shouldn't there be one for WLOG? :)

Find what WLOG is in Latin, and then abbreviate it? Maybe? If Google Translate is right, it would be absque amissione generalitate, which would become AAG... – Jean-Sébastien Oct 22 '12 at 15:03

What's wrong with WLOG? (Well, okay, if you write in German you should probably prefer o.B.d.A. or o.E.d.A., ohne Beschränkung/Einschränkung der Allgemeinheit. :-)) – Brian M. Scott Oct 22 '12 at 15:05

I don't understand the downvote on this one. It is a perfectly fine question with a clear answer. – Thomas Oct 22 '12 at 15:14

You could try $\log_w$. But seriously, I dislike the use of WLOG (or what it stands for) as a somewhat careless way of writing that puts an additional burden on the reader (because there is loss of something, namely freedom; for instance, if you've used some symmetry to justify the WLOG, you can't use the same kind of reduction again), and if you are using it all the time, you might want to adopt a different style. – Marc van Leeuwen Oct 22 '12 at 15:38

@Marc: It can certainly be misused, but eschewing it entirely puts a different additional burden on the reader: unnecessary clutter. – Brian M. Scott Oct 22 '12 at 15:54

---

To answer your question: No, there is no commonly used symbol for WLOG.

On the contrary, WLOG is a perfectly common symbol for WLOG ;) – Arkamis Oct 22 '12 at 15:26

@Ed: With WOLOG stumbling woefully along behind it. :-) – Brian M. Scott Oct 22 '12 at 15:28

All right, I guess I'll create my own symbol for that. – Count Zero Oct 22 '12 at 16:02

Perhaps SLOG, for sans loss of generality. – Arkamis Oct 22 '12 at 16:21

@EdGorcenski: or KG, for keeping generality. – Thomas Oct 22 '12 at 17:07

---

In German, the equivalent phrases "ohne Beschränkung der Allgemeinheit" (oBdA) or "ohne Einschränkung" ("without restriction of generality") are sometimes denoted by Œ (an O-E ligature): Œ $a<b \Rightarrow \ldots$

---

Since there doesn't seem to be any such symbol, I decided to create one by manipulating $\forall$ and $\exists$. Any thoughts?

No thoughts, only downvotes. Of course, if any site were going to introduce new math notation, shouldn't it be this one? – The Chaz 2.0 Oct 22 '12 at 18:03

@TheChaz: I don't think that this site should try to introduce notational conventions. I also don't think that WLOG should have a symbol. I think that if you're still at the stage where you try to write a proof only in mathematical symbols, then you are not ready to use WLOG in your proofs. (I didn't vote on this answer, by the way, neither up nor down.) – Asaf Karagila Oct 22 '12 at 18:06

I don't think that WLOG needs a new symbol, because I find that the phrase "without loss of generality" is almost always used in written text, not symbolic notation. The phrase itself means, more or less, that one may restrict one's attention to a specific (usually simpler) case without losing completeness or rigor in the presentation. If used symbolically, it is not necessarily clear that this is true. WLOG is also a dangerous phrase; its use can be as haphazard and incorrect as "clearly" and "obviously". – Arkamis Oct 22 '12 at 18:15

What if you describe protocols or algorithms? I've seen several cases where WLOG is mixed with other math notation. I prefer to have either only symbols or things spelled out. Spiking it with SFLAs is not to my liking. Because I prefer to keep such things concise, I'd rather have a new symbol than WLOG. – Count Zero Oct 22 '12 at 18:24

@TheChaz: Thanks for "No thoughts, only downvotes". – Count Zero Oct 22 '12 at 18:26
https://www.digi.com/resources/documentation/digidocs/embedded/dey/3.0/cc6plus/yocto_t_create-build-projects
## Create projects

Before you build Digi Embedded Yocto, you need to create a platform-specific project. If you are using the Digi Embedded Yocto Docker container, the startup script offers to create a new project. If you have already created one, go directly to Build images. You can also follow these steps to create a project manually.

Use the mkproject.sh script to list the supported platforms:

~$ source /usr/local/dey-3.0/mkproject.sh -l

To initialize the project and environment, use the mkproject.sh script. For example, for the ConnectCore 6 Plus SBC, do the following:

~$ mkdir -p ${HOME}/workspace/ccimx6qpsbc
~$ cd ${HOME}/workspace/ccimx6qpsbc
~$ source /usr/local/dey-3.0/mkproject.sh -p ccimx6qpsbc

This initializes the project with a conf directory and two configuration files:

• bblayers.conf: The available layers are configured here.
• local.conf: Local configuration variables affecting only this project are customized here.

The mkproject.sh script sets the environment for the build in the current terminal. It also creates a dey-setup-environment script in the project's root folder. This script can safely be rerun over existing projects to set up the build environment in a new terminal. If you close your current terminal and open a new one, you must run the dey-setup-environment script before you use Digi Embedded Yocto.

## Update existing projects

When updating your installation of Digi Embedded Yocto, you need to erase the tmp and sstate-cache directories from existing projects and build them from scratch. Leaving the directories intact may result in problems in the build and the final images.

## Build images

By default, ConnectCore 6 Plus Digi Embedded Yocto images include the XWayland desktop backend, but it is possible to build framebuffer-based images. To do so, edit your project's conf/local.conf file and add the following line:

DISTRO_FEATURES_remove = "x11 wayland vulkan"

This removes the XWayland window system, including all related packages such as pulseaudio and the XWayland gstreamer plugins. It also makes the rootfs image smaller and increases the amount of free memory in the system, making framebuffer-based images ideal for ConnectCore 6 Plus variants with smaller memory configurations.

To build Digi Embedded Yocto images, use the command bitbake <image-recipe> from your project's directory. For example:

~$ bitbake dey-image-qt

The compilation can take several hours, even on a powerful state-of-the-art workstation, depending on the selected image recipe.

## Inspect build deliverables

You can find the generated images inside your project's directory, in the <project_folder>/tmp/deploy/images/<platform> folder. This directory contains the following files:

• Boot image, with the boot.vfat file extension, which contains the Linux kernel, device trees, and U-Boot scripts
• Root file system images in the following formats:
  • rootfs.ext4, an ext4 partition image that can be programmed directly into the eMMC or an SD card
  • rootfs.sdcard, an SD card image you can use to create a bootable SD card
  • rootfs.tar.bz2, a compressed root file system tarball that you can use to set up a remote NFS share to boot from
  • rootfs.manifest, a text file with the list of all the built packages
• Recovery image, with the recovery.vfat file extension, which contains the recovery Linux kernel, device tree files, and U-Boot scripts
• U-Boot images, with the imx file extension
• Linux kernel images, with the bin file extension, which you can use to update an existing boot partition
• Linux kernel device tree images, with the dtb file extension, which you can use to update an existing boot partition

## Build a software update package

To build a software update package, use the following command from your project's directory:

~$ bitbake dey-image-qt-swu

This generates the update package under <project_folder>/tmp/deploy/images/<platform>:

dey-image-qt-swu-<platform>-<timestamp>.swu

To install the update package on your device, see Program firmware from Linux.
http://mathhelpforum.com/algebra/61190-quadratic-equation-print.html
• Nov 23rd 2008, 01:21 PM Larianne
Hi, I don't think I'm getting the correct roots for this equation... $3x^2 - 2x - 6 = 0$. I had to use the quadratic formula and I'm getting $x = 1.79$ or $x = -1.12$. That can't be right???

• Nov 23rd 2008, 01:30 PM euclid2
$3x^2 - 2x - 6 = 0$. Use the quadratic formula $$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$$ where $a$ is 3, $b$ is $-2$ and $c$ is $-6$.

• Nov 23rd 2008, 01:34 PM o_O
Let's format that a little better: $$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$$ Why don't you think that's right? You rounded off to a few decimal places, which will throw off your answer a little bit, but it is correct (again, rounded of course).

• Nov 23rd 2008, 01:35 PM Larianne
Yes, I've done that, but I put my answers back into the equation to check and I'm not getting zero.

• Nov 23rd 2008, 01:43 PM euclid2
$$x = \frac{2 \pm \sqrt{(-2)^2 - 4(3)(-6)}}{2(3)} = \frac{2 \pm \sqrt{76}}{6}$$ so $$x = \frac{2 + \sqrt{76}}{6} \quad \text{or} \quad x = \frac{2 - \sqrt{76}}{6}$$ Therefore $x = 1.79$ or $x = -1.12$ (to two decimal places).

• Nov 23rd 2008, 01:43 PM Larianne
Should it not work out evenly???
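A quick numerical check (a few lines of Python, added here as an illustration) shows why substituting the rounded answers back in doesn't give exactly zero, while the exact roots do:

```python
from math import sqrt

a, b, c = 3, -2, -6              # coefficients of 3x^2 - 2x - 6 = 0
disc = b**2 - 4*a*c              # discriminant: 4 + 72 = 76
roots = [(-b + sqrt(disc)) / (2*a), (-b - sqrt(disc)) / (2*a)]
print(roots)                     # [1.7863..., -1.1196...]

# exact roots give ~0 (up to floating point); the 2-d.p. roundings miss zero slightly
for x in roots + [1.79, -1.12]:
    print(x, "->", a*x**2 + b*x + c)
```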
http://www.caterpillarproject.org/motivation/
Motivation

Overview

Despite a wealth of observational information on the properties of galaxies, a self-consistent, detailed theoretical model of galaxy formation does not yet exist. The evolution of galaxies has been examined in great detail, and in many wavelength bands, from approximately six hundred million years after the Big Bang to the present day. These observations show that the galaxies we can see have undergone radical changes in size, appearance, and content over the last thirteen billion years. Complementary observations have provided a rich data set on the kinematics and elemental abundances of stars in our own Milky Way, including large numbers of metal-poor stars in the halo of our own galaxy and in local dwarf galaxies. In addition, a large amount of debris from tidally disrupted satellite galaxies, in the form of stellar streams, has been observed, preferentially in the outer regions of the Galaxy. In principle, this "galactic fossil record" can probe the entire merger and star formation history of the Milky Way and its satellites, and complement direct observations at high redshift. It is important to note that the Milky Way is the only system (for now) for which we have direct access to this information, because only in our Galaxy and its nearby satellites can we measure the properties of millions of individual stars. Thus, a comprehensive understanding of how our own Galaxy formed and evolved represents a critical step towards understanding the process of galaxy formation in the general cosmological context.

Inspired by these arguments, over the past decades several authors have compared observed properties of the Milky Way with those obtained from high-resolution N-body simulations of the formation of Milky Way-like galaxies. However, recent studies seem to indicate that our Galaxy may not be a typical galaxy after all. For example, observations of a large sample of Sloan Digital Sky Survey (SDSS) galaxies have shown that the Milky Way has significantly more satellites than a typical galaxy of its luminosity (Busha et al., 2011; Liu et al., 2011). Results such as these cause us to pose the following important questions: To what extent can the Milky Way be regarded as a template for galaxies of its type? How dependent are properties such as the chemical composition, or the amount of substructure observed in the stellar halo, on the particular formation history of our own Galaxy? To fully interpret currently available observations, as well as the large amount of data that will soon become available from surveys such as SkyMapper, LAMOST, and LSST, it is essential to understand whether or not the Milky Way is a typical galaxy of its kind.

Scientific Motivation

Under the current paradigm of structure formation (White & Rees, 1978), the accretion and merger of satellite galaxies is expected to play an important role in shaping and structuring the present-day configuration of a host galaxy. Stellar halos of large galaxies such as our own are believed to be primarily formed through the accumulation of tidal debris associated with ancient as well as recent and ongoing accretion events (Helmi, 2008). In principle, these galactic fossil records can probe the entire merger and star formation history of the Milky Way and its satellites (Freeman & Bland-Hawthorn, 2002). Information is encoded not only in the dynamical distribution of the different Galactic components, but also in the chemical abundance patterns of the individual stars.
After decades in which only a relatively small number of stars had been observed in detail, several surveys such as SEGUE (Yanny et al., 2009) and RAVE (Steinmetz et al., 2006) have collected photometric, astrometric, and spectroscopic information on vast samples of stars, primarily in the Galactic disk and stellar halo. The quantity and quality of observational data, already staggering, is going to increase exponentially over the next decade. Surveys such as Gaia (Perryman et al., 2001), LAMOST (Cui et al., 2012), the Dark Energy Survey (The Dark Energy Survey Collaboration, 2005), the Gaia-ESO spectroscopic survey (Gilmore et al., 2012) and the Large Synoptic Survey Telescope (LSST, Ivezic et al. 2008) will provide measurements of positions, velocities and chemical compositions of billions of stars that will strongly inform our understanding of galaxy formation. Unraveling the formation history of the Milky Way requires the development of theoretical and statistical tools that can be used to contrast detailed numerical models with this wealth of observations.

Recently, two groups have performed N-body simulations of the growth of Milky Way-type halos to examine structure formation and the hierarchical assembly of large galaxies in unprecedented detail: the Aquarius project of Springel et al. (2008) and the Via Lactea models of Diemand et al. (2007). These works have made it possible to quantify the substructure abundance as a function of subhalo mass, and they support the view that there is indeed a missing satellite problem. However, both the Aquarius and Via Lactea projects are limited in a number of respects. The Aquarius runs adopted cosmological parameters that are now observationally disfavored at a high level of significance by studies of the cosmic microwave background (Jarosik et al., 2011). It is problematic to extrapolate these results to the non-linear scales of individual galaxies to determine quantitatively the severity of the missing satellite problem. More seriously, the Aquarius project consists of a sample of only six well-resolved Milky Way-mass halos, while the Via Lactea study focused on only one such halo. It is apparent from these simulations alone that there is significant halo-to-halo scatter not only in the substructure abundance, but also in the observational properties of the host galaxies and their luminous satellites (e.g. Cooper et al., 2010; Lunnan et al., 2011).

The method typically used to connect simulations with observational data is semi-analytic modelling. This approach is exemplified in the work of Bullock & Johnston and collaborators (Bullock et al., 2000; Bullock & Johnston, 2005; Robertson et al., 2005; Johnston et al., 2008; Kazantzidis et al., 2008), who use the Extended Press-Schechter formalism to create a "merger tree" that follows the merger history of the Milky Way and its progenitor galaxies. This merger tree is used as the basis for a model of galaxy formation and evolution that includes prescriptions for star formation, gas evolution, radiation transport, and related physics. A more sophisticated and accurate implementation of this technique has been developed by Tumlinson (Tumlinson, 2006, 2010; Okrochkov & Tumlinson, 2010), based on highly resolved N-body cosmological simulations rather than an analytic formalism for mergers. As a result, this method provides spatial and kinematic information for "live" halos, as well as merger histories.
The Tumlinson ChemTreeN model includes physics prescriptions similar to those used by Bullock & Johnston, but it also follows the detailed chemical evolution of the stellar populations and gas residing in the dark matter halos, and allows the prediction of distributions of stellar populations and dwarf galaxies around the Milky Way. The major limitations of using these dark matter simulations coupled to semi-analytic models are threefold. Firstly, the large number of free parameters (at least 10) that describe gas physics and chemical evolution makes it difficult for such models to be truly predictive. The flip side of this drawback is that these models can explore the effects of uncertain physics much more effectively than multiphysics hydrodynamical simulations: the semi-analytic models are particularly good at identifying how observational quantities respond to changes in input physics (e.g. Gómez et al. 2012). This is their major advantage over fully self-consistent, extremely computationally expensive, hydrodynamical cosmological simulations. Secondly, most dark matter-only cosmological simulations used in these models have suffered from the limitation of finite computational power: calculations that were tractable typically had insufficient mass and spatial resolution to probe the entire mass function of all progenitor galaxies, or to resolve substructure in the resulting stellar halos left over from the multiple accretion events the simulated galaxy has undergone. Thirdly, and most critically, as previously stated, all studies have considered only a handful of simulations, which neglects the dependence of the properties of the resulting galaxies on their formation histories.

To make meaningful advances in our understanding of the Milky Way's formation history we must perform a large number of numerical simulations of Milky Way-like halos, densely sampling the full range of possible formation histories. This must be coupled with our state-of-the-art semi-analytic model to make robust predictions about the luminous components of the Galaxy itself and its dwarf satellites. In order to statistically explore the high-dimensional semi-analytic parameter space and its associated uncertainties, we have developed a set of statistical tools to identify how different parameters are tied to specific observables. As shown in Gómez et al. (2012), statistical emulators can rapidly give predictions of model outputs, and an attendant measure of their uncertainty, at any point in the parameter space. The numerical implementation of these emulators is very computationally efficient, making it feasible to predict vast numbers of model outputs in a short period of time. By defining a statistical measure of plausibility, and contrasting model emulators with mock observational data, we have demonstrated that it is possible to recover the input parameter vector used to create the mock observables (see below). Furthermore, we have performed a sensitivity analysis on ChemTreeN (via the emulators) to statistically characterize its input-output relationship (Gómez et al., 2014). This analysis allowed us not only to densely explore the input parameter space for a set that could best reproduce a given observable, but also to identify which parameters have the largest impact on the prediction of any given observable. It also allowed us to simplify our model by identifying input parameters that have no effect on the available set of observables.
Area A: Statistically probing the merger history of the Milky Way

The details of the formation of our own Galaxy remain a puzzle. Thanks to the latest generation of stellar surveys it is now possible to study in detail how the Milky Way has evolved to become the galaxy we currently observe. These surveys have provided, and will continue to provide, a rich dataset on the kinematics and chemical abundances of stars that can be directly compared with detailed numerical models. A robust, statistical comparison between models and these observations is essential. In Gómez et al. (2012, 2014) we developed the statistical tools required to meaningfully compare chemo-dynamical models of galaxy formation to the very large set of available observables. The results presented in these works indicate that, for a given fiducial observational dataset, the best-fitting input parameter selection strongly depends on the underlying merger history of the mock Milky Way-like galaxy. Interestingly, while the best-fitting models were able to tightly reproduce the fiducial observables, only one of them successfully reproduced a second, independent set of observables. On the basis of this analysis it is possible to disregard certain models, and their corresponding merger histories, as good representations of the underlying merger history of the Milky Way. Nonetheless, a robust and statistically significant analysis requires the addition of a large set of possible merger histories. The suite of high-resolution dark matter-only simulations that we are proposing here will allow us to probe different galaxy merger histories, ranging from halos that acquire most of their mass very early on to halos that have had their last major merger episode close to z = 0. To create models of the Milky Way stellar halo and its satellite population, we will couple these simulations to an updated version of the semi-analytical model ChemTreeN. A statistically best-fitting model for each merger history will be obtained by comparing the mock stellar halos to a fiducial observable dataset. The resulting best-fitting models will be confronted with independent observational datasets, including quantities such as mean halo metallicity and chemical abundances as a function of radius, the radial distribution of satellite galaxies, and even the degree of phase-space substructure.

Area B: The origin and evolution of Milky Way-like satellite galaxies

The overarching questions we aim to address concern the nature of the "building blocks" of large galaxies, and to what extent dwarf galaxies play a role in the assembly of old stellar halos. This comes at a time when, in particular, the population of Milky Way ultra-faint dwarf galaxies (with $${\rm L_{tot} < 10^5\,L_\odot}$$), discovered in the Sloan Digital Sky Survey (SDSS) and the Dark Energy Survey (DES), have been shown to be extremely metal-deficient systems that host ∼25% of the known most metal-poor stars. Moreover, they extend the metallicity-luminosity relationship of the classical dwarfs down to $${\rm L_{tot} \sim 10^3\,L_\odot}$$ (see Kirby et al., 2008, for more details). Future observations will reveal how far this relationship can be extended. Due to their simple nature, the ultra-faint systems are expected to retain signatures of the earliest stages of chemical enrichment in their stellar populations. The chemical abundances of individual stars in the faintest galaxies suggest a close connection to equivalent, extremely metal-poor halo stars in the Galaxy.
Thus, there is a relation between the universe's first galaxies, the building blocks of the Milky Way, and the surviving dwarf satellites (Frebel & Bromm, 2012). However, the assembly of the old Galactic stellar halo, from the first galaxies to today, can only be studied and understood globally with detailed simulations in a cosmological context, if we are to conclusively learn about the role of dwarf galaxies in the hierarchical assembly of larger systems. To learn about the effects that influence the number of small subhalos over cosmic history, we have already begun to quantify the impact of reionization on the faintest halos using the six Aquarius DM halos (Griffen et al., 2010; Lunnan et al., 2011; Gómez et al., 2012; Griffen et al., 2013). Specifically, we tested the influence of different physically motivated reionization histories (Zahn et al., 2007; McQuinn et al., 2007) on the population of the smallest resolved satellites ($\sim 10^6\,{\rm M_\odot}$ halos) (see Figure 2). The effect is largest at the faint end (numbers of satellites vary by a factor of a few), while the brighter end (equivalent to the dSph galaxies) is unaffected by different reionization histories. Properly accounting for these physical effects can at least partly resolve the so-called missing satellite problem (Moore et al., 1999): the observed number of (luminous) Milky Way satellites appears to be significantly lower than the amount of dark matter substructure expected from CDM theory. However, the overall halo-to-halo scatter in the total amount of dark matter substructure for a Milky Way-like galaxy is yet to be robustly characterized. To study the impact of, e.g., feedback from baryonic processes or heating by cosmic radiation on low-mass dark matter halos, using observations from the Local Universe, it is key to quantify the influence of cosmic variance. By quantifying the level and extent of variations in the substructure around Milky Way galaxies (see Figure 3), we are in a position to focus on a number of specific details about the origin and evolution of subhalos with masses equivalent to those of a variety of observed dwarf galaxies, such as massive Magellanic Cloud-sized objects, classical dwarf spheroidal galaxies, and even fainter systems. The distribution of the Milky Way classical dwarfs and the Magellanic Clouds as they orbit the Galaxy is presented in Figure 4. These topics can be addressed solely with dark matter simulations, but the simulations can also be used in combination with semi-analytic prescriptions, such as ChemTreeN, for populating dark halos with luminous matter. The latter requires detailed knowledge of many physical processes that govern the evolution of a luminous halo. We will first use simplified empirical prescriptions to approximate these processes, but aim to later use results of new large-scale hydrodynamical ΛCDM simulations for the most physically motivated approach to light up our halos. This is an unparalleled opportunity to study the assembly of galaxy halos and to compare the results with the latest observations of dwarf galaxies, halo stars and stellar streams, both for the Milky Way and Andromeda.

Area C: Is the Milky Way unusual?

How well does the Milky Way represent a typical, large spiral galaxy? It is often used as a reference point, especially when comparing its general properties with those derived from cosmological simulations of the formation of large galaxies.
But recent observational and theoretical work has shown that our Galaxy may have at least a few unusual features. Examples are the existence of the long-known Magellanic Cloud satellites and the Galaxy's theoretically inferred accretion history. The Magellanic Clouds have recently garnered significant attention in this respect (Boylan-Kolchin et al., 2011a; Liu et al., 2010). Observational analyses using Sloan Digital Sky Survey data on many other large spiral galaxies confirmed that galaxies like the Milky Way are very unlikely to have two companions as bright as the Magellanic Clouds. Indeed, less than 5-10% of galaxies host two such bright companions, and more than 80% host no such satellites at all. Previously, Boylan-Kolchin et al. (2011a) examined this issue using the Millennium simulation of Springel et al. (2005a) and found that the Milky Way is unusual in hosting the Magellanic Clouds.

The accretion history of the Milky Way has recently been inferred from the shape of its stellar halo's density profile. While our Galaxy shows a clear break in this profile at ∼25 kpc, our closest similar galactic neighbour, M31, shows a smooth profile out to 100 kpc with no obvious break (Deason et al., 2013). Simulations of the formation of Milky Way-like stellar halos (Bullock & Johnston, 2005) suggest that differences in the shape of the density profile can be directly linked to the assembly history of galaxies. However, the simulation suite currently used to explore this issue was not run in a fully cosmological and self-consistent context and, more importantly, consists of only 11 realizations. With our large suite of DM simulations, based on the latest observationally favored cosmology, we can more precisely characterize how unusual our Galaxy is. We will examine the probability of Milky Way-type halos having unusually massive satellites like the Large and Small Magellanic Clouds. Furthermore, we will be able to quantify the evolutionary differences between the Milky Way and Andromeda to establish whether either of these galaxies represents a "normal" galaxy. Statistically assessing these vastly different evolutionary paths will guide our understanding of how these histories influence the extent of surviving substructure, and the differences in properties such as stellar mass, disk radius, and the metal-deficient halo between the two sister galaxies.

Area D: The origin of the old stellar halo of the Milky Way

The Milky Way stellar halo contains a significant number of old, extremely metal-poor stars with [Fe/H] < -3.0 (corresponding to 1/1000 of the solar Fe abundance) that have been discovered and analyzed over the past two decades (see McWilliam et al., 1995; Beers & Christlieb, 2005; Frebel, 2010 for reviews). In their atmospheres, these low-mass stars retain the chemical composition of the gas cloud they formed from some 12-13 Gyr ago. Having a larger sample of these objects at hand thus allows one to study the early universe and to carry out near-field cosmology without the need for high-redshift observations. The stellar chemical abundance patterns furthermore reveal information about the enrichment sources and the local star-forming environment. Over the past ∼3 years, surprisingly many extremely metal-poor stars have been uncovered in the least-luminous dwarf galaxies as well as in some of the classical dSphs that orbit the Milky Way (Kirby et al., 2008; Frebel et al., 2010; Simon et al., 2010; Norris et al., 2010).
Their nearly identical chemical abundances, compared with equivalent metal-poor halo stars, suggest that the Milky Way halo assembled from early analogs of these surviving systems. This has sparked a lot of interest in the cosmological origin of the most metal-poor halo stars. Recently, Carollo et al. (2007, 2010) studied the origin of the old Galactic halo populations, finding a dual nature of the stellar halo. A large fraction of the most metal-poor halo stars appear to be kinematically associated with the "outer," retrograde halo dominating beyond 10-15 kpc. This component may be the result of a later accretion event that deposited many of the halo's metal-poor stars. But where did these stars originate? The chemical abundances alone cannot provide conclusive answers, and kinematics can provide only limited information on the progenitor systems that hosted these old stars prior to their destruction in the tidal field of the Milky Way. Hence, large-scale simulations are required to gain a more global understanding of the key features of halo assembly. Is the dual nature of the MW halo a generic feature? What is its exact origin? What role do small, faint satellites play in this process? Moreover, can metallicity gradients in a halo be explained this way? Or are such gradients, for example, dependent on a galaxy's individual merger history and environment? And what is the metallicity distribution function of large galaxies? To obtain answers to these questions we will use ChemTreeN to follow the cosmic paths of the oldest, most metal-poor stars (star particles) while using them as tracers of the hierarchical assembly of the Milky Way and of large galaxies in general. We will go beyond the Cooper et al. (2010) study (which used the six Aquarius DM halos) by applying physically motivated reionization histories, including gas physics effects, and using refined star formation prescriptions, coupled with observational constraints from the latest Galactic halo and dwarf galaxy stellar abundances. One such constraint will come from dwarf galaxy masses enclosed within the half-light radius (as a function of half-light radius). Such masses are very robust observational measurements (Walker et al., 2010), as opposed to luminosities or the M300 mass (Strigari et al., 2008), at the faint end of the luminosity function. We will gradually include more feedback effects to study what else besides reionization influences the satellite survival rate. Feedback studies within the first-galaxy environment, e.g. those carried out by Wise & Abel (2008) and Greif et al. (2011), will therefore provide valuable guidance. By tracing the cosmic merger history of individual subhalos containing old stars, we can specifically study the analogs of today's faintest galaxies. We will attempt to quantify the extent to which small satellites can be regarded as building blocks of large galaxies by comparing predictions for the number of accreted, old stars from ultra-faint-like systems with estimates of the actual numbers of low-metallicity stars in the Galactic halo. Radial distributions and kinematics of accreted stars will also be derived, which can be observationally tested over the next few years. Specifically, by quantifying the level and extent of cosmic variations between halo assembly histories, we are in a unique position to model the Milky Way and its halo much better than previously possible.
In addition to quantifying whether the Milky Way is an unusual galaxy, we can address how the metal-poor populations vary from galaxy to galaxy, and what role they played in their host's respective assembly history.

Take-home message: our primary objective is to characterize the formation and evolution of Milky Way-sized halos.
https://socratic.org/questions/how-do-you-find-the-slope-and-intercept-of-x-y-5#583635
# How do you find the slope and intercept of x+y=-5?

Mar 28, 2018

Slope = -1, y-intercept = -5

#### Explanation:

$x + y = -5$

If you rearrange it into slope-intercept form $y = mx + b$ (m being the slope and b being the y-intercept):

$y = -x - 5$

$m = -1$

$b = -5$

Mar 28, 2018

See the solution process below:

#### Explanation:

First, the equation needs to be rewritten in slope-intercept form, which is $y = mx + b$, where "m" is the slope and "b" is the y-intercept, by solving for y:

$x + y = -5$

$y = -x - 5$

To find the slope and y-intercept of any equation in slope-intercept form, the "m" value represents the slope and the "b" value represents the y-intercept. In this equation, $y = -x - 5$, there is no written coefficient on the x term, but x is still technically being multiplied by -1, so "m", the slope, is -1. The "b" value, the y-intercept, is -5.
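If you want to verify the rearrangement by computer, here is a small sympy sketch (an illustration, not part of either answer above):

```python
from sympy import symbols, solve, Eq

x, y = symbols("x y")
rhs = solve(Eq(x + y, -5), y)[0]   # rearranges x + y = -5 into y = -x - 5
m = rhs.coeff(x, 1)                # slope m = -1
b = rhs.coeff(x, 0)                # y-intercept b = -5
print(rhs, m, b)                   # -x - 5, -1, -5
```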
http://furthermathstutor.co.uk/fp3/vectors.html
# Vectors

## Introduction

This topic introduces you to basic concepts in analytic geometry. Recall from FP1 and FP2 that a vector is a matrix with dimension $1\times n$ or $n\times 1$, i.e. it either has one column or one row. Conceptually a vector can be thought of as a directed line segment in space, pointing in a particular direction. In FP3 we are concerned with vectors and planes in $\mathbb{R}^3$. The study of vectors in three-dimensional space has a wide variety of applications in physics and engineering, and forms a basis for the study of linear algebra in undergraduate mathematics.

I'm going to start this section by going through some definitions and revision points. A set $V$ of elements, which we call vectors, is a vector space if in $V$ there are defined two algebraic operations called vector addition and scalar multiplication. For vector addition the following axioms must hold:

• Commutativity: $$\mathbf{a}+\mathbf{b} = \mathbf{b}+\mathbf{a}$$
• Associativity: $$\left(\mathbf{a}+\mathbf{b}\right)+\mathbf{c} = \mathbf{a}+\left(\mathbf{b}+\mathbf{c}\right)$$
• Existence of a unique zero vector: $$\mathbf{a}+\mathbf{0} = \mathbf{a}$$
• Existence of an additive inverse $-\mathbf{a}$ for each vector $\mathbf{a}$: $$\mathbf{a}+\left(-\mathbf{a}\right) = \mathbf{0}$$

For scalar multiplication the following axioms must hold:

• Distributivity: $$c\left(\mathbf{a}+\mathbf{b}\right) = c\mathbf{a}+c\mathbf{b} \qquad \left(c+d\right)\mathbf{a} = c\mathbf{a}+d\mathbf{a}$$
• Associativity: $$c\left(d\mathbf{a}\right) = \left(cd\right)\mathbf{a}$$
• Identity: $$1\mathbf{a} = \mathbf{a}$$

Recall that a collection of vectors is linearly independent if no vector in it can be written as a linear combination (a sum of scalar multiples) of the others; otherwise the vectors are linearly dependent.

The dot product of two vectors is a scalar quantity calculated as $$\mathbf{a}\cdot\mathbf{b} = \mathbf{a}^{\textrm{T}}\mathbf{b} = \left[\begin{array}{ccc} a_1 & \ldots & a_n \end{array}\right]\left[\begin{array}{c} b_1 \\ \vdots \\ b_n \end{array} \right] = a_1 b_1 + \ldots + a_n b_n$$ The dot product is linear, meaning $$\left(c\mathbf{a}+d\mathbf{b}\right)\cdot\mathbf{p} = c\left(\mathbf{a}\cdot\mathbf{p}\right)+d\left(\mathbf{b}\cdot\mathbf{p}\right)$$ and it is symmetric $$\mathbf{a}\cdot\mathbf{b} = \mathbf{b}\cdot\mathbf{a}$$ and finally it has the positive-definite property $$\mathbf{a}\cdot\mathbf{a} \ge 0$$ with equality if and only if the vector is $\mathbf{0}$: $$\mathbf{a}\cdot\mathbf{a} = 0 \iff \mathbf{a} = \mathbf{0}$$

Informally I like to think of the dot product of two vectors as their "in-common-ness": if two vectors point in very similar directions then their dot product is larger. You will see in a second why this idea makes sense. Indeed the dot product of two orthogonal vectors (at 90 degrees to each other) is always zero: $$\mathbf{a}\cdot \mathbf{b} = 0$$

The norm of a single vector is its length, which can be calculated using the dot product with itself $$\|\mathbf{a}\| = \sqrt{\mathbf{a}\cdot\mathbf{a}} = \sqrt{a_1^2 + \ldots + a_n^2}$$ which is always $\ge 0$, helping the positive-definite property above make more sense. For some vector $\mathbf{a}$ the unit vector in the same direction is $$\hat{\mathbf{a}} = \frac{\mathbf{a}}{\|\mathbf{a}\|}$$

Another way to calculate the dot product of two vectors uses the norms of both vectors $$\mathbf{a}\cdot\mathbf{b} = \|\mathbf{a}\|\|\mathbf{b}\|\cos \theta$$ where $\theta$ is the angle between them.
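These identities are easy to check numerically. Here is a minimal Python sketch using numpy (the two vectors are arbitrary examples chosen for this illustration):

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])
b = np.array([3.0, 0.0, 4.0])

dot = a @ b                       # a1*b1 + a2*b2 + a3*b3 = 11
norm_a = np.sqrt(a @ a)           # ||a|| = sqrt(a . a) = 3
norm_b = np.linalg.norm(b)        # ||b|| = 5
theta = np.arccos(dot / (norm_a * norm_b))   # angle between a and b

# the two ways of computing the dot product agree
print(dot, norm_a * norm_b * np.cos(theta))  # 11.0 11.0
```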
Note that by rearranging the cosine formula above we can calculate the angle between the two vectors $$\cos \theta = \frac{\mathbf{a}\cdot\mathbf{b}}{\|\mathbf{a}\|\|\mathbf{b}\|} \Rightarrow \theta = \cos^{-1}\frac{\mathbf{a}\cdot\mathbf{b}}{\|\mathbf{a}\|\|\mathbf{b}\|}$$

While we're on the subject of vector norms, note the triangle inequality $$\|\mathbf{a}+\mathbf{b}\| \le \|\mathbf{a}\| + \|\mathbf{b}\|$$ To see why this holds, think of the following question. If you want to visit your grandmother, which way is shorter?

1. Walk there in a straight line
2. Walk to the shop first and then to your grandmother's house

## The vector cross product

The cross product of two linearly independent vectors is a vector perpendicular to both of them. It is calculated as the determinant of a $3\times 3$ matrix formed by the two vectors and the standard basis vectors $\mathbf{i}$, $\mathbf{j}$, and $\mathbf{k}$: $$\mathbf{a} \times \mathbf{b} = \left|\begin{array}{ccc} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{array}\right|$$ From what you learned about determinants in FP2 you will see that if one of the vectors is a scalar multiple of the other, i.e. they are linearly dependent, then $$\mathbf{a} \times \mathbf{b} = \mathbf{0}$$

Scalar multiplication is conserved in the vector cross product $$\left(c\mathbf{a}\right)\times \mathbf{b} = c\left(\mathbf{a} \times \mathbf{b}\right) = \mathbf{a} \times \left(c\mathbf{b}\right)$$ The cross product is also anti-commutative, meaning $$\mathbf{a} \times \mathbf{b} = -\left(\mathbf{b} \times \mathbf{a}\right)$$ It is also distributive but not associative $$\left(\mathbf{a} + \mathbf{b}\right)\times \mathbf{p} = \mathbf{a} \times \mathbf{p} + \mathbf{b} \times \mathbf{p}$$ $$\mathbf{p} \times\left(\mathbf{a} + \mathbf{b}\right) = \mathbf{p} \times \mathbf{a} + \mathbf{p} \times \mathbf{b}$$

Another formula for calculating the cross product is $$\mathbf{a} \times \mathbf{b} = \|\mathbf{a}\|\|\mathbf{b}\|\sin(\theta)\,\hat{\mathbf{n}}$$ where $\theta$ is the angle between the two vectors, and $\hat{\mathbf{n}}$ is the unit normal vector in the direction of the cross product. Therefore the length of the cross product is always $$\|\mathbf{a} \times \mathbf{b}\| = \|\mathbf{a}\|\|\mathbf{b}\|\sin\theta$$

For the standard basis vectors $\mathbf{i}$ and $\mathbf{j}$, see that $$\mathbf{i} \times \mathbf{j} = \left|\begin{array}{ccc} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{array}\right| = \mathbf{k}$$ and by the anti-commutativity property of the cross product $$\mathbf{j} \times \mathbf{i} = -\left( \mathbf{i} \times \mathbf{j} \right) = -\mathbf{k}$$

## The intersection of two planes

Since this topic is all about three-dimensional space, we look at planes as well as lines. A plane in three-dimensional $(x,y,z)$ space is a flat surface that can be expressed as $$f(x,y,z) = c$$ where $c$ is a constant value and the function on the left is linear with real coefficients. Picture it as an infinitely wide and long flat sheet. In FP3 you need to be able to work out if two planes intersect and, if so, where. There are a few cases that can occur. Line intersection is where the two planes intersect along an infinitely long line. They're parallel if they never meet; they carry on across the whole 3D space equidistant from each other. They're coincident if they are the same plane.
They meet at infinitely many points across the whole plane. We can describe these situations mathematically. Suppose you have two planes expressed by the following two equations with constant coefficients $a_i$ and $b_i$ \begin{align} a_1 x+a_2 y + a_3 z &= c \\ b_1 x+b_2 y + b_3 z &= d \end{align} Then the normal vector to each plane is a vector that passes through the plane at 90 degrees, consisting of the coefficients of $x$, $y$, and $z$ \begin{align} \mathbf{n}_1 &= a_1\mathbf{i}+a_2\mathbf{j} + a_3\mathbf{k} \\ \mathbf{n}_2 &= b_1\mathbf{i}+b_2\mathbf{j} + b_3\mathbf{k} \end{align} Strictly speaking the normal vector is equal to what we call $\nabla f$, a vector consisting of the partial derivatives of $f(x,y,z)$. So for some plane expressed as $f(x,y,z) = c$ its normal vector is $$\mathbf{n} = \nabla f = \left(\frac{\partial f}{\partial x},\frac{\partial f}{\partial y},\frac{\partial f}{\partial z} \right)$$ As an example, the plane expressed by $x+y+z=1$ has normal vector $(1,1,1)$.

There is a line intersection if the coefficients in the normal vectors satisfy $$\frac{a_1}{b_1} \ne \frac{a_2}{b_2} \ne \frac{a_3}{b_3}$$ The planes are parallel and never meet if the coefficients satisfy $$\frac{a_1}{b_1} = \frac{a_2}{b_2} = \frac{a_3}{b_3} \ne \frac{c}{d}$$ However they are coincident if they satisfy $$\frac{a_1}{b_1} = \frac{a_2}{b_2} = \frac{a_3}{b_3} = \frac{c}{d}$$

If the normal vectors of two planes aren't parallel, then the two planes must meet. The cross product of these two normal vectors gives a vector which is perpendicular to both of them, and which is therefore parallel to the line of intersection of the two planes. So this cross product gives a direction vector for the line of intersection. If the normal vectors of two planes are parallel, however, then they are linearly dependent and their cross product is zero (by the properties of the cross product, and originally of a matrix determinant). If the constants also match in the sense of the coincidence condition above, the planes are coincident and there are infinitely many intersection points; if not, they are parallel and never meet.

So if the two planes aren't parallel and intersect along a line, and the intersection direction vector has been found, we need to find the line equation for where they intersect. This is a position vector in the following form with the parameter $\lambda$ $$\mathbf{r}(\lambda) = \left(\textrm{starting position} \right) + \left(\textrm{intersect direction vector} \right)\lambda$$ The starting position can be any point shared by both planes. This can be found by solving the simultaneous equations and picking a point.

### Example

Q) Do the planes $x-2y+z = 1$ and $4x+y+z=4$ intersect? If so, state the line of points where they intersect.

A) The normal vectors of the two planes are $(1,-2,1)$ and $(4,1,1)$ respectively. The normal vectors are not parallel, so the two planes meet. Find the direction of the intersection line by finding the cross product of the two normal vectors. $$(1,-2,1) \times (4,1,1) = \left|\begin{array}{ccc} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ 1 & -2 & 1 \\ 4 & 1 & 1 \end{array}\right| = \mathbf{i}\left(-2-1\right) - \mathbf{j}\left(1-4\right) + \mathbf{k}\left(1+8\right) = -3\mathbf{i} + 3\mathbf{j} + 9\mathbf{k}$$ Now we need to find a point where the two planes intersect. The two equations are \begin{align} x-2y+z &= 1 \\ 4x+y+z &=4 \end{align} Subtracting the first equation from the second $$3x+3y=3 \Rightarrow x+y=1$$ Pick $x=1$, which means $y=0$.
Then sub these values into the first equation $$1 - 0 + z = 1 \Rightarrow z = 0$$ So a point is $(1,0,0)$. Therefore the line of intersection with the parameter $\lambda$ is $$\mathbf{r}(\lambda) = \left(1,0,0\right) + \left(-3,3,9\right)\lambda = \left(1-3\lambda,3\lambda,9\lambda\right)$$

## The intersection of lines in three dimensions

As well as finding plane intersections, you need to be able to find the intersections of lines in three-dimensional space. Recall from the last section that lines are expressed as $$\mathbf{r}(\lambda) = \left(\textrm{starting position} \right) + \left(\textrm{direction vector} \right)\lambda$$ From the work above you'll be able to see that if two lines with directions $\mathbf{a}$ and $\mathbf{b}$ are parallel they never intersect (unless they are coincident) and $$\mathbf{a} \times \mathbf{b} = \mathbf{0}$$ However, sometimes lines aren't parallel but they also don't intersect. This is easy to picture if you think of two thin straight lines in space. Such lines are skew. Their directions are linearly independent and therefore their cross product is non-zero. If two lines aren't parallel and do meet at some point, then it is a simple matter of setting the lines equal to each other and solving for the parameters. We say that such lines intersect. Note that each line gets its own parameter (say $\lambda$ and $\mu$), because the two lines need not reach a common point at the same parameter value.

### Example

Q) Do the lines $\mathbf{r}_1 (\lambda) = (1+2\lambda,4\lambda,5-6\lambda)$ and $\mathbf{r}_2 (\mu) = (\mu,1+2\mu,-3\mu)$ intersect?

A) No, because their direction vectors $(2,4,-6)$ and $(1,2,-3)$ are linearly dependent (one is twice the other), therefore they are parallel.

### Example

Q) Do the lines $\mathbf{r}_1 (\lambda) = (2+5\lambda,1-4\lambda,4+\lambda)$ and $\mathbf{r}_2 (\mu) = (1+\mu,1-\mu,-2\mu)$ intersect?

A) No. Although their direction vectors $(5,-4,1)$ and $(1,-1,-2)$ are linearly independent, so the lines are not parallel, the three equations are inconsistent (they have no shared solution) \begin{align}2+5\lambda &= 1+\mu \\ 1-4\lambda &= 1-\mu \\ 4+\lambda &= -2\mu \end{align} Therefore they are skew.

### Example

Q) Do the lines $\mathbf{r}_1 (\lambda) = (1+\lambda,1-2\lambda,-2+9\lambda)$ and $\mathbf{r}_2 (\mu) = (2-3\mu,2\mu,1-3\mu)$ intersect?

A) Yes, because their direction vectors $(1,-2,9)$ and $(-3,2,-3)$ are linearly independent, so the lines are not parallel, and there is a consistent solution to the three equations \begin{align} 1+\lambda &= 2-3\mu \\ 1-2\lambda &= 2\mu \\ -2+9\lambda &= 1-3\mu \end{align} The solution is $\lambda = \mu = \frac{1}{4}$. Therefore the intersection point is $(\frac{5}{4},\frac{1}{2},\frac{1}{4})$.

## Distance of a point from a line or from a plane

Another application of the cross product is in finding the shortest distance from a point to either a line or a plane.

### Distance of a point from a line

The distance from a point $B$ to a line $\mathbf{r}$ is the smallest distance from the point to any of the infinitely many points on the line. The vector from the starting point $A$ of the line to the point $B$ is denoted $\overrightarrow{AB}$, and the direction vector of the line is called $\mathbf{d}$. The cross product of $\overrightarrow{AB}$ and $\mathbf{d}$ gives the normal vector with the correct direction. Then the distance we require is $$\frac{\|\overrightarrow{AB}\times\mathbf{d}\|}{\|\mathbf{d}\|}$$ This is the length of the perpendicular from the point to the line.
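As a quick numerical cross-check of this formula, here is a minimal numpy sketch; it uses the same numbers as the worked example that follows:

```python
import numpy as np

def point_line_distance(B, A, d):
    """Distance from point B to the line r(lam) = A + lam * d."""
    AB = B - A
    return np.linalg.norm(np.cross(AB, d)) / np.linalg.norm(d)

# P(4,2,2) and the line (1 + lam, lam, 2*lam - 1), i.e. A = (1,0,-1), d = (1,1,2)
P = np.array([4.0, 2.0, 2.0])
A = np.array([1.0, 0.0, -1.0])
d = np.array([1.0, 1.0, 2.0])
print(point_line_distance(P, A, d))   # 1.3540..., which is sqrt(66)/6
```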
### Example

Q) Calculate the shortest distance from the point $P(4,2,2)$ to the line $\mathbf{r}(\lambda) = (1+\lambda,\lambda,2\lambda-1)$.

A) From the line expression, point $A$ is at $(1,0,-1)$. The direction vector $\mathbf{d}$ is $(1,1,2)$. Therefore we can find $\overrightarrow{AP}$ $$\overrightarrow{AP} = (4,2,2)-(1,0,-1) = (3,2,3)$$ Then calculate the cross product of $\overrightarrow{AP}$ and $\mathbf{d}$ $$\overrightarrow{AP}\times\mathbf{d}=(3,2,3)\times(1,1,2) = \left|\begin{array}{ccc} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ 3 & 2 & 3 \\ 1 & 1 & 2 \end{array}\right| = \mathbf{i}\left(4-3 \right) - \mathbf{j}\left(6-3\right) + \mathbf{k}\left(3-2\right) = \mathbf{i}-3\mathbf{j}+\mathbf{k}$$ The length of the perpendicular from the point to the line is therefore $$\frac{\|\overrightarrow{AP}\times\mathbf{d}\|}{\|\mathbf{d}\|} = \frac{\sqrt{1+\left(-3\right)^2+1}}{\sqrt{1+1+2^2}} = \frac{\sqrt{11}}{\sqrt{6}} = \frac{\sqrt{66}}{6}$$

### Distance of a point from a plane

Given a point $B$ at $(x_0,y_0,z_0)$ and a plane $f(x,y,z)=c$ with equation $$a_1 x + a_2 y + a_3 z = c$$ and normal vector $\mathbf{n}$, the shortest distance between the point and the plane is $$\frac{\left|f(x_0,y_0,z_0)-c\right|}{\|\mathbf{n}\|}=\frac{\left|a_1 x_0+a_2 y_0 +a_3 z_0-c\right|}{\sqrt{a_1^2+a_2^2+a_3^2}}$$

### Example

Q) Calculate the shortest distance from the point $(-1,1,1)$ to the plane $x-5y+z=4$.

A) $$\frac{\left|-1-5 +1-4\right|}{\sqrt{1+\left(-5\right)^2+1}} = \frac{9}{\sqrt{27}} = \sqrt{3}$$

## The scalar triple product

I'm going to show you what the scalar triple product means geometrically. Take the following calculation $$\left(2\mathbf{i}\times 2\mathbf{j}\right)\cdot 3\mathbf{k}$$ This is a scalar triple product that gives the volume of a cuboid with length 2, width 2, and height 3. First see that $$2\mathbf{i}\times 2\mathbf{j} = 4\mathbf{k}$$ This is the area of a rectangle of length 2 and width 2, but expressed as a vector in the $\mathbf{k}$ direction. Then find the dot product of this and the height $3\mathbf{k}$ $$4\mathbf{k} \cdot 3\mathbf{k} = 12$$ giving the actual volume of the cuboid. We can cycle through the three vectors and the scalar triple product gives the same result $$\left(2\mathbf{i}\times 3\mathbf{j}\right)\cdot 2\mathbf{k} = 6\mathbf{k}\cdot 2\mathbf{k}=12$$ If you think about it, this is the same cuboid but with the length, width, and height switched around.
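Here is a minimal numpy sketch of the cuboid calculation above; it also checks the result against the determinant form introduced next:

```python
import numpy as np

i, j, k = np.eye(3)   # standard basis vectors

# (2i x 2j) . 3k  -- the cuboid volume from the text
cross = np.cross(2 * i, 2 * j)        # = 4k, the base area as a vector
print(cross @ (3 * k))                # 12.0

# the same number from the determinant form det([c; a; b])
a, b, c = 2 * i, 2 * j, 3 * k
print(np.linalg.det(np.array([c, a, b])))   # 12.0 (up to floating point)
```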
In general terms the scalar triple product of three vectors is a number, calculated as \begin{align} \left(\mathbf{a}\times \mathbf{b}\right)\cdot \mathbf{c} &= \left|\begin{array}{ccc} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{array}\right|\cdot \mathbf{c} \\[1em] &= \left(\left|\begin{array}{cc} a_2 & a_3 \\ b_2 & b_3 \end{array}\right|,-\left|\begin{array}{cc} a_1 & a_3 \\ b_1 & b_3 \end{array}\right|,\left|\begin{array}{cc} a_1 & a_2 \\ b_1 & b_2 \end{array}\right|\right)\cdot\mathbf{c} \\[1em] &= \left|\begin{array}{ccc} c_1 & c_2 & c_3 \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{array}\right| \end{align} The scalar triple product yields the same result when you cycle the vectors and operations \begin{align} \mathbf{a}\cdot\left(\mathbf{b}\times \mathbf{c}\right) &= \mathbf{a}\cdot\left|\begin{array}{ccc} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{array}\right| \\[1em] &= \mathbf{a}\cdot\left(\left|\begin{array}{cc} b_2 & b_3 \\ c_2 & c_3 \end{array}\right|,-\left|\begin{array}{cc} b_1 & b_3 \\ c_1 & c_3 \end{array}\right|,\left|\begin{array}{cc} b_1 & b_2 \\ c_1 & c_2 \end{array}\right|\right) \\[1em] &= \left|\begin{array}{ccc} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{array}\right| \end{align} This works because when you switch two rows of a matrix, you multiply the determinant by -1, and here we've switched two rows twice, keeping the determinant the same. Notice that we've kept the rows representing the elements of $\mathbf{a}$, $\mathbf{b}$, and $\mathbf{c}$ in the same cyclic order. Also notice by the symmetry of the dot product $$\left(\mathbf{a}\times \mathbf{b}\right)\cdot \mathbf{c} = \mathbf{c}\cdot\left(\mathbf{a}\times \mathbf{b}\right)$$ $$\mathbf{a}\cdot\left(\mathbf{b}\times \mathbf{c}\right) = \left(\mathbf{b}\times \mathbf{c}\right)\cdot\mathbf{a}$$ ### Volume of a parallelepiped and a tetrahedron Geometrically, the absolute value of the scalar triple product represents the volume of a parallelepiped whose main edges are three vectors $\mathbf{a}$, $\mathbf{b}$, and $\mathbf{c}$ that meet in the same vertex. Given the angle between $\mathbf{a}\times \mathbf{b}$ and $\mathbf{c}$ is $\theta$, the volume is $$\|\mathbf{a}\times \mathbf{b}\|\|\mathbf{c}\|\left|\cos\theta\right| = \left|\left(\mathbf{a}\times \mathbf{b}\right)\cdot\mathbf{c}\right| = \left|\,\left|\begin{array}{ccc} c_1 & c_2 & c_3 \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{array}\right|\,\right|$$ (the absolute value is needed because the signed product is negative when the three vectors form a left-handed set). ### Example Q) Find the volume of the parallelepiped with three main edges $(2,0,0)$, $(1,2,0)$, and $(1/2,1/2,2)$. A) Let $\mathbf{a} = (2,0,0)$, $\mathbf{b} = (1,2,0)$, and $\mathbf{c} = (1/2,1/2,2)$. Then the scalar triple product is $$\left(\mathbf{a}\times \mathbf{b}\right)\cdot \mathbf{c} = \left|\begin{array}{ccc} \frac{1}{2} & \frac{1}{2}& 2 \\ 2 & 0 & 0 \\ 1 & 2 & 0 \end{array}\right| = 2\left(4-0\right) = 8 ~\textrm{units}^3$$ The volume of a tetrahedron is equal to a sixth of the absolute value of the scalar triple product. Here the tetrahedron is the solid with four triangular faces whose three edges at one vertex are the vectors $\mathbf{a}$, $\mathbf{b}$, and $\mathbf{c}$; it does not have to be regular.
Given the angle between $\mathbf{a}\times \mathbf{b}$ and $\mathbf{c}$ is $\theta$, the volume is $$\frac{1}{6}\|\mathbf{a}\times \mathbf{b}\|\|\mathbf{c}\|\left|\cos\theta\right| = \frac{1}{6}\left|\left(\mathbf{a}\times \mathbf{b}\right)\cdot\mathbf{c}\right| = \frac{1}{6}\left|\,\left|\begin{array}{ccc} c_1 & c_2 & c_3 \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{array}\right|\,\right|$$ ## The shortest distance between two skew lines If two straight lines are skew then there must be a pair of points, one on each line, at which the distance between them is a minimum. If the two lines are each defined by a starting point and a direction vector, call the starting points $A$ and $B$ respectively, and their direction vectors $\mathbf{d}$ and $\mathbf{e}$ respectively. The common perpendicular to both lines points along $\mathbf{d}\times\mathbf{e}$, and the shortest distance between the two skew lines is the length of the projection of $\overrightarrow{BA}$ onto this direction, i.e. the absolute value of the dot product of $\overrightarrow{BA}$ with the unit vector in the direction of $\mathbf{d}\times\mathbf{e}$: $$\left| \overrightarrow{BA}\cdot\frac{\left(\mathbf{d}\times\mathbf{e}\right)}{\left\|\mathbf{d}\times\mathbf{e}\right\|} \right|$$ ## Condition for two lines to intersect The formula for the minimum distance between two lines is $$\left| \overrightarrow{BA}\cdot\frac{\left(\mathbf{d}\times\mathbf{e}\right)}{\left\|\mathbf{d}\times\mathbf{e}\right\|} \right|$$ Therefore two non-parallel lines intersect exactly when this minimum distance is zero, i.e. $$\left| \overrightarrow{BA}\cdot\frac{\left(\mathbf{d}\times\mathbf{e}\right)}{\left\|\mathbf{d}\times\mathbf{e}\right\|} \right| = 0$$ By the definition of the dot product, if $\theta$ is the angle between $\overrightarrow{BA}$ and the unit vector along $\mathbf{d}\times\mathbf{e}$, this says $$\|\overrightarrow{BA}\|\cos\theta = 0$$ If two non-parallel lines intersect in 3D space then they must be coplanar. If you imagine two thin straight lines in 3D space, it makes sense that they both must lie on some flat plane for them to intersect. The vector $\overrightarrow{BA}$ then lies in that plane while $\mathbf{d}\times\mathbf{e}$ is normal to it, so the two vectors are at right angles to each other and their dot product is zero, since $\cos\frac{\pi}{2}=0$. Additionally, if the two lines have the same starting point then $\overrightarrow{BA} = \mathbf{0}$, which also zeroes the expression; they intersect at the shared starting point. Note that the formula only applies to non-parallel lines: for parallel lines the cross product of the direction vectors is zero and the expression is undefined. Parallel lines either are coincident (the same line), in which case there are infinitely many common points, or they never intersect at all.
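To tie these sections together, here is a small Python sketch (my own illustration, not part of the original page) that applies the skew-distance formula to the pair of lines shown to be skew in the earlier example; the non-zero result is consistent with that conclusion.

```python
import numpy as np

def skew_line_distance(A, d, B, e):
    """Shortest distance between the lines A + lambda*d and B + mu*e,
    computed as |BA . (d x e)| / |d x e| (valid only when d x e != 0)."""
    BA = np.asarray(A, float) - np.asarray(B, float)
    n = np.cross(d, e)
    return abs(np.dot(BA, n)) / np.linalg.norm(n)

# The skew lines from the earlier example:
# r1 = (2+5l, 1-4l, 4+l) and r2 = (1+m, 1-m, -2m)
print(skew_line_distance((2, 1, 4), (5, -4, 1), (1, 1, 0), (1, -1, -2)))  # ~0.351
```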
2018-05-27 13:46:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 7, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9952530264854431, "perplexity": 150.74553520441853}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794868316.29/warc/CC-MAIN-20180527131037-20180527151037-00054.warc.gz"}
https://math.stackexchange.com/questions/3373041/find-all-n-in-mathbbz-such-that-3xnnx2-3-ge-nx2-for-all-x-in-math
# Find all $n\in\mathbb{Z^+}$ such that $3x^n+n(x+2)-3\ge nx^2$ for all $x\in\mathbb{R}$ Find all $$n\in\mathbb{Z^+}$$ such that $$3x^n+n(x+2)-3\ge nx^2$$ for all $$x\in\mathbb{R}$$ This question is tricky to me and I don't know where to start. I tried substituting several values of $$n$$, such as $$1$$, but I couldn't find a solution. For the case $$n=1$$ I get $$4x-1\ge x^2$$, which is not true for all $$x$$. The fact that $$x$$ can be any real value is troubling for me. I also tried to use induction to prove that some cases are incorrect, but couldn't, because $$x$$ isn't a fixed value. Any help would be appreciated. • Consider the min. value of both sides. It may be a simple approach. – NoChance Sep 28 '19 at 12:34 The solutions to your problem are the even values of $$n$$. Let $$f_n(x)=3x^n+n(x+2)-3-nx^2$$. If $$n$$ is odd, then either $$n=1$$, in which case the leading monomial in $$f_n(x)$$ is $$-x^2$$, or $$n\geq 3$$, in which case the leading monomial in $$f_n(x)$$ is $$3x^n$$. In both cases, $$f_n(x)\to -\infty$$ when $$x\to -\infty$$, so $$f_n$$ cannot be nonnegative. So suppose now that $$n$$ is even. We will show that $$f_n(x) \geq 0$$. For $$n=2$$, $$f_2(x)=(x+1)^2 \geq 0$$. For $$n=4$$, $$f_4(x)=(x+1)^2(3x^2-6x+5)\geq 0$$. So we may assume $$n\geq 6$$. If $$x\leq -1$$, we can write $$x=-1-y$$ where $$y\geq 0$$, and then $$\begin{array}{lcl} f_n(x) &=& f_n(-1-y) \\ &=& 3(1+y)^n +n(1-y)-3-n(1+y)^2 \\ &=& 3(1+y)^n -ny^2-3ny-3 \\ &=& 3(\sum_{j=0}^n \binom{n}{j}y^j)-ny^2-3ny-3 \\ &=& \frac{n}{2}(3(n-2)+1)y^2+3(\sum_{j=3}^n \binom{n}{j}y^j) \geq 0 \end{array}$$ If $$x\in [1,2]$$, then $$x^n\geq 1$$ and $$x^2-x-2=(x-2)(x+1)\leq 0$$, so $$f_n(x)=3(x^n-1)-n(x^2-x-2)\geq 0$$. If $$x\in [-1,1]$$, write $$1-x^n=(1-x)(1+x)\sum_{k=0}^{n/2-1}x^{2k}$$ (valid since $$n$$ is even) and note that $$\sum_{k=0}^{n/2-1}x^{2k}\leq n/2$$ for $$|x|\leq 1$$; then $$f_n(x)=n(2-x)(1+x)-3(1-x^n)\geq (1+x)\left(n(2-x)-\tfrac{3n}{2}(1-x)\right)=\tfrac{n}{2}(1+x)^2\geq 0$$ Finally, suppose that $$x\geq 2$$. We can then write $$x=2+z$$ with $$z\geq 0$$, and then $$\begin{array}{lcl} f_n(x) &=& f_n(2+z) \\ &=& 3(2+z)^n +n(4+z)-3-n(2+z)^2 \\ &=& 3(2+z)^n -nz^2-3nz-3 \\ &=& 3(\sum_{j=0}^n \binom{n}{j}2^{n-j}z^j)-nz^2-3nz-3 \\ &=& 3(2^n-1)+3n(2^{n-1}-1)z+n(3(n-1)2^{n-3}-1)z^2+ 3(\sum_{j=3}^n \binom{n}{j}2^{n-j}z^j) \geq 0 \end{array}$$ This finishes the proof. • This is a good proof. Minor note: one can simplify things slightly if one notes that when $n$ is even $f'_n(x)$ is positive when $x \geq 0$ so one only needs to look at $[-1,1]$ in the second case and don't need to look at the last interval at all. – JoshuaZ Sep 28 '19 at 12:36
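As a quick sanity check of the answer, here is an illustrative Python snippet of my own (not part of the thread) that scans $f_n(x)$ on a grid: the minimum stays non-negative for even $n$ and goes badly negative for odd $n$.

```python
import numpy as np

def f(n, x):
    # f_n(x) = 3x^n + n(x+2) - 3 - n x^2
    return 3 * x**n + n * (x + 2) - 3 - n * x**2

x = np.linspace(-10, 10, 200001)
for n in range(1, 9):
    print(n, "min on [-10,10]:", f(n, x).min())
# even n: minimum 0, attained at x = -1; odd n: large negative values
```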
2021-07-24 08:30:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 40, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9510189294815063, "perplexity": 53.312428974700026}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150134.86/warc/CC-MAIN-20210724063259-20210724093259-00052.warc.gz"}
http://dataspace.princeton.edu/jspui/handle/88435/dsp01d504rn78g
Title: Witches' Brew and Bloodcraft: An Examination of Inertial Behaviors In Laminar Flow Environments Authors: Pohlmann, John T. Advisors: Austin, Robert Contributors: Aizenman, Michael Department: Physics Class Year: 2016 Abstract: Deterministic Lateral Displacement arrays (DLDs) utilize the properties of laminar or viscous flow to separate biological material by characteristic size. DLDs are comprised of a long, flat chip through which a solution is forced. A grid of laterally offset rows of pillars determines a flow pattern throughout the length of the chip. Due to laminar behavior, particles are expected to closely follow flow patterns, separating them based on their flow mode. As the size scale of DLDs decreases in an effort to boost the resolution of separation, experimentalists find that the efficiencies are decreasing: Polymer chains that otherwise traverse the length of the array unscathed break apart in the smaller scale, and particles do not separate as expected from laminar properties. Using low Reynolds and Stokes number approximations, this paper solves the laminar case in 2 1/2 dimensions for a scalar stream function. The scalar stream function allows us to translate boundary conditions from cartesian to polar, which allows us to define a critical radius and characteristic displacement vector to locate regions of inertial particle behavior in an otherwise completely laminar fluid. Finally, the scalar stream function and critical radius are applied to simulated flow environments to investigate if inertial behaviors may be occurring. We find clear patterns which are compliant with experimental observations, and make claims regarding theoretical models. Extent: 79 pages URI: http://arks.princeton.edu/ark:/88435/dsp01d504rn78g Type of Material: Princeton University Senior Theses Language: en_US Appears in Collections: Physics, 1936-2016
2017-05-22 19:28:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46172627806663513, "perplexity": 3027.8367367488368}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607046.17/warc/CC-MAIN-20170522190443-20170522210443-00100.warc.gz"}
https://imathworks.com/tex/tex-latex-numcases-environment-with-showlabels-package/
# [Tex/LaTex] numcases environment with showlabels package Tags: cases, numbering, showlabels I just discovered the numcases environment from the cases package, which allows numbering the individual cases. I am also currently using the showlabels package, which puts the name of the label on the PDF, right where you put it in the TeX file. Very useful feature, especially when writing a long document and you want to get the tag name from the PDF by scrolling up a page rather than opening another TeX file and searching for the equation! Unfortunately, I can't get the numcases environment to work properly with the showlabels package. In particular, I cannot give a label to the last case, otherwise I get the error "Incompatible list can't be unboxed". However, without the showlabels package, everything works smoothly. It also works smoothly if I don't label the last case, but of course that's not optimal, because I need to label all the cases… Here's an example. Comment/uncomment the line where the showlabels package is included to see the behavior. \documentclass{article} \usepackage{showlabels} \usepackage{cases} \begin{document} \begin{numcases}{} a & b \label{a}\\ c & d \label{b} \end{numcases} \end{document} Does anybody know a workaround for this? I would be fine also with a way to avoid numcases (while getting the same result). I cannot understand why lines of a cases environment should be separately numbered. However, you get the effect with empheq: \documentclass{article} \usepackage{empheq} \usepackage{showlabels} \begin{document} % the answer was truncated in this copy; a plausible completion, mirroring % the numcases rows above, would be: \begin{empheq}[left=\empheqlbrace]{alignat=2} a & b \label{a}\\ c & d \label{b} \end{empheq} \end{document}
2023-01-28 14:26:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.952160120010376, "perplexity": 1493.5689737657399}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499634.11/warc/CC-MAIN-20230128121809-20230128151809-00041.warc.gz"}
https://www.emathhelp.net/calculators/algebra-1/polynomial-long-division-calculator/?numer=5x%5E9+-+4x%5E2+%2B2&denom=5x%2B10
# Polynomial Long Division Calculator ## Perform the long division of polynomials step by step The calculator will perform the long division of polynomials, with steps shown.
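Since the page is just the calculator's front end, here is a hedged sketch of the same operation in Python using NumPy's `polydiv` (my own illustration; the example polynomial 5x^9 - 4x^2 + 2 divided by 5x + 10 is taken from the page's query URL):

```python
import numpy as np

# Coefficients, highest degree first: 5x^9 - 4x^2 + 2 and 5x + 10
dividend = [5, 0, 0, 0, 0, 0, 0, -4, 0, 2]
divisor = [5, 10]

quotient, remainder = np.polydiv(dividend, divisor)
print(quotient)   # degree-8 quotient coefficients, highest first
print(remainder)  # constant remainder: p(-2) = -2574
```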
2022-10-05 02:49:03
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9089750051498413, "perplexity": 1399.1822482803136}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337531.3/warc/CC-MAIN-20221005011205-20221005041205-00456.warc.gz"}
https://www.techwhiff.com/learn/understanding-skin-biota-select-all-of-the/430709
# Understanding Skin Biota ###### Question: Understanding Skin Biota. Select all of the statements that apply to the skin biota. Check All That Apply: 4% of the population carry Staphylococcus aureus on their skin. Bacteria were found colonizing hair follicles. The types of microbes found on the skin are very limited and they only represent 2 taxa. The natural defenses of the skin are effective as very few normal biota are found in this location
2022-06-28 06:21:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5023166537284851, "perplexity": 4422.927901046079}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103355949.26/warc/CC-MAIN-20220628050721-20220628080721-00313.warc.gz"}
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=135&t=5211
## Finding Vapor Pressure of Liquid (HW #10.109) $\Delta G^{\circ}= \Delta H^{\circ} - T \Delta S^{\circ}$ $\Delta G^{\circ}= -RT\ln K$ $\Delta G^{\circ}= \sum \Delta G_{f}^{\circ}(products) - \sum \Delta G_{f}^{\circ}(reactants)$ Glenda Marshall DIS 3M ### Finding Vapor Pressure of Liquid (HW #10.109) What does it mean to find the vapor pressure of a liquid? In problem #10.109 in the homework, part (b) asks "what is the vapor pressure of liquid bromine?" and it does not provide an equation or any other information. In the solutions manual, they solved for K and used that value as the vapor pressure. I don't understand why this is possible. Also part (c) asks "what is the partial pressure of Br2(g) above the liquid in a bottle of bromine at 25 degrees celsius?" I don't know how this is any different from part (b). Any help would be appreciated! Thanks! Chem_Mod ### Re: Finding Vapor Pressure of Liquid (HW #10.109) The vapor pressure is the pressure of a gas that is in equilibrium with its liquid. So if we write the equilibrium expression for Br2 (l) --> Br2 (g), it is just K = [Br2 (g)] because liquids and solids are not included in equilibrium expressions. Therefore, K is exactly the vapor pressure that is asked for. For part c), the problem wants the partial pressure of monatomic bromine. So you'll use the answer to part a) and solve for [Br (g)] after taking the square root of [Br (g)]^2 Justin Le 2I ### Re: Finding Vapor Pressure of Liquid (HW #10.109) The solutions manual makes this a little confusing because it says PBr(g)^2 but it should actually be without the exponent because we are just solving for the pressure of Br(g)
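To make the connection concrete, here is a small illustrative calculation of mine (not from the thread): for Br2(l) ⇌ Br2(g), ΔG° of the reaction equals ΔG°f of Br2(g), commonly tabulated as about +3.11 kJ/mol (treat that value as an assumed input), and K = e^(−ΔG°/RT) is then numerically the vapor pressure.

```python
import math

R = 8.314    # J/(mol K)
T = 298.15   # K
dG = 3.11e3  # J/mol, tabulated dGf of Br2(g); an assumed illustrative value

K = math.exp(-dG / (R * T))
print(K)  # ~0.285, i.e. the vapor pressure of liquid bromine is roughly 0.29 bar
```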
2020-01-27 19:44:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7660911083221436, "perplexity": 1187.9127200141504}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251705142.94/warc/CC-MAIN-20200127174507-20200127204507-00089.warc.gz"}
https://www.biostars.org/p/80140/
Phylogeny Based On Genome Rearrangement Pattern? qiyunzhu Hello all, I am trying to resolve the phylogeny of a bacterial group. Trees inferred using conventional approaches based on sequence data are conflicting. Therefore I'm thinking about using genome rearrangement data to build trees. I first used Mauve to align the genomes and identify the homologous genomic blocks. Then I exported the data (a permutation matrix) and tried to use other programs to build a tree based on it. I'm not interested in the rearrangement history, but just focus on the species tree. So far I have tried MGR, MGRA and BADGER. MGR runs, but very slowly. The latter two programs just don't work for my case. Therefore I am here asking if anyone happens to know some better solutions. Thank you for your time reading this! Tags: phylogenetics, genomics • Can you try coding the rearrangements into 1/0 characters? Old-school cladistics uses this for all kinds of morphological characters. • I don't know if there is a way to code it in binary data, or I should have already solved it using RAxML... • It's maybe a little off-base, but there was a paper at ISMB last month which did something similar, but using FISH copy number data, and reconstructing human cancer phylogenies. More importantly, they built quite a specialised and highly efficient piece of software to do just that. I wonder if your problem might not be similar enough that you could adapt their method? http://bioinformatics.oxfordjournals.org/content/29/13/i189.full • Thanks for your suggestion! I browsed the program FISHtrees. Unfortunately it does not seem to be the type we are looking for. Genome rearrangement data is not alignable binary or multi-state data. It's something like: P1 +a -c -b $ +d +e +f $ P2 +d +e +b +c $ +a +f $ P3 +a -d $ -c -b +e -f $ ... • It looks to me like that would convert quite well to a binary matrix, with columns for genome regions, rows for species (or individuals), and each entry containing {1,0} to denote presence or absence of that region? • I'm afraid that isn't the case. It is the order of genes that matters, not the presence / absence of genes at each genomic locus. • Here's a paper where they have coded gene order, presence/absence as a matrix for baculoviruses. It sounds like what you are looking for: http://www.ncbi.nlm.nih.gov/pubmed/11483757 • Yes it is! Thanks for recommending this! I later found a couple of related articles, including the latest ones, such as: www.ncbi.nlm.nih.gov/pubmed/23424133
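For readers unfamiliar with the permutation format quoted above, here is a small illustrative Python parser of my own (a sketch, assuming the GRIMM/MGR-style convention in which `$` terminates a chromosome):

```python
def parse_permutation(line):
    """Parse e.g. 'P1 +a -c -b $ +d +e +f $' into (name, genome),
    where the genome is a list of chromosomes, each a list of signed genes."""
    name, *tokens = line.split()
    genome, chromosome = [], []
    for tok in tokens:
        if tok == "$":  # chromosome terminator
            genome.append(chromosome)
            chromosome = []
        else:           # signed gene, e.g. '+a' or '-c'
            sign = 1 if tok[0] == "+" else -1
            chromosome.append((tok[1:], sign))
    return name, genome

print(parse_permutation("P1 +a -c -b $ +d +e +f $"))
```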
2022-05-17 12:18:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33117055892944336, "perplexity": 2213.3459716408843}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662517245.1/warc/CC-MAIN-20220517095022-20220517125022-00146.warc.gz"}
https://people.maths.bris.ac.uk/~matyd/GroupNames/432/C2xC3%5E2s4S4.html
## G = C2×C3^2⋊4S4, order 432 = 2^4·3^3 ### Direct product of C2 and C3^2⋊4S4 Series: Derived Chief Lower central Upper central Derived series C1 — C2^2 — C3^2×A4 — C2×C3^2⋊4S4 Chief series C1 — C2^2 — C2×C6 — C6^2 — C3^2×A4 — C3^2⋊4S4 — C2×C3^2⋊4S4 Lower central C3^2×A4 — C2×C3^2⋊4S4 Upper central C1 — C2 Generators and relations for C2×C3^2⋊4S4 G = < a,b,c,d,e | a^2=b^6=c^6=d^3=e^2=1, ab=ba, ac=ca, ad=da, ae=ea, bc=cb, dbd^-1=b^4c^3, ebe=b^2c^3, dcd^-1=b^3c, ece=b^3c^2, ede=d^-1 > Subgroups: 3340 in 358 conjugacy classes, 71 normal (11 characteristic) C1, C2, C2, C3, C3, C4, C2^2, C2^2, S3, C6, C6, C2×C4, D4, C2^3, C2^3, C3^2, C3^2, Dic3, A4, D6, C2×C6, C2×C6, C2×D4, C3⋊S3, C3×C6, C3×C6, C2×Dic3, C3⋊D4, S4, C2×A4, C2^2×S3, C2^2×C6, C3^3, C3⋊Dic3, C3×A4, C2×C3⋊S3, C6^2, C6^2, C2×C3⋊D4, C2×S4, C3^3⋊C2, C3^2×C6, C2×C3⋊Dic3, C3^2⋊7D4, C3⋊S4, C6×A4, C2^2×C3⋊S3, C2×C6^2, C3^2×A4, C2×C3^3⋊C2, C2×C3^2⋊7D4, C2×C3⋊S4, C3^2⋊4S4, A4×C3×C6, C2×C3^2⋊4S4 Quotients: C1, C2, C2^2, S3, D6, C3⋊S3, S4, C2×C3⋊S3, C2×S4, C3^3⋊C2, C3⋊S4, C2×C3^3⋊C2, C2×C3⋊S4, C3^2⋊4S4, C2×C3^2⋊4S4

Smallest permutation representation of C2×C3^2⋊4S4 On 54 points Generators in S54 (1 7)(2 8)(3 9)(4 11)(5 12)(6 10)(13 17)(14 18)(15 16)(19 22)(20 23)(21 24)(25 28)(26 29)(27 30)(31 34)(32 35)(33 36)(37 40)(38 41)(39 42)(43 46)(44 47)(45 48)(49 52)(50 53)(51 54) (1 2 3)(4 5 6)(7 8 9)(10 11 12)(13 14 15)(16 17 18)(19 20 21 22 23 24)(25 26 27 28 29 30)(31 32 33 34 35 36)(37 38 39 40 41 42)(43 44 45 46 47 48)(49 50 51 52 53 54) (1 15 4 7 16 11)(2 13 5 8 17 12)(3 14 6 9 18 10)(19 26 44)(20 27 45)(21 28 46)(22 29 47)(23 30 48)(24 25 43)(31 53 39 34 50 42)(32 54 40 35 51 37)(33 49 41 36 52 38) (1 40 21)(2 38 19)(3 42 23)(4 51 46)(5 49 44)(6 53 48)(7 37 24)(8 41 22)(9 39 20)(10 50 45)(11 54 43)(12 52 47)(13 33 29)(14 31 27)(15 35 25)(16 32 28)(17 36 26)(18 34 30) (1 47)(2 43)(3 45)(4 22)(5 24)(6 20)(7 44)(8 46)(9 48)(10 23)(11 19)(12 21)(13 28)(14 30)(15 26)(16 29)(17 25)(18 27)(31 34)(32 33)(35 36)(37 49)(38 54)(39 53)(40 52)(41 51)(42 50)

G:=sub<Sym(54)| (1,7)(2,8)(3,9)(4,11)(5,12)(6,10)(13,17)(14,18)(15,16)(19,22)(20,23)(21,24)(25,28)(26,29)(27,30)(31,34)(32,35)(33,36)(37,40)(38,41)(39,42)(43,46)(44,47)(45,48)(49,52)(50,53)(51,54), (1,2,3)(4,5,6)(7,8,9)(10,11,12)(13,14,15)(16,17,18)(19,20,21,22,23,24)(25,26,27,28,29,30)(31,32,33,34,35,36)(37,38,39,40,41,42)(43,44,45,46,47,48)(49,50,51,52,53,54), (1,15,4,7,16,11)(2,13,5,8,17,12)(3,14,6,9,18,10)(19,26,44)(20,27,45)(21,28,46)(22,29,47)(23,30,48)(24,25,43)(31,53,39,34,50,42)(32,54,40,35,51,37)(33,49,41,36,52,38), (1,40,21)(2,38,19)(3,42,23)(4,51,46)(5,49,44)(6,53,48)(7,37,24)(8,41,22)(9,39,20)(10,50,45)(11,54,43)(12,52,47)(13,33,29)(14,31,27)(15,35,25)(16,32,28)(17,36,26)(18,34,30), (1,47)(2,43)(3,45)(4,22)(5,24)(6,20)(7,44)(8,46)(9,48)(10,23)(11,19)(12,21)(13,28)(14,30)(15,26)(16,29)(17,25)(18,27)(31,34)(32,33)(35,36)(37,49)(38,54)(39,53)(40,52)(41,51)(42,50)>;

G:=Group( (1,7)(2,8)(3,9)(4,11)(5,12)(6,10)(13,17)(14,18)(15,16)(19,22)(20,23)(21,24)(25,28)(26,29)(27,30)(31,34)(32,35)(33,36)(37,40)(38,41)(39,42)(43,46)(44,47)(45,48)(49,52)(50,53)(51,54), (1,2,3)(4,5,6)(7,8,9)(10,11,12)(13,14,15)(16,17,18)(19,20,21,22,23,24)(25,26,27,28,29,30)(31,32,33,34,35,36)(37,38,39,40,41,42)(43,44,45,46,47,48)(49,50,51,52,53,54), (1,15,4,7,16,11)(2,13,5,8,17,12)(3,14,6,9,18,10)(19,26,44)(20,27,45)(21,28,46)(22,29,47)(23,30,48)(24,25,43)(31,53,39,34,50,42)(32,54,40,35,51,37)(33,49,41,36,52,38), (1,40,21)(2,38,19)(3,42,23)(4,51,46)(5,49,44)(6,53,48)(7,37,24)(8,41,22)(9,39,20)(10,50,45)(11,54,43)(12,52,47)(13,33,29)(14,31,27)(15,35,25)(16,32,28)(17,36,26)(18,34,30), (1,47)(2,43)(3,45)(4,22)(5,24)(6,20)(7,44)(8,46)(9,48)(10,23)(11,19)(12,21)(13,28)(14,30)(15,26)(16,29)(17,25)(18,27)(31,34)(32,33)(35,36)(37,49)(38,54)(39,53)(40,52)(41,51)(42,50) );

G=PermutationGroup([[(1,7),(2,8),(3,9),(4,11),(5,12),(6,10),(13,17),(14,18),(15,16),(19,22),(20,23),(21,24),(25,28),(26,29),(27,30),(31,34),(32,35),(33,36),(37,40),(38,41),(39,42),(43,46),(44,47),(45,48),(49,52),(50,53),(51,54)], [(1,2,3),(4,5,6),(7,8,9),(10,11,12),(13,14,15),(16,17,18),(19,20,21,22,23,24),(25,26,27,28,29,30),(31,32,33,34,35,36),(37,38,39,40,41,42),(43,44,45,46,47,48),(49,50,51,52,53,54)], [(1,15,4,7,16,11),(2,13,5,8,17,12),(3,14,6,9,18,10),(19,26,44),(20,27,45),(21,28,46),(22,29,47),(23,30,48),(24,25,43),(31,53,39,34,50,42),(32,54,40,35,51,37),(33,49,41,36,52,38)], [(1,40,21),(2,38,19),(3,42,23),(4,51,46),(5,49,44),(6,53,48),(7,37,24),(8,41,22),(9,39,20),(10,50,45),(11,54,43),(12,52,47),(13,33,29),(14,31,27),(15,35,25),(16,32,28),(17,36,26),(18,34,30)], [(1,47),(2,43),(3,45),(4,22),(5,24),(6,20),(7,44),(8,46),(9,48),(10,23),(11,19),(12,21),(13,28),(14,30),(15,26),(16,29),(17,25),(18,27),(31,34),(32,33),(35,36),(37,49),(38,54),(39,53),(40,52),(41,51),(42,50)]])

42 conjugacy classes
class 1 2A 2B 2C 2D 2E 3A 3B 3C 3D 3E ··· 3M 4A 4B 6A 6B 6C 6D 6E ··· 6L 6M ··· 6U
order 1 2 2 2 2 2 3 3 3 3 3 ··· 3 4 4 6 6 6 6 6 ··· 6 6 ··· 6
size 1 1 3 3 54 54 2 2 2 2 8 ··· 8 54 54 2 2 2 2 6 ··· 6 8 ··· 8

42 irreducible representations
dim 1 1 1 2 2 2 2 3 3 6 6
type + + + + + + + + + + +
image C1 C2 C2 S3 S3 D6 D6 S4 C2×S4 C3⋊S4 C2×C3⋊S4
kernel C2×C3^2⋊4S4 C3^2⋊4S4 A4×C3×C6 C6×A4 C2×C6^2 C3×A4 C6^2 C3×C6 C3^2 C6 C3
# reps 1 2 1 12 1 12 1 2 2 4 4

Matrix representation of C2×C3^2⋊4S4 in GL7(ℤ), one matrix per generator, rows shown explicitly:
-1 0 0 0 0 0 0
0 -1 0 0 0 0 0
0 0 -1 0 0 0 0
0 0 0 -1 0 0 0
0 0 0 0 -1 0 0
0 0 0 0 0 -1 0
0 0 0 0 0 0 -1
,
0 -1 0 0 0 0 0
1 -1 0 0 0 0 0
0 0 -1 1 0 0 0
0 0 -1 0 0 0 0
0 0 0 0 -1 0 0
0 0 0 0 0 -1 0
0 0 0 0 0 0 1
,
0 -1 0 0 0 0 0
1 -1 0 0 0 0 0
0 0 0 -1 0 0 0
0 0 1 -1 0 0 0
0 0 0 0 -1 0 0
0 0 0 0 0 1 0
0 0 0 0 0 0 -1
,
0 -1 0 0 0 0 0
1 -1 0 0 0 0 0
0 0 1 0 0 0 0
0 0 0 1 0 0 0
0 0 0 0 0 0 1
0 0 0 0 1 0 0
0 0 0 0 0 1 0
,
1 0 0 0 0 0 0
1 -1 0 0 0 0 0
0 0 -1 0 0 0 0
0 0 -1 1 0 0 0
0 0 0 0 -1 0 0
0 0 0 0 0 0 -1
0 0 0 0 0 -1 0

G:=sub<GL(7,Integers())| [-1,0,0,0,0,0,0,0,-1,0,0,0,0,0,0,0,-1,0,0,0,0,0,0,0,-1,0,0,0,0,0,0,0,-1,0,0,0,0,0,0,0,-1,0,0,0,0,0,0,0,-1],[0,1,0,0,0,0,0,-1,-1,0,0,0,0,0,0,0,-1,-1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,-1,0,0,0,0,0,0,0,-1,0,0,0,0,0,0,0,1],[0,1,0,0,0,0,0,-1,-1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,-1,-1,0,0,0,0,0,0,0,-1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,-1],[0,1,0,0,0,0,0,-1,-1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,1,0,0],[1,1,0,0,0,0,0,0,-1,0,0,0,0,0,0,0,-1,-1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,-1,0,0,0,0,0,0,0,0,-1,0,0,0,0,0,-1,0] >;

C2×C3^2⋊4S4 in GAP, Magma, Sage, TeX C_2\times C_3^2\rtimes_4S_4 % in TeX G:=Group("C2xC3^2:4S4"); // GroupNames label G:=SmallGroup(432,762); // by ID G=gap.SmallGroup(432,762); # by ID G:=PCGroup([7,-2,-2,-3,-3,-3,-2,2,170,675,2524,9077,2287,5298,3989]); // Polycyclic G:=Group<a,b,c,d,e|a^2=b^6=c^6=d^3=e^2=1,a*b=b*a,a*c=c*a,a*d=d*a,a*e=e*a,b*c=c*b,d*b*d^-1=b^4*c^3,e*b*e=b^2*c^3,d*c*d^-1=b^3*c,e*c*e=b^3*c^2,e*d*e=d^-1>; // generators/relations
2021-12-02 06:54:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9982099533081055, "perplexity": 1412.8144494884234}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964361169.72/warc/CC-MAIN-20211202054457-20211202084457-00464.warc.gz"}
https://hiltmon.com/blog/2013/02/13/make-iterm-2-more-mac-like/
# Make iTerm 2 More Mac-like I just switched from using Apple’s built-in Terminal.app to the free iTerm 2 on a recommendation from Brett Terpstra. I’m already loving the hot-key profiles to launch uniquely colored remote sessions, the split panes, and the brilliant hotkey window (useful to run a single command and get rid of it). But there are a few things that needed some work. Some of the usual Mac editing keys did not work, I got rid of a few annoyances and added a few lovely preferences, and I needed the ability to create a new terminal window on the current space as part of my being productive with virtual desktops flow (Just like Browser Windows on All Desktops). ### Mac-like keys Since I’m not a vi or emacs pianist, I prefer standard Apple Cocoa Text bindings when editing the command line, so I set them up in iTerm 2’s Global Shortcut Keys in Preferences / Keys. The changes and settings are: • ⌥←: Go left one word (Send Escape Sequence | b) • ⌥→: Go right one word (Send Escape Sequence | f) • ⌘←: Go to start of line (Send Hex Code | 0x01) • ⌘→: Go to end of line (Send Hex Code | 0x05) • ⇧⌘↑: Up one page • ⇧⌘↓: Down one page • ⌃⌘↓: Down to bottom (not standard Cocoa, but I find it very useful when perusing real time rails logs) Thanks to Brett Terpstra for sharing some of these in his Option-arrow navigation in iTerm2 and Twitter. Note: I do not set the Left Option and Right Option keys in profiles to +ESC, I leave them as Normal (as per Scott Lee’s response in the comments). ### Annoyances and Preferences I don’t need to confirm quitting (Preferences / General): I love copy on select, it’s one less keystroke and I usually select with the mouse (Preferences / General): The red tabs were annoying, gone (Preferences / Appearance): Added the border around frames, I like this because my terminal background and screen backgrounds are both dark (Preferences / Appearance): And got rid of the bell icon and Growl notifications in all profiles (Preferences / Profiles / ** / Terminal): And lowered the line spacing to match Apple’s (Preferences / Profiles / ** / Text / Change Font) - just move the vertical back 1 notch: ### New iTerm 2 in Current Space While it’s nice to have the hotkey window, I often find myself working on Desktop 1 (Work) and need to jump to Desktop 2 (Alternate) to do some other stuff and leave a terminal running there. Like now, for example, I have a database migration running on Desktop 1 for Kifu and am blogging on Desktop 2, both of which require running iTerm 2 windows. If you hit ‘⌘N’ on iTerm 2 (or any other OS X app), OS X switches you to the app’s desktop, then creates a new window over there, not what I want. If you right-click on the dock and request a new window, it creates it on the current desktop. But I leave my dock hidden. I used to have ⌃⌘T mapped to do this for Terminal.app, so I created a Keyboard Maestro macro to do this for iTerm 2 instead: The first step, the AppleScript code, launches a new “Default” terminal, and it does so on the current space: The second step resizes the window so that it is 100 x 42 lines, my personally preferred terminal window size. So far, I am really enjoying the small touches that make iTerm 2 that much better than Terminal.app. Related Reading: Fast SSH Windows With iTerm 2
2018-06-23 13:53:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2797653079032898, "perplexity": 6698.924350769386}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267865081.23/warc/CC-MAIN-20180623132619-20180623152619-00233.warc.gz"}
https://www.physicsforums.com/threads/standard-model-feynman-diagrams.398232/
# Homework Help: Standard Model Feynman Diagrams 1. Apr 25, 2010 ### HairyFish I have spent a while trying to get to grips with the building blocks used for constructing Feynman diagrams; below is my attempt at a set of reactions. How am I doing so far? I don't think $$e^{+}e^{-}\rightarrow\mu^{+}\mu^{-}$$ can proceed via a gluon, since a gluon only couples to particles with a colour charge, but I am unsure. 2. Apr 25, 2010 ### vela Staff Emeritus I only looked at the top two so far, but they're both wrong. In the first one, you're changing the flavor of the neutrinos and the other leptons at each vertex, but the Z can't do that. The Z only couples to a particle and its antiparticle. In the second one, you have a photon coupling to a neutrino and a Z, neither of which has electric charge. Also, on the righthand side, you have four particles coming into a single vertex. You should only have three. 3. Apr 25, 2010 ### diazona There's a certain amount of intuition involved when drawing out these diagrams - for starters, you really have to know well which elementary vertices are allowed, and only work with those. In most of your diagrams I see vertices which aren't allowed, so I'm guessing you're new to learning this stuff? (It took me a while to get used to this too) The first thing I would suggest that you do when trying to draw the diagram for a reaction is figure out what kind of a process it is. For instance, any time neutrinos are involved, you know it's a weak interaction and there will most likely - no, wait, definitely - be a W boson exchanged. (Whenever you have a neutrino, it must go into or come out of a vertex that also has the corresponding lepton and a W boson.) Any time a quark changes flavor, or any time you produce (or destroy) two quarks with different flavors, again, there will be a W boson exchanged. As a matter of fact, weak interactions are the only ones that violate flavor conservation laws - that means they're the only ones that can turn a neutrino into a lepton or vice-versa, or that can turn a quark from one flavor into another, so they're often pretty easy to recognize. If it isn't a weak interaction, or if it is but you think there's more going on than that, next consider electromagnetic interactions. Any time you have a particle and its antiparticle annihilating, they produce a photon. (Well, unless they're different-colored quarks, then you get a gluon) Also, if you have elastic scattering of charged particles, a photon is responsible for that. Other than that, I guess you just have to get used to the rules... for what it's worth, the more practice like this you get, the better you'll know what is allowed and what isn't. 4. Apr 25, 2010 ### diazona I couldn't resist checking so: beyond what vela said, your $d\nu_e \to u e^-$ is wrong - remember what I said about neutrinos. $gg\to W^+W^-$ I'm not sure about... I think I might have figured out a diagram for it, but it's more complicated than what you've got. I had to make some assumptions about the colors of the gluons involved since they're not specified. Remember that gluons only interact with colored particles, and the W bosons carry no color charge. And finally, for $K^+\to\pi^0e^+\nu_e$, there is a diagram for that one. (Strangeness is indeed not conserved, so what kind of interaction is it?) The rest of them, as far as I can tell, seem right (or at least, I can't identify anything wrong with what you did). 5. Apr 26, 2010 ### HairyFish Thank you for taking the time to respond, it's really appreciated!
I am completely new to drawing Feynman diagrams and have spent ages trying to gather all the necessary rules/building blocks. I could not find a straightforward list of rules anywhere, and what you guys have said is the best I have come across! I have had another go at the ones you said I got wrong. Something I am unsure about is: does this mean you cannot have a W boson, an electron-neutrino and a positron all at the same vertex (as I have drawn in the second diagram, but with muon particles instead)? Also, I still don't know how to go about the bottom left one; should I be creating some quarks then destroying them, forming the W bosons? And the bottom right one is incomplete because I don't know how to form the pion, or even whether what I have done is the right idea. 6. Apr 26, 2010 ### vela Staff Emeritus That's what Diazona said you must have: a lepton, its corresponding neutrino, and the W. I don't see a contradiction between your drawing and what he said. Yes. Gluons only couple to quarks, and quarks will couple to the W. You can probably draw it as a box diagram. You'll have a square loop with quarks along the edges, and the gluons and W connected at the corners. You have the right idea. The strange antiquark turns into an up antiquark when it emits the W. The up quark and antiquark then join to form the pion. 7. Apr 26, 2010 ### diazona Yep, your second drawing looks correct. Keep in mind that in Feynman diagrams, the only difference between an electron and a positron (or generally, between any particle and its antiparticle) is which way the line points. So if you had a vertex involving an electron neutrino, the associated lepton could be either an electron or a positron depending on how the vertex is oriented. Similarly, the W boson could have either charge depending on the orientation.
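The bookkeeping the thread keeps coming back to, that charge and lepton number must balance at every vertex, is easy to mechanize. Here is a toy Python sketch of my own (not from the thread, and it only checks these two necessary conditions, not whether the coupling itself exists in the Standard Model):

```python
# (electric charge, electron lepton number) per particle; antiparticles negate both
PARTICLES = {
    "e-": (-1, 1), "e+": (1, -1),
    "nu_e": (0, 1), "nubar_e": (0, -1),
    "W-": (-1, 0), "W+": (1, 0),
    "Z": (0, 0), "photon": (0, 0),
}

def vertex_ok(incoming, outgoing):
    """True if charge and electron lepton number balance at the vertex."""
    def totals(names):
        return (sum(PARTICLES[n][0] for n in names),
                sum(PARTICLES[n][1] for n in names))
    return totals(incoming) == totals(outgoing)

print(vertex_ok(["e-"], ["nu_e", "W-"]))  # True: the allowed lepton-neutrino-W vertex
print(vertex_ok(["nu_e"], ["e-", "Z"]))   # False: the Z cannot change flavour
```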
2018-06-22 11:28:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.705467939376831, "perplexity": 455.4703257327114}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864391.61/warc/CC-MAIN-20180622104200-20180622124200-00237.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/elementary-and-intermediate-algebra-concepts-and-applications-6th-edition/chapter-5-polynomials-and-factoring-5-3-factoring-trinomials-of-the-type-ax2-bx-c-5-3-exercise-set-page-326/47
## Elementary and Intermediate Algebra: Concepts & Applications (6th Edition) $(y+4)(2y-1)$ Splitting the middle term of $2y^2+7y-4$ as $7y=8y-y$, then grouping the first and second terms and the third and fourth terms, the given expression is equivalent to \begin{array}{l}\require{cancel} 2y^2+8y-y-4 \\\\= (2y^2+8y)-(y+4) .\end{array} Factoring the $GCF$ in each group results in \begin{array}{l}\require{cancel} 2y(y+4)-(y+4) .\end{array} Factoring the $GCF= (y+4)$ of the entire expression above results in \begin{array}{l}\require{cancel} (y+4)(2y-1) .\end{array}
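A one-line check with SymPy (illustrative, not part of the printed solution):

```python
import sympy as sp

y = sp.symbols("y")
print(sp.factor(2*y**2 + 7*y - 4))  # (y + 4)*(2*y - 1), possibly printed in another order
```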
2018-06-25 00:34:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9441630244255066, "perplexity": 3413.2276137073127}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267867304.92/warc/CC-MAIN-20180624234721-20180625014721-00315.warc.gz"}
https://dash.harvard.edu/handle/1/3345930/browse?authority=c527449e0c575e1a9292856e0c61b8af&type=author
• #### Accounting Data, Market Values, and the Cross Section of Expected Returns Worldwide  (2015-06-08) Under fairly general assumptions, expected stock returns are a linear combination of two accounting fundamentals―book to market and ROE. Empirical estimates based on this relation predict the cross section of out-of-sample ... • #### Boardroom Centrality and Firm Performance  (2013) Firms with central or well-connected boards of directors earn superior risk-adjusted stock returns. Initiating a long position in the most central firms and a short position in the least central firms earns an average ... • #### The Cross Section of Expected Holding Period Returns and Their Dynamics: A Present Value Approach  (Elsevier, 2015) We provide a tractable model of firm-level expected holding period returns using two firm fundamentals—book-to-market ratio and ROE—and study the cross-sectional properties of the model-implied expected returns. We find ... • #### Economic Uncertainty and Earnings Management  (2016-03-30) In the presence of managerial short-termism and asymmetric information about skill and effort provision, firms may opportunistically shift earnings from uncertain to more certain times. We document that firms report more ... • #### Evaluating Firm-Level Expected-Return Proxies  (2014-11-06) We develop and implement a rigorous analytical framework for empirically evaluating the relative performance of firm-level expected-return proxies (ERPs). We show that superior proxies should closely track true expected ... • #### Golden Parachutes and the Wealth of Shareholders  (2014) Golden parachutes (GPs) have attracted substantial attention from investors and public officials for more than two decades. We find that GPs are associated with higher expected acquisition premiums and that this association ... • #### Governance through Shame and Aspiration: Index Creation and Corporate Behavior in Japan  (2017-09-08) We study how a stock index can affect corporate behavior by serving as a source of prestige. After decades of low corporate profitability in Japan, the JPX-Nikkei400 index was introduced in 2014. The index selected 400 ... • #### How Do Staggered Boards Affect Shareholder Value? Evidence from a Natural Experiment  (Elsevier, 2013) The well-established negative correlation between staggered boards (SBs) and firm value could be due to SBs leading to lower value or a reflection of low-value firms' greater propensity to maintain SBs. We analyze the ... • #### Measurement Errors of Expected Returns Proxies and the Implied Cost of Capital  (2013-05-22) This paper presents a methodology to study implied cost of capital's (ICC) measurement errors, which are relatively unstudied empirically despite ICCs' popularity as proxies of expected returns. By applying it to the popular ... • #### Reexamining staggered boards and shareholder value  (Elsevier BV, 2017) Cohen and Wang (2013) (CW2013) provide evidence consistent with market participants perceiving staggered boards to be value reducing. Amihud and Stoyanov (2016) (AS2016) contests these findings, reporting some specifications ... • #### Relative Performance Benchmarks: Do Boards Follow the Informativeness Principle?  (2017-03-23) Relative TSR (rTSR) is increasingly used by market participants to judge and incentivize managerial performance. We evaluate the efficacy, reasons, and implications of firms' benchmarks in rTSR-based contracts. Although ... • #### The Search for Benchmarks: When Do Crowds Provide Wisdom?
(2014-11-06) We compare the performance of a comprehensive set of alternative peer identification schemes used in economic benchmarking. Our results show the peer firms identified from aggregation of informed agents' revealed choices ... • #### Search-Based Peer Firms: Aggregating Investor Perceptions Through Internet Co-Searches  (Elsevier, 2015) Applying a "co-search" algorithm to Internet traffic at the SEC's EDGAR website, we develop a novel method for identifying economically-related peer firms and for measuring their relative importance. Our results show that ... • #### Short-Termism and Capital Flows  (2017-01-18) During the period 2005-2014, S&P 500 firms distributed to shareholders more than $3.95 trillion via stock buybacks and $2.45 trillion via dividends, $6.4 trillion in total. These shareholder payouts amounted to over 93% of ... • #### Staggered Boards and Shareholder Value: A Reply to Amihud and Stoyanov  (2016-03-03) In a paper published in the JFE in 2013, we provided evidence that market participants perceive staggered boards to be on average value-reducing. In a recent response paper, Amihud and Stoyanov (2015) “contest” our results. ...
2020-04-05 10:56:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29878339171409607, "perplexity": 12509.853765054811}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371576284.74/warc/CC-MAIN-20200405084121-20200405114121-00516.warc.gz"}
https://www.physicsforums.com/threads/temperature-change.880881/
# Temperature change

1. Aug 3, 2016

1. The problem statement, all variables and given/known data

In this question, it's not stated whether the temperature changes from 20°C to -20°C or from -20°C to 20°C. I'm confused...

2. Relevant equations

3. The attempt at a solution

I think it should be changing from 20°C to -20°C, so delta T = (-20 - 20) = -40°C. Am I right?

2. Aug 3, 2016

### drvrm

The limiting stress is given at -20 degrees, so if you use it one will have to go to that temperature; therefore the temperature change is of 40 degrees.

3. Aug 3, 2016

Do you mean the temperature goes from -20 to 20? If so, the temperature change is -40°C. Am I right?

4. Aug 3, 2016

### drvrm

The rod was kept at 20 degrees and from there it has been moved to -20 degrees, so the change in temperature is in two parts: +20 to zero and zero to -20 degrees. So the total change is of 40 degrees (the change in temperature is a number; the negative and positive signs of a temperature only show its position on a scale). In both parts of the above change, the stress is increasing and going to add up to the limit given. If you wish to say that the change is -40 degrees and put the negative sign in your relation for the limiting stress, one will make an error, as it will denote a reduction... which may not be the physical situation!

5. Aug 3, 2016

### Staff: Mentor

Yes, you are right, the temperature change is -40 degrees. Would that tend to make the rod shorter if it were free to contract? So, to keep the rod the same length, would the stress in the rod have to increase or decrease?

6. Aug 3, 2016

The rod would be shorter if it were free to contract; the stress in the rod has to increase to keep the rod the same length.

7. Aug 3, 2016

### Staff: Mentor

So, from our famous equation $\sigma=E(\epsilon-\alpha \Delta T)$, if $\epsilon = 5000/(AE)$ and $\Delta T = -40$, what do you get for $\sigma$?

8. Aug 3, 2016

OK, solved.

9. Aug 3, 2016

I don't really understand: how could $\epsilon_s$ be positive? $\epsilon_s$ is the strain? So if the steel rod contracts, shouldn't $\epsilon_s$ be negative?

10. Aug 3, 2016

### Staff: Mentor

The steel rod was stretched initially and then, when it was cooled, it was not allowed to contract. So its strain remained positive.
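As an editorial aside, here is the mentor's equation worked through numerically in a short Python sketch. Only the 5000 N load and $\Delta T = -40$ come from the thread; the area, modulus, and expansion coefficient are assumed typical values for a steel rod, since the problem's attachment is not reproduced here.

```python
# Illustrative numbers only: A, E, and alpha are typical steel values assumed
# for this sketch; the actual figures are in the thread's attachment.
A = 500e-6       # cross-sectional area, m^2 (assumed)
E = 200e9        # Young's modulus of steel, Pa (typical)
alpha = 11.7e-6  # thermal expansion coefficient of steel, 1/K (typical)
P = 5000.0       # axial load, N (from the thread)
dT = -40.0       # temperature change, K (20 C down to -20 C)

eps = P / (A * E)               # mechanical strain from the 5000 N load
sigma = E * (eps - alpha * dT)  # sigma = E * (eps - alpha * dT)
print(f"sigma = {sigma / 1e6:.1f} MPa")  # cooling adds E*alpha*40 of tension
```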
2017-12-12 07:13:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44593411684036255, "perplexity": 1302.583614010829}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948515309.5/warc/CC-MAIN-20171212060515-20171212080515-00537.warc.gz"}
http://math.stackexchange.com/questions/679882/multiplication-on-a-kg-n
# Multiplication on a K(G,n)

Suppose that, given an abelian group $G$, there is a multiplication map $\mu:K(G,n)\times K(G,n) \to K(G,n)$ defined such that the induced map on the homotopy group $\mu_*:\pi_n(K(G,n) \times K(G,n)) \to \pi_n(K(G,n))$ takes $(g_1,g_2)$ to $g_1 + g_2$, where $+$ is the operation on $G$. Does it follow that this multiplication is homotopy-commutative; that is, if $t:K(G,n) \times K(G,n) \to K(G,n) \times K(G,n)$ switches coordinates, does it follow that $\mu t$ is homotopic to $\mu$?

Since $G$ is commutative, it seems that $\mu$ should be, but I'm having a hard time coming up with the actual homotopy. I know that it is NOT true in general that if two maps induce the same homomorphisms on homotopy groups, then they are homotopic. One could also ask if the fact that the operation on $G$ is associative implies that $\mu$ is homotopy-associative.

-

Yes, it follows that $\mu t$ is homotopic to $\mu$. In general we have the following result:

Lemma: Suppose that $G$ and $H$ are abelian groups and $f,g: K(G,n) \to K(H,n)$ are maps. Then $f_* = g_*: \pi_n(K(G,n))=G \to \pi_n(K(H,n))=H$ if and only if $f$ is homotopic to $g$.

This follows easily from the following:

Lemma: Let $X$ be an $(n-1)$-connected CW-complex with $\pi_1(X)$ abelian. Then the map $\eta: H^n(X;G) \to \text{Hom}(\pi_n(X),G)$ given by $\eta[f] = f_*: \pi_n(X) \to \pi_n(K(G,n)) = G$ is an isomorphism.

Here we use that $H^n(X;G) \cong [X, K(G,n)]$, where $[\ ,\ ]$ denotes homotopy classes of maps. A reference for this last lemma is Arkowitz, Introduction to Homotopy Theory; it is Lemma 2.5.13.

- To apply your lemma are you asserting that $K(G,n)\times K(G,n)$ is a $K(H,n)$? – Kevin Carlson Feb 17 at 21:33
- @Kevin Carlson: In general $\pi_n(X \times Y) \cong \pi_n(X) \times \pi_n(Y)$. In particular, for Eilenberg-Mac Lane spaces we get that $K(G,n) \times K(H,n)$ has the homotopy type of a $K(G \times H,n)$. – R. Frankhuizen Feb 17 at 21:41
- Of course. Thanks. – Kevin Carlson Feb 17 at 21:42
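To spell out the step the answer leaves implicit (an editorial addition): by the comment above, $K(G,n)\times K(G,n)$ has the homotopy type of $K(G\times G,n)$, so $\mu$ and $\mu t$ are maps between Eilenberg-Mac Lane spaces and the first lemma applies. Commutativity of $G$ then gives

$$(\mu t)_*(g_1,g_2) = \mu_*(g_2,g_1) = g_2 + g_1 = g_1 + g_2 = \mu_*(g_1,g_2),$$

hence $\mu t \simeq \mu$. The same argument, comparing $\mu(\mu\times\mathrm{id})$ and $\mu(\mathrm{id}\times\mu)$ on $K(G\times G\times G,n)$, derives homotopy-associativity from associativity of $+$.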
2014-11-23 11:10:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9827533960342407, "perplexity": 99.55694545343368}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400379462.60/warc/CC-MAIN-20141119123259-00253-ip-10-235-23-156.ec2.internal.warc.gz"}
https://www.r-bloggers.com/2020/09/the-fastest-way-to-read-and-writes-file-in-r/
## Compare Read and Write File Times

When we are dealing with large datasets and need to write many csv files, or when the csv file that we have to read is huge, the speed of the read and write commands matters. We will compare the time required to write and read files with the base, readr, and data.table packages.

## Compare the Write times

We will work with a csv file of 1M rows and 10 columns, which is approximately 180MB. Let's create the sample data frame and write it to the hard disk. We will generate 10M observations from the Normal distribution.

```r
library(data.table)
library(readr)
library(microbenchmark)
library(ggplot2)

# create a 1M x 10 data frame
my_df <- data.frame(matrix(rnorm(1000000 * 10), 1000000, 10))

# base
system.time({ write.csv(my_df, "base.csv", row.names = FALSE) })
# readr
system.time({ write_csv(my_df, "readr.csv") })
# data.table
system.time({ fwrite(my_df, "datatable.csv") })
```

As we can see from the elapsed times, fwrite from data.table is ~70 times faster than the base package and ~7 times faster than readr.

Let's also compare the read times using the microbenchmark package.

```r
tm <- microbenchmark(
  base       = read.csv("datatable.csv"),
  readr      = read_csv("datatable.csv"),
  data.table = fread("datatable.csv"),
  times = 10L
)
tm
autoplot(tm)
```

As we can see, again fread from the data.table package is around 40 times faster than the base package and 8.5 times faster than read_csv from the readr package.

## Conclusion

If you want to read and write files fast, then you should choose the data.table package.
2020-10-29 14:53:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4899069368839264, "perplexity": 2176.6746710807934}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107904287.88/warc/CC-MAIN-20201029124628-20201029154628-00601.warc.gz"}
https://ask.sagemath.org/answers/52270/revisions/
# Revision history

Here is one possible way to work. Let $A$ be a ring such that $2$ is invertible, for example $\Bbb Z/3$ or $\Bbb Z/9$. We denote by $\frac 12$ its inverse. Assume $2$ is not a square in $A$. (This is the case in the above examples.) Consider $A[a]$ to be the ring $A[Y]/(Y^2-2)$, where $A[Y]$ is the polynomial ring in $Y$ over $A$, and $a$ is $Y$ modulo the ideal generated by $Y^2-2$; so formally, $a=\sqrt2$.

Now observe that the equation satisfied by the $x$ element in the OP, $(x+a)^2=2(x+a)$, can be written as $$(x+a)(x+a-2)=0\ .$$ The two factors are relatively prime; in fact $$\frac 12(x+a)-\frac 12(x+a-2) = 1\ .$$ This means that the ring $$R=A[a][X]\ /\ (\ (X+a)(X+a-2)\ )$$ is isomorphic to two copies of $A[a]$ via the map $$R\to A[a]\times A[a]\ ,\ f(X)\to(f(-a), f(2-a))\ .$$ The inverse map takes $(s,t)\in A[a]\times A[a]$ and maps it to $\frac 12t(X+a)-\frac 12s(X+a-2)$.

Now we have to solve $Z^2=1$ in the ring $A[a]\times A[a]$. This can also be done manually. An element of the shape $Z=u+va\in A[a]$, $u,v\in A$, satisfies $Z^2 = 1$ iff $(u^2+2v^2)+2auv=1$. In case $A$ has zero divisors we have to take care of $uv=0$ somehow. The possible values for $u,v$ that may lead to a solution satisfy at any rate $u^3=u$ and $2v^3=v$. Together with $Z$, its "conjugate" $\bar Z=u-va$ is also a solution, and the "norm" $N(Z)=Z\bar Z=(u+va)(u-va)=u^2-2v^2$ is $1$. So it is a good idea to search for solutions of this "Pell equation" over $A$.

Let us now write some lines of code for the given case. The brute force search is:

```
r = Zmod(9)
R.<a,x> = PolynomialRing(r, order='lex')
J = R.ideal([a^2 - 2, (x+a)^2 - 2*(x+a)])
S = R.quotient(J)
for [s, t, u] in cartesian_product([r, r, r]):
    Z = S(s + t*a + u*x)
    if Z^2 == S(1):
        print(f"Z = {s} + {t} a + {u} x")
```

Results:

```
Z = 1 + 0 a + 0 x
Z = 1 + 8 a + 8 x
Z = 8 + 0 a + 0 x
Z = 8 + 1 a + 1 x
```

This fits with the situation of finding all points $Z=(Z_1,Z_2)$ over the ring $R=A[a]=\Bbb Z/9[a]=\Bbb Z/9[\sqrt 2]$ with $Z^2=(1,1)=1_R$.

```
sage: r = Integers(9)
sage: R.<Y> = r[]
sage: Q.<a> = R.quotient(Y^2-2)
sage: a^2
2
sage: for r1, r2 in cartesian_product([r, r]):
....:     Z1 = r1 + r2*a
....:     if Z1^2 == Q(1):
....:         print(Z1)
....:
1
8
```

These are the only Hensel lifts of the corresponding units over $\Bbb Z/3$:

```
sage: U.<A> = PolynomialRing(GF(3))
sage: F.<a> = GF(3^2, modulus = A^2-2)
sage: a^2
2
sage: [ f for f in F if f^2 == F(1) ]
[2, 1]
sage: # of course, this is a field...
```
2021-09-24 03:22:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8971624970436096, "perplexity": 380.9588927737633}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057496.18/warc/CC-MAIN-20210924020020-20210924050020-00674.warc.gz"}
http://openstudy.com/updates/5096a8d0e4b0d0275a3cec39
## henpen

Why the absolute value signs in $\int \frac{1}{x} dx=\ln( | x| )+c$?

1. henpen: What's wrong with complex numbers?
2. cdelomas: How would i factor the expression
3. henpen: What?
4. cdelomas: what
5. henpen: What?
6. cdelomas: how would i factor the expression
7. cdelomas: in the question
8. henpen: What do you mean? Factorisation is unnecessary here.
9. henpen: $\int \frac{1}{x} dx=\ln( | x| )+c$, as the LaTeX in the question isn't working for me.
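An editorial note on the mathematics, since the thread never answers the question: on the negative half-line, the chain rule gives

$$\frac{d}{dx}\ln(-x) = \frac{-1}{-x} = \frac{1}{x} \qquad (x<0),$$

so on each connected component of the domain,

$$\int \frac{dx}{x} = \begin{cases} \ln(x)+c_1, & x>0,\\ \ln(-x)+c_2, & x<0, \end{cases}$$

which is abbreviated $\ln|x|+c$. As for the complex-number point: a branch of the complex logarithm gives $\ln(x)=\ln|x|+i\pi$ for $x<0$, and the constant $i\pi$ can be absorbed into $c$; the convention $\ln|x|$ simply keeps the antiderivative real-valued.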
2014-10-25 21:22:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8463413715362549, "perplexity": 10755.496232365398}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119650516.39/warc/CC-MAIN-20141024030050-00124-ip-10-16-133-185.ec2.internal.warc.gz"}
http://en.wikipedia.org/wiki/Life_table
Life table

[Figure: 2003 US mortality table, Table 1, Page 1]

In actuarial science and demography, a life table (also called a mortality table or actuarial table) is a table which shows, for each age, what the probability is that a person of that age will die before his or her next birthday ("probability of death"). From this starting point, a number of inferences can be derived. Life tables are also used extensively in biology and epidemiology. The concept is also of importance in product life cycle management.

Background

There are two types of life tables:

• Period or static life tables show the current probability of death (for people of different ages, in the current year).
• Cohort life tables show the probability of death of people from a given cohort (especially birth year) over the course of their lifetime.

Static life tables sample individuals assuming a stationary population with overlapping generations. Static life tables and cohort life tables will be identical if the population is in equilibrium and the environment does not change. "Life table" primarily refers to period life tables, as cohort life tables can only be constructed using data up to the current point, and distant projections for future mortality.

Life tables can be constructed using projections of future mortality rates, but more often they are a snapshot of age-specific mortality rates in the recent past, and do not necessarily purport to be projections. For these reasons, the older ages represented in a life table may have a greater chance of not being representative of what lives at these ages may experience in future, as it is predicated on current advances in medicine, public health, and safety standards that did not exist in the early years of this cohort.

Life tables are usually constructed separately for men and for women because of their substantially different mortality rates. Other characteristics can also be used to distinguish different risks, such as smoking status, occupation, and socioeconomic class.

Life tables can be extended to include other information in addition to mortality, for instance health information to calculate health expectancy. Health expectancies such as disability-adjusted life year and Healthy Life Years are the remaining number of years a person can expect to live in a specific health state, such as free of disability. Two types of life tables are used to divide the life expectancy into life spent in various states:

• Multi-state life tables (also known as increment-decrement life tables) are based on transition rates in and out of the different states and to death.
• Prevalence-based life tables (also known as the Sullivan method) are based on external information on the proportion in each state.

Life tables can also be extended to show life expectancies in different labor force states or marital status states.

Insurance applications

In order to price insurance products, and ensure the solvency of insurance companies through adequate reserves, actuaries must develop projections of future insured events (such as death, sickness, and disability). To do this, actuaries develop mathematical models of the rates and timing of the events.
They do this by studying the incidence of these events in the recent past, and sometimes developing expectations of how these past events will change over time (for example, whether the progressive reductions in mortality rates in the past will continue) and deriving expected rates of such events in the future, usually based on the age or other relevant characteristics of the population. These are called mortality tables if they show death rates, and morbidity tables if they show various types of sickness or disability rates.

The availability of computers and the proliferation of data gathering about individuals has made possible calculations that are more voluminous and intensive than those used in the past (i.e. they crunch more numbers), and it is more common to attempt to provide different tables for different uses, and to factor in a range of non-traditional behaviors (e.g. gambling, debt load) into specialized calculations utilized by some institutions for evaluating risk. This is particularly the case in non-life insurance (e.g. the pricing of motor insurance can allow for a large number of risk factors, which requires a correspondingly complex table of expected claim rates). However, the expression "life table" normally refers to human survival rates and is not relevant to non-life insurance.

The mathematics

[Figure: ${}_tp_x$ chart from Table 1, Life table for the total population: United States, 2003, page 8]

The basic algebra used in life tables is as follows.

• $q_x$: the probability that someone aged exactly $x$ will die before reaching age $x+1$.
• $p_x$: the probability that someone aged exactly $x$ will survive to age $x+1$: $p_x = 1-q_x$.
• $l_x$: the number of people who survive to age $x$ (note that this is based on a radix,[1] or starting point, of $l_0$ lives, typically taken as 100,000): $l_{x + 1} = l_x \cdot (1-q_x) = l_x \cdot p_x$, so ${l_{x + 1} \over l_x} = p_x$.
• $d_x$: the number of people who die aged $x$ last birthday: $d_x = l_x-l_{x+1} = l_x \cdot (1-p_x) = l_x \cdot q_x$.
• ${}_tp_x$: the probability that someone aged exactly $x$ will survive for $t$ more years, i.e. live up to at least age $x+t$ years: ${}_tp_x = {l_{x+t} \over l_x}$.
• ${}_{t|k}q_x$: the probability that someone aged exactly $x$ will survive for $t$ more years, then die within the following $k$ years: ${}_{t|k}q_x = {}_t p_x \cdot {}_k q_{x+t} = {l_{x+t} - l_{x+t+k} \over l_x}$.
• $\mu_x$: the force of mortality, i.e. the instantaneous mortality rate at age $x$: the number of people dying in a short interval starting at age $x$, divided by $l_x$ and also divided by the length of the interval.

Another common variable is:

• $m_x$: the central rate of mortality. It is approximately equal to the average force of mortality, averaged over the year of age.

Ending a Mortality Table

In practice, it is useful to have an ultimate age associated with a mortality table. Once the ultimate age is reached, the mortality rate is assumed to be 1.000. This age may be the point at which life insurance benefits are paid to a survivor or annuity payments cease. Four methods can be used to end mortality tables:[2]

• The Forced Method: Select an ultimate age and set the mortality rate at that age equal to 1.000 without any changes to other mortality rates. This creates a discontinuity at the ultimate age compared to the penultimate and prior ages.
• The Blended Method: Select an ultimate age and blend the rates from some earlier age to dovetail smoothly into 1.000 at the ultimate age.
• The Pattern Method: Let the pattern of mortality continue until the rate approaches or hits 1.000 and set that as the ultimate age.
• The Less-Than-One Method: This is a variation on the Forced Method. The ultimate mortality rate is set equal to the expected mortality at a selected ultimate age, rather than 1.000 as in the Forced Method. This rate will be less than 1.000.

Epidemiology

In epidemiology and public health, both standard life tables to calculate life expectancy and Sullivan and multistate life tables to calculate health expectancy are commonly used. The latter include information on health in addition to mortality.
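As a worked illustration of the recurrences in the algebra section above, here is a minimal Python sketch; the mortality rates $q_x$ are invented for the example (a real table would use published rates), and the final rate of 1.000 mirrors the Forced Method just described.

```python
# Minimal sketch of the life-table recurrences, with made-up mortality rates.
q = [0.01, 0.012, 0.015, 0.02, 1.0]  # q_x for ages x = 0..4; last age forced to 1.0
radix = 100_000                      # starting population l_0

l = [radix]                          # l_{x+1} = l_x * (1 - q_x)
for qx in q:
    l.append(l[-1] * (1 - qx))

d = [l[x] - l[x + 1] for x in range(len(q))]  # d_x = l_x - l_{x+1}

def tpx(t, x):
    """Probability someone aged exactly x survives t more years: l_{x+t} / l_x."""
    return l[x + t] / l[x]

print(l)          # survivors at each age
print(d)          # deaths at each age
print(tpx(2, 1))  # 2-year survival probability from age 1
```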
2014-03-07 22:21:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 26, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.51247239112854, "perplexity": 1413.2798083270832}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999651529/warc/CC-MAIN-20140305060731-00056-ip-10-183-142-35.ec2.internal.warc.gz"}
http://electrochemical.asmedigitalcollection.asme.org/article.aspx?articleid=1472043
Research Papers

# Impact of the Temperature Profile on Thermal Stress in a Tubular Solid Oxide Fuel Cell

Author and Article Information: Katharina Fischer, Institute of Turbomachinery and Fluid Dynamics, Leibniz Universität Hannover, Appelstrasse 9, D-30167 Hannover, Germany, k.fischer@tfd.uni-hannover.de; Joerg R. Seume, Institute of Turbomachinery and Fluid Dynamics, Leibniz Universität Hannover, Appelstrasse 9, D-30167 Hannover, Germany, seume@tfd.uni-hannover.de

J. Fuel Cell Sci. Technol 6(1), 011017 (Nov 12, 2008) (9 pages) doi:10.1115/1.2971132. History: Received June 14, 2007; Revised May 10, 2008; Published November 12, 2008.

## Abstract

The knowledge of the stress distribution in the ceramic components of a solid oxide fuel cell (SOFC) is a prerequisite for assessing the risk of failure due to crack formation as well as for predicting its durability. Due to the high temperature span associated with thermal cycles, high thermal gradients, and the mismatch of thermal and mechanical properties of the ceramic components, thermomechanical stress is of particular importance in SOFC. A finite-element mechanical model of a tubular SOFC is developed and combined with a 2D thermo-electrochemical model in order to provide realistic temperature profiles to the finite-element analysis of the ceramic SOFC membrane-electrode assembly (MEA). The resulting simulation tool is employed for three different analyses: In the first analysis, temperature profiles provided by the thermo-electrochemical model are used to show the impact of direct versus indirect internal reformation of methane on thermomechanical stress in the MEA. In order to clarify the contribution of temperature level and thermal gradients to the emergence of stress, the second analysis systematically investigates the stress distribution with assumed temperature profiles. In the third analysis, particular attention is given to the influence of thermal model accuracy on the results. For this purpose, three modeling cases are provided: (i) Heat sources resulting from the anodic and cathodic half-reactions are considered separately in thermal modeling. (ii) According to a frequently used simplification in SOFC modeling, all heat released by the reaction of hydrogen and oxygen is assigned to the anode/electrolyte interface. (iii) The temperature profile is averaged in the radial direction. The results reveal a strong dependence of thermomechanical stress on the methane reforming strategy, which confirms the importance of a careful control of operating conditions. The effect of temperature level on maximum tensile thermomechanical stress is found to dominate by one order of magnitude over that of typical thermal gradients occurring in the SOFC during operation. In contrast to the high relevance commonly ascribed to thermal gradients, the results show that in the tubular SOFC thermal gradients play only a minor role for the emergence of stress. Concerning model accuracy, the separate consideration of half-reactions at the electrodes is found to be not necessary, while the results clearly emphasize the importance of radially discretized thermal modeling for the model-based prediction of thermal stress.

## Figures

Figure 1: Geometry and dimensions of the tubular SOFC (adopted from Ref. 13)
Figure 2: Longitudinal distributions of reactants, temperature, and thermal gradients for operation on completely reformed (left) and partially reformed natural gas (right)
Figure 3: Distribution of thermomechanical stress for operation on completely (a) and partially (b) reformed natural gas
Figure 4: Implementation of the longitudinal thermal gradient (case (v)) in the finite-element mechanical SOFC model
Figure 5: Distribution of thermomechanical stress in the MEA subject to a homogeneous temperature T=298 K (i), a radial thermal gradient δT/δr=500 K/m (iii) or δT/δr=−5000 K/m (iv), and a longitudinal thermal gradient δT/δz=500 K/m (v)
Figure 6: Maximum tensile stress caused by the assumed temperature profiles and in a real operating point
Figure 7: Radial temperature profiles calculated with (solid line) and without (dashed line) a separate inclusion of the single-electrode reactions compared with the radial mean temperature (dotted line)
Figure 8: Radial temperature gradients in an internally reforming SOFC calculated with (i) and without (ii) a separate consideration of the single-electrode reactions
2017-07-27 04:33:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.334673672914505, "perplexity": 3270.09095168247}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549427429.9/warc/CC-MAIN-20170727042127-20170727062127-00676.warc.gz"}
http://soft-matter.seas.harvard.edu/index.php?title=Percolation_Model_for_Slow_Dynamics_in_Glass-Forming_Materials&diff=next&oldid=15929&printable=yes
# Percolation Model for Slow Dynamics in Glass-Forming Materials

Glassy systems exhibit several unique properties. During a glass transition, the structural relaxation time increases by several orders of magnitude. Also, the structural correlations display an anomalous stretched-exponential time decay: $\exp[-(t/\tau_{\alpha})^{\beta}]$, where $\beta$ is called the stretching exponent, and $\tau_{\alpha}$ is called the $\alpha$-relaxation time.
2020-09-19 13:38:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7086316347122192, "perplexity": 2209.2641775590623}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400191780.21/warc/CC-MAIN-20200919110805-20200919140805-00377.warc.gz"}
https://juliagraphics.github.io/Gtk.jl/latest/manual/layout/
# Layout

You will usually want to add more than one widget to your application. To this end, Gtk provides several layout widgets. Instead of using precise positioning, the Gtk layout widgets take an approach where widgets are aligned in boxes and tables.

Note: While doing the layout using Julia code is possible for small examples, it is in general advised to instead create the layout using Glade in combination with GtkBuilder (see Builder and Glade).

## Box

The most simple layout widget is the GtkBox. It can be either horizontally or vertically aligned. It allows adding an arbitrary number of widgets.

```julia
win = GtkWindow("New title")
hbox = GtkBox(:h)  # :h makes a horizontal layout, :v a vertical layout
push!(win, hbox)
cancel = GtkButton("Cancel")
ok = GtkButton("OK")
push!(hbox, cancel)
push!(hbox, ok)
showall(win)
```

We can address individual "slots" in this container:

```julia
julia> length(hbox)
2

julia> get_gtk_property(hbox[1], :label, String)
"Cancel"

julia> get_gtk_property(hbox[2], :label, String)
"OK"
```

This layout may not be exactly what you'd like. Perhaps you'd like the ok button to fill the available space, and to insert some blank space between them:

```julia
set_gtk_property!(hbox, :expand, ok, true)
set_gtk_property!(hbox, :spacing, 10)
```

The first line sets the expand property of the ok button within the hbox container. Note that these aren't evenly-sized, and that doesn't change if we set the cancel button's expand property to true. GtkButtonBox is created specifically for this purpose, so let's use it instead:

```julia
destroy(hbox)
ok = GtkButton("OK")
cancel = GtkButton("Cancel")
hbox = GtkButtonBox(:h)
push!(win, hbox)
push!(hbox, cancel)
push!(hbox, ok)
showall(win)
```

Now we get something which may be closer to what you had in mind.

## Grid

More generally, you can arrange items in a grid:

```julia
win = GtkWindow("A new window")
g = GtkGrid()
a = GtkEntry()  # a widget for entering text
set_gtk_property!(a, :text, "This is Gtk!")
b = GtkCheckButton("Check me!")
c = GtkScale(false, 0:10)  # a slider

# Now let's place these graphical elements into the Grid:
g[1, 1] = a      # Cartesian coordinates, g[x, y]
g[2, 1] = b
g[1:2, 2] = c    # spans both columns
set_gtk_property!(g, :column_homogeneous, true)
set_gtk_property!(g, :column_spacing, 15)  # introduce a 15-pixel gap between columns
push!(win, g)
showall(win)
```

The `g[x,y] = obj` assigns the location to the indicated x,y positions in the grid (note that indexing is Cartesian rather than row/column; most graphics packages address the screen using Cartesian coordinates where 0,0 is in the upper left). A range is used to indicate a span of grid cells. By default, each row/column will use only as much space as required to contain the objects, but you can force them to be of the same size by setting properties like column_homogeneous.
2023-03-26 06:42:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3752390146255493, "perplexity": 5003.395207039762}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945433.92/warc/CC-MAIN-20230326044821-20230326074821-00031.warc.gz"}
https://stats.libretexts.org/Bookshelves/Introductory_Statistics/Book%3A_Introductory_Statistics_(Lane)/3%3A_Summarizing_Distributions/3.04%3A_Balance_Scale_Simulation
# 3.4: Balance Scale Simulation

Skills to Develop

• Understand what it means for a distribution to balance on a fulcrum
• Learn which measure of central tendency will balance a distribution

## Instructions

This demonstration allows you to change the shape of a distribution and see the point at which the distribution would balance. The graph in the right panel is a histogram of $600$ scores. The mean and median are equal to $8$ and are indicated by small vertical bars on the $X$-axis. The top portion of the bar is in blue and represents the mean. The bottom portion is in pink and represents the median. The mean and median are also shown to the left of the $Y$-axis.

You can see that the histogram is balanced on the tip of the triangle (the fulcrum). You can change the shape of the histogram by painting with the mouse. Notice that the triangle beneath the $X$-axis automatically moves to the point where the histogram is balanced. Experiment with different shapes and see if you can determine whether there is a relationship between the mean, median, and/or the mode and the location of the balance point.

## Illustrated Instructions

Below is a screenshot of the simulation's beginning screen. Note that the distribution is balanced on the fulcrum. The mean and median are shown to the left and also as small vertical bars below the $X$-axis. The mean is in blue and the median is in pink. The next figure illustrates this more clearly.

Figure $\PageIndex{1}$: Beginning Screen of the Simulation

You can change the distribution by painting with the mouse when running the simulation. Below is an example of the distribution after it has been changed. Note that the mean and median are marked by vertical lines.

Figure $\PageIndex{2}$: Screen of the Simulation after change

## Contributor

• Online Statistics Education: A Multimedia Course of Study (http://onlinestatbook.com/). Project Leader: David M. Lane, Rice University.
2019-09-17 05:00:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4608023464679718, "perplexity": 492.7222171087676}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573052.26/warc/CC-MAIN-20190917040727-20190917062727-00315.warc.gz"}
https://lw2.issarice.com/posts/iyRpsScBa6y4rduEt/model-combination-and-adjustment
post by lukeprog · 2013-07-17T20:31:08.687Z · 37 comments

The debate on the proper use of inside and outside views has raged for some time now. I suggest a way forward, building on a family of methods commonly used in statistics and machine learning to address this issue — an approach I'll call "model combination and adjustment."

Inside and outside views: a quick review

1. There are two ways you might predict outcomes for a phenomenon. If you make your predictions using a detailed visualization of how something works, you're using an inside view. If instead you ignore the details of how something works, and instead make your predictions by assuming that a phenomenon will behave roughly like other similar phenomena, you're using an outside view (also called reference class forecasting).

Inside view examples:

• "When I break the project into steps and visualize how long each step will take, it looks like the project will take 6 weeks"
• "When I combine what I know of physics and computation, it looks like the serial speed formulation of Moore's Law will break down around 2005, because we haven't been able to scale down energy-use-per-computation as quickly as we've scaled up computations per second, which means the serial speed formulation of Moore's Law will run into roadblocks from energy consumption and heat dissipation somewhere around 2005."

Outside view examples:

• "I'm going to ignore the details of this project, and instead compare my project to similar projects. Other projects like this have taken 3 months, so that's probably about how long my project will take."
• "The serial speed formulation of Moore's Law has held up for several decades, through several different physical architectures, so it'll probably continue to hold through the next shift in physical architectures."

See also chapter 23 in Kahneman (2011); Planning Fallacy; Reference class forecasting. Note that, after several decades of past success, the serial speed formulation of Moore's Law did in fact break down in 2004 for the reasons described (Fuller & Millett 2011).

2. An outside view works best when using a reference class with a similar causal structure to the thing you're trying to predict. An inside view works best when a phenomenon's causal structure is well-understood, and when (to your knowledge) there are very few phenomena with a similar causal structure that you can use to predict things about the phenomenon you're investigating. See: The Outside View's Domain.

When writing a textbook that's much like other textbooks, you're probably best off predicting the cost and duration of the project by looking at similar textbook-writing projects. When you're predicting the trajectory of the serial speed formulation of Moore's Law, or predicting which spaceship designs will successfully land humans on the moon for the first time, you're probably best off using an (intensely informed) inside view.

3. Some things aren't very predictable with either an outside view or an inside view. Sometimes, the thing you're trying to predict seems to have a significantly different causal structure than other things, and you don't understand its causal structure very well. What should we do in such cases? This remains a matter of debate.
Eliezer Yudkowsky recommends a weak inside view for such cases:

On problems that are drawn from a barrel of causally similar problems, where human optimism runs rampant and unforeseen troubles are common, the Outside View beats the Inside View... [But] on problems that are new things under the Sun, where there's a huge change of context and a structural change in underlying causal forces, the Outside View also fails - try to use it, and you'll just get into arguments about what is the proper domain of "similar historical cases" or what conclusions can be drawn therefrom. In this case, the best we can do is use the Weak Inside View — visualizing the causal process — to produce loose qualitative conclusions about only those issues where there seems to be lopsided support.

In contrast, Robin Hanson recommends an outside view for difficult cases:

It is easy, way too easy, to generate new mechanisms, accounts, theories, and abstractions. To see if such things are useful, we need to vet them, and that is easiest "nearby", where we know a lot. When we want to deal with or understand things "far", where we know little, we have little choice other than to rely on mechanisms, theories, and concepts that have worked well near. Far is just the wrong place to try new things. There are a bazillion possible abstractions we could apply to the world. For each abstraction, the question is not whether one can divide up the world that way, but whether it "carves nature at its joints", giving useful insight not easily gained via other abstractions. We should be wary of inventing new abstractions just to make sense of things far; we should insist they first show their value nearby.

In Yudkowsky (2013), sec. 2.1, Yudkowsky offers a reply to these paragraphs, and continues to advocate for a weak inside view. He also adds:

the other major problem I have with the "outside view" is that everyone who uses it seems to come up with a different reference class and a different answer.

This is the problem of "reference class tennis": each participant in the debate claims their own reference class is most appropriate for predicting the phenomenon under discussion, and if disagreement remains, they might each say "I'm taking my reference class and going home."

Responding to the same point made elsewhere, Robin Hanson wrote:

[Earlier, I] warned against over-reliance on "unvetted" abstractions. I wasn't at all trying to claim there is one true analogy and all others are false. Instead, I argue for preferring to rely on abstractions, including categories and similarity maps, that have been found useful by a substantial intellectual community working on related problems.

Multiple reference classes

Yudkowsky (2013) adds one more complaint about reference class forecasting in difficult forecasting circumstances:

A final problem I have with many cases of 'reference class forecasting' is that... [the] final answers [generated from this process] often seem more specific than I think our state of knowledge should allow. [For example,] I don't think you should be able to tell me that the next major growth mode will have a doubling time of between a month and a year. The alleged outside viewer claims to know too much, once they stake their all on a single preferred reference class.

Both this comment and Hanson's last comment above point to the vulnerability of relying on any single reference class, at least for difficult forecasting problems. Beware brittle arguments, says Paul Christiano.
One obvious solution is to use multiple reference classes, and weight them by how relevant you think they are to the phenomenon you're trying to predict. Holden Karnofsky writes of investigating things from "many different angles." Jonah Sinick refers to "many weak arguments." Statisticians call this "model combination." Machine learning researchers call it "ensemble learning" or "classifier combination."

In other words, we can use many outside views. Nate Silver does this when he predicts elections (see Silver 2012, ch. 2). Venture capitalists do this when they evaluate startups. The best political forecasters studied in Tetlock (2005), the "foxes," tended to do this.

In fact, most of us do this regularly. How do you predict which restaurant's food you'll most enjoy, when visiting San Francisco for the first time? One outside view comes from the restaurant's Yelp reviews. Another outside view comes from your friend Jade's opinion. Another outside view comes from the fact that you usually enjoy Asian cuisines more than other cuisines. And so on. Then you combine these different models of the situation, weighting them by how robustly they each tend to predict your eating enjoyment, and you grab a taxi to Osha Thai. (Technical note: I say "model combination" rather than "model averaging" on purpose.)

You can probably do even better than this, though — if you know some things about the phenomenon and you're very careful. Once you've combined a handful of models to arrive at a qualitative or quantitative judgment, you should still be able to "adjust" the judgment in some cases using an inside view. For example, suppose I used the above process, and I plan to visit Osha Thai for dinner. Then, somebody gives me my first taste of the Synsepalum dulcificum fruit. I happen to know that this fruit contains a molecule called miraculin which binds to one's tastebuds and makes sour foods taste sweet, and that this effect lasts for about an hour (Koizumi et al. 2011). Despite the results of my earlier model combination, I predict I won't particularly enjoy Osha Thai at the moment. Instead, I decide to try some tabasco sauce, to see whether it now tastes like doughnut glaze.

In some cases, you might also need to adjust for your prior over, say, "expected enjoyment of restaurant food," if for some reason your original model combination procedure didn't capture your prior properly.

Against "the outside view"

There is a lot more to say about model combination and adjustment (e.g. this), but for now let me make a suggestion about language usage. Sometimes, small changes to our language can help us think more accurately. For example, gender-neutral language can reduce male bias in our associations (Stahlberg et al. 2007). In this spirit, I recommend we retire the phrase "the outside view...", and instead use phrases like "some outside views..." and "an outside view..." My reasons are:

1. Speaking of "the" outside view privileges a particular reference class, which could make us overconfident of that particular model's predictions, and leave model uncertainty unaccounted for.

2. Speaking of "the" outside view can act as a conversation-stopper, whereas speaking of multiple outside views encourages further discussion about how much weight each model should be given, and what each of them implies about the phenomenon under discussion.
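To make the restaurant example above concrete, here is a toy sketch of the combination step. This is an editorial illustration, not code from the post, and every number in it is invented.

```python
# Toy "model combination": three outside views of how much you'd enjoy a
# restaurant, each weighted by how well it has predicted your enjoyment before.
views = {
    "yelp_reviews":    (8.0, 0.5),  # (predicted enjoyment out of 10, weight)
    "friends_opinion": (9.0, 0.3),
    "cuisine_prior":   (7.0, 0.2),
}

total_weight = sum(w for _, w in views.values())
combined = sum(pred * w for pred, w in views.values()) / total_weight
print(combined)  # 8.1 -- a judgment you can still adjust with an inside view
```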
comment by MalcolmOcean (malcolmocean) · 2013-07-14T06:08:45.167Z

I appreciate the snippets from EY's papers, which I don't read, because it's interesting to know what he's writing about more formally. I found the review mostly seemed like stuff I already know, although in returning to it I noticed that it did contain some new terminology around reference classes. But this:

For example, gender-neutral language can reduce male bias in our associations (Stahlberg et al. 2007). In this spirit, I recommend we retire the phrase "the outside view..", and instead use phrases like "some outside views..." and "an outside view..."

Is really good. I mean, along with the general recommendation to use multiple reference classes. I guess my point is that the article is made possibly twice as awesome by the inclusion of this part, as it dramatically increases the probability that this will catch on memetically.

comment by Technoguyrob · 2013-07-21T22:44:37.810Z

In the mathematical theory of Galois representations, a choice of algebraic closure of the rationals and an embedding of this algebraic closure in the complex numbers (e.g. section 5) is usually necessary to frame the background setting, but I never hear "the algebraic closure" or "the embedding," instead "an algebraic closure" and "an embedding." Thus I never forget that a choice has to be made and that this choice is not necessarily obvious. This is an example from mathematics where careful language is helpful in tracking background assumptions.

comment by shminux · 2013-08-20T19:22:04.049Z

This is an example from mathematics where careful language is helpful in tracking background assumptions.

I wonder how mathematicians speaking article-free languages deal with it, given that they lack a non-cumbersome linguistic construct to express this potential ambiguity.

comment by lukeprog · 2013-08-20T18:49:53.330Z

Thanks for sharing this.

comment by Kaj_Sotala · 2013-07-14T06:27:13.690Z

Thank you for highlighting that passage - by the time the text got to that point, I had already decided that this article wasn't telling me anything new and had started skimming, and missed that paragraph as a result. It is indeed very good.

comment by [deleted] · 2013-07-14T05:17:19.995Z

It's an interesting exercise to look for the Bayes structure in this (and other) advice. At least I find it helpful to tie things down to the underlying theory. Otherwise I find it easy to misinterpret things. Good article.

comment by lukeprog · 2013-07-14T05:33:59.549Z

It's an interesting exercise to look for the Bayes structure in this (and other) advice.

Yup! Practical advice is best when it's backed by deep theories. Monteith et al. (2011) (linked in the OP) is an interesting read on the subject. They discuss a puzzle: why does the theoretically optimal Bayesian method for dealing with multiple models (that is, Bayesian model averaging) tend to underperform ad-hoc methods (e.g. "bagging" and "boosting") in empirical tests? It turns out that "Bayesian model averaging struggles in practice because it accounts for uncertainty about which model is correct but still operates under the assumption that only one of them is."
The solution is simply to modify the Bayesian model averaging process so that it integrates over combinations of models rather than over individual models. (They call this Bayesian model combination, to distinguish it from "normal" Bayesian model averaging.) In their tests, Bayesian model combination beats out bagging, boosting, and "normal" Bayesian model averaging.

comment by [deleted] · 2013-07-14T15:54:55.329Z

Bayesian model averaging struggles in practice because it accounts for uncertainty about which model is correct but still operates under the assumption that only one of them is.

Wait, what? That sounds significant. What does more than one model being correct mean? Speculation before I read the paper: I guess that's like modelling a process as the superposition of sub-processes? That would give the model more degrees of freedom with which to fit the data. Would we expect that to do strictly better than the mutual exclusion assumption, or does it require more data to overcome the degrees of freedom?

• If a single theory is correct, the mutex assumption will update toward it faster by giving it a higher prior, and the probability-distribution-over-averages would get there slower, but still assigns a substantial prior to theories close to the true one.
• On the other hand, if a combination is a better model, either because the true process is a superposition, or we are modelling something outside of our model-space, then a combination will be better able to express it. So the mutex assumption will be forced to put all weight on a bad nearby theory, effectively updating in the wrong direction, whereas the combination won't lose as much because it contains more accurate models.

I wonder if the combination assumption will beat the mutex assumption at every step? Also interesting to note that the mutex assumption is a subset of the model space of the combination assumption, so if you are unsure which is correct, you can just add more weight to the mutex models in the combination prior and use that. Now I'll read the paper. Let's see how I did.

comment by [deleted] · 2013-07-14T17:16:52.598Z

Yup. Exactly what I thought.

when the Data Generating Model (DGM) is not one of the component models in the ensemble, BMA tends to converge to the model closest to the DGM rather than to the combination closest to the DGM [9]. He also empirically noted that, in the cases he studied, when the DGM is not one of the component models of an ensemble, there usually existed a combination of models that could more closely replicate the behavior of the DGM than could any individual model on their own.

Versus my

if a combination is a better model, either because the true process is a superposition, or we are modelling something outside of our model-space, then a combination will be better able to express it. So the mutex assumption will be forced to put all weight on a bad nearby theory,

comment by fractalman · 2013-07-14T20:45:19.081Z

"What does more than one model being correct mean?"

Maybe something like string theory? The 5 lesser theories look totally different... and then turn out to transform into one another when you fiddle with the coupling constant.

comment by [deleted] · 2013-07-14T22:56:21.335Z

Seeing the words "string" and "fiddle" on top of each other primed me to think of their literal meanings, which I wouldn't otherwise have consciously thought of.
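As an editorial illustration of the point in this exchange (an invented toy, not the experiment from Monteith et al. 2011): when the data-generating process is itself a blend of the candidate models, a weighted combination fits it far better than committing all weight to the single closest model.

```python
# Invented toy: the true process is a 50/50 blend of two candidate models,
# so picking the single closest model (the failure mode quoted above) fits
# far worse than weighting a combination of the two.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
m1, m2 = x, x**2                                          # candidate models
y = 0.5 * m1 + 0.5 * m2 + rng.normal(0.0, 0.01, x.size)  # data: blend + noise

def mse(pred):
    return float(np.mean((y - pred) ** 2))

best_single = min(mse(m1), mse(m2))            # commit to one model
best_combo = min(mse(w * m1 + (1 - w) * m2)    # weight a combination
                 for w in np.linspace(0.0, 1.0, 101))

print(best_single, best_combo)  # the combination's error is far smaller
```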
comment by CronoDAS · 2013-07-15T07:41:25.326Z

"Bayesian model averaging struggles in practice because it accounts for uncertainty about which model is correct but still operates under the assumption that only one of them is."

Perhaps they should say "the assumption that exactly one model is perfectly correct"?

comment by elharo · 2013-07-16T11:00:19.811Z

Note that, after several decades of past success, the serial speed formulation of Moore's Law did in fact break down in 2004 for the reasons described (Fuller & Millett 2011).

That sounds like hindsight bias. Was this breakdown predicted in advance? Furthermore, was it predicted in advance and accepted as such by the community? If a few experts predicted this and a few others predicted it wouldn't happen, I wouldn't classify that as a success for the inside view; unless:

A) pretty much all experts who took the inside view predicted this
B) the experts had not been predicting the same thing in the past and been proven wrong

Personally I watched this happen, but I don't think there was any consensus or expectation in the broader community of software developers that the serial speed formulation of Moore's Law was going to fail until it did start to fail. Perhaps people who worked on the actual silicon hardware had different expectations?

comment by [deleted] · 2013-07-27T18:19:34.286Z

I lived through this transition too, as a software developer. At least in my circles it was obvious that serial processing speeds would hit a wall, that the wall would be somewhere in the single-digit GHz, and that from then on scaling would be added concurrency. The reason is very simple to explain: we were approaching physical limits at human scale.

A typical chip today is ~2 GHz. The speed of light is a fundamental limiter in electronics, and in one cycle of a 2 GHz chip light moves only 15 cm. Ideally that's still one order of magnitude of breathing room, but in reality there are unavoidable complicating factors: electricity doesn't move across gates at the speed of light, circuits are not straight lines, etc. If you want a chip that measures on the order of 1-2 cm in size, then you are fundamentally limited to device operations in the single-digit GHz range.

Singularity technology like molecular nanotechnology promises faster computation through smaller devices. A molecular computer the size of a present-day computer chip would offer tremendous serial speedups for core sizes a micrometer or smaller in size, but device-wide operations would still be limited to ~2 GHz. Getting around that would require getting around Einstein.

comment by lukeprog · 2013-07-27T16:04:38.994Z

The ITRS reports have long been the standard forecasts in the field, kind of like the IPCC reports but for semiconductors. The earliest one available online (2000) warned about (e.g.) serial speed slowdown due to problems with Dennard scaling (identified in 1974), though it's a complicated process to think through what lessons should be drawn from the early ITRS reports. I'm having a remote researcher look through this for another project, and will try to remember to report back here when that analysis is done.
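A quick editorial check of the 15 cm figure in the speed-of-light comment above:

```python
# Distance light travels during one clock cycle of a 2 GHz chip.
c = 3.0e8  # speed of light, m/s (rounded)
f = 2.0e9  # clock frequency, Hz
print(c / f)  # 0.15 m, i.e. 15 cm per cycle, as the comment says
```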
That project has been on hold for close to a month, the reason being that we wanted to focus on other things until we could get hold of the 1994, 1997 and 1999 editions of the ITRS roadmap report that it should be possible to order through the ITRS website. However, we never heard back from whoever monitors the email address that you're supposed to send the order form to, nor were we able to reach anyone else willing to sell us the documents... I am happy to report on what I have found so far, though. Main findings: • What I understand to be the main bottlenecks encountered in performance scaling (three different kinds of leakage current that become less manageable as transistors get smaller) were anticipated well in advance of actually becoming critical, and the time frames that were given for this were quite accurate. • The ITRS reports flagged these issues as being part of the "Red Brick Wall", a name given to a collection of known challenges to future device scaling that had "no known manufacturable solutions". It was well understood, then, that some aspects of device and performance scaling were in danger of hitting a wall somewhere in 2003-2005. While the ITRS reports and other sources warned that this might happen, I have seen no examples of anybody predicting that it would. • The 2001-2005 reports contain projected values for on-chip frequency and power supply voltage ($V_{dd}$) that were, with the benefit of hindsight, highly overoptimistic, and those same tables became dramatically more pessimistic in the 2007 edition. It must be noted, however, that the ITRS reports state that such projected values are meant as targets rather than as predictions. I am not sure to what extent this can be taken as a defence of these overly optimistic projections. • An explanation given in the 2007 edition for the pessimistic corrections made to the on-chip frequency forecasts gives the impression that earlier editions made a puzzling oversight. I may well be missing something here, as this point seems very surprising. • Such aspects as stated in the 2 previous points have left me feeling fairly puzzled about how accurately the ITRS reports can be said to have anticipated the "2004 breakdown". I had just started contacting industry insiders when Luke instructed me to pause the project. While the replies I received confirmed that my understanding of the technical issues was on the right track, none have given any clear answer to my requests to help me make sense of these puzzling aspects of the ITRS reports. The reasons for the breakdown as I currently understand them Three types of leakage current have become serious issues in transistor scaling. They are subthreshold leakage, gate oxide leakage and junction leakage. All of them create serious challenges to further reducing the size of transistors. They also render further frequency scaling at historical rates impracticable. One question I have not yet been able to answer is to what extent subthreshold leakage may play a more important role than the two other kinds as far as limits to performance scaling are concerned. Here is an image of a MOSFET, a Metal-Oxide-Semiconductor Field-Effect Transistor, the kind of transistor used in microprocessors since 1970. (The image is from Cambridge University.) The way it's supposed to work is: When the transistor is off, no current flows.
When it is on, current flows from the "Source" (the white region marked "n+" on the left) to the "Drain" (the white region marked "n+" on the right), along a thin layer underneath the "Oxide Layer" called the "Channel" or "Inversion Layer". No other current is supposed to flow within the transistor. The way the current is allowed to pass through is by applying a positive voltage to the gate electrode, which creates an electric field across the oxide layer. This field repels holes (positive charges) and attracts electrons in the region marked "p-type substrate", with the result of forming an electron-conducting channel between the source and the drain. Ideally, the current through this channel is supposed to start flowing precisely when the voltage on the gate electrode reaches the value $V_{th}$, for threshold voltage. (Note that the kind of MOS transistor shown above is an "nMOS"; current integrated circuits combine nMOS with "pMOS" transistors, which are essentially the complement of nMOS in terms of positive and negative charges/p-type and n-type doping. This technology combining nMOS and pMOS transistors is known as "CMOS", where the C stands for Complementary.) In reality, current leaks. Subthreshold leakage flows from the source to the drain when the voltage applied to the gate electrode is lower than the threshold voltage, i.e. when the transistor is supposed to be off. Gate oxide leakage flows from the gate electrode into the body (through the oxide layer). Junction leakage flows from the source and from the drain into the body (through the source-body and the drain-body junctions). Gate oxide leakage and junction leakage are (mainly? entirely?) a matter of charges tunnelling through the ever thinner oxide layer or junction, respectively. Subthreshold leakage does not depend on quantum effects and has been appreciable for much longer than the other two types of leakage, although it has only started to become unmanageable around 2004. It can be thought of as an issue of electrons spilling over the energy barrier formed by the (lightly doped) body-substrate between the (highly doped) source and drain regions. The higher the temperature of the device, the higher the energy distribution of the electrons; so even if the energy barrier is higher than the average energy of the electrons, those electrons in the upper tail of the distribution will spill over. The height of this energy barrier is closely related to the threshold voltage, and so the amount of leakage current depends heavily on this voltage, increasing by about a factor of 10 each time the threshold voltage drops by another 100 mV. Increasing the threshold voltage thus reduces leakage power, but it also makes the gates slower, because the number of electrons that can flow through the channel is roughly proportional to the difference between supply voltage and threshold voltage. Historically, threshold voltages were so high that it was possible to scale the supply voltage, the threshold voltage, and the channel length together without subthreshold leakage becoming an issue. This concurrent scaling of supply voltage and linear dimensions is an essential aspect of Dennard scaling, which historically made it possible to increase the number and speed of transistors exponentially without increasing overall energy consumption. But eventually the leakage started affecting overall chip power. For this reason, the threshold voltage and hence the supply voltage could no longer be scaled down as previously.
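The trade-off described here and in the next paragraph can be summarized by two standard textbook relations, stated here for reference (the roughly 100 mV/decade subthreshold slope is the figure quoted above; $\alpha$ is the switching activity factor and $C$ the switched capacitance):

$$I_{\mathrm{sub}} \propto 10^{-V_{th}/S}, \qquad S \approx 100\ \mathrm{mV/decade}, \qquad P_{\mathrm{dyn}} \approx \alpha\, C\, V_{dd}^{2}\, f.$$

Lowering $V_{th}$ makes the gates faster but raises the leakage current exponentially, while the dynamic power grows with the square of the supply voltage and linearly with the clock frequency, which is exactly the bind described next.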
Since, however, the power it takes to switch a gate is the product of the switched capacitance and the square of the supply voltage, and the overall power dissipation is the product of this with the clock frequency, something had to give as the supply voltage no longer scaled as previously. This issue now severely limits the potential to further increase the clock frequency. Gate oxide leakage, from what I understand, also has a direct impact on transistor performance, as the current from source to drain is related to the capacitance of the gate oxide, which is related to its area and inversely related to its thickness. Historically, the reduction in thickness compensated for the reduction in area as the device was scaled down in every generation, making it possible to maintain and even improve performance. As further reductions of the gate oxide thickness have become impossible due to excessive tunnelling, the industry has resorted to using higher-k materials for the gate oxide, i.e. material with a higher dielectric constant $\kappa$, as an alternative way of increasing the gate oxide capacitance (k and $\kappa$ are often used interchangeably in this context). However, recent roadmaps state that the success of this approach is already proving insufficient. Overall, I believe it is pertinent to say that all kinds of leakage negatively affect performance by simply reducing the available usable power and by causing excessive heat dissipation, placing higher demands on packaging. Two (gated) papers that describe these leakage issues in more detail (and there is a lot more detail to it) can be found here and here. (cont'd) comment by SebNickel · 2013-09-04T09:42:23.987Z · score: 5 (5 votes) · LW(p) · GW(p) (cont'd from previous comment) Did the industry predict these problems and their consequences? People in the industry were well aware of these limitations, long before they actually became critical. However, whether solutions and workarounds would be found was a matter of much greater uncertainty. Robert Dennard et al's seminal 1974 paper Design of Ion-Implanted MOSFETs with Very Small Physical Dimensions, which described the very favourable scaling properties of MOSFET transistors and gave rise to the term "Dennard scaling", explicitly mentions the scaling limitations posed by subthreshold leakage: One area in which the device characteristics fail to scale is in the subthreshold or weak inversion region of the turn-on characteristic. (…) In order to design devices for operation at room temperature and above, one must accept the fact that the subthreshold behavior does not scale as desired. This nonscaling property of the subthreshold characteristic is of particular concern to miniature dynamic memory circuits which require low source-to-drain leakage currents. This influential 1995 paper by Davari, Dennard and Shahidi presents guidelines for transistor scaling for the years up to 2004. This paper contains a subsection titled "Performance/Power Tradeoff and Nonscalability of the Threshold Voltage", which explains the problems described above in a lot more detail than I have. The paper also mentions tunnelling through the gate oxide layer, concluding on both issues that they would remain relatively unproblematic up until 2004. Subthreshold leakage is textbook material. The main textbook I have consulted is Digital Integrated Circuits by Jan Rabaey, and I have compared some aspects of the 1995 and 2003 editions.
Both contained this sentence in the "History of…" chapter: Interestingly enough, power consumption concerns are rapidly becoming dominant in CMOS design as well, and this time there does not seem to be a new technology around the corner to alleviate the problem. Regarding gate oxide leakage, this comparatively very accessible 2007 article from the IEEE Spectrum recounts the story of how engineers at Intel and elsewhere have developed transistors that use high-$\kappa$ dielectrics as a way of maintaining the shrinking gate oxide's capacitance even as its thickness would cease to be reduced to prevent excessive tunnelling. According to this article, work on such solutions began in the mid-1990s, and Intel eventually launched new chips that made use of this technology in 2007. The main impression this article leaves me with is that the problem was very easy to foresee, but that finding out which solutions might work was a matter of extensive tinkering with highly unpredictable results. The leakage issues are all mentioned in the 2001 ITRS roadmap, the earliest edition that is available online. One example from the Executive Summary: For low power logic (mainly for portable applications), the main issue is low leakage current, which is absolutely necessary in order to extend battery life. Device performance is then maximized according to the low leakage current requirements. Gate leakage current must be controlled, as well as sub-threshold leakage and junction leakage, including band-to-band tunneling. Preliminary analysis indicates that, balancing the gate leakage control requirements against performance requirements, high $\kappa$ may be required for the gate dielectric by around the year 2005. From reading the reports, it is hard to make out whether the implications of these issues were correctly understood, and I have had to draw on a lot of other literature to get a better sense of where the industry stood on this. Getting a hold of earlier editions (the 1999 one in particular) and talking to industry insiders might shed a lot more light on the weight that was given to the different issues that were flagged as part of the "Red Brick Wall" I've mentioned above, i.e. as issues that had no known manufacturable solutions (I did not receive an answer to my inquiry about this from the contact person at the ITRS website). The Executive Summary of the 2001 edition states: The 1999 ITRS warned that there was a wide range of solutions needed but unavailable to meet the technology requirements corresponding to 100 nm technology node.
The reports clearly state, however, that these numbers are meant as targets and are not necessarily "on the road to sure implementation", especially where it has been highlighted that solutions were needed and not yet known. They can therefore not necessarily serve as a clear indictment of the ITRS' predictive powers, but I remain puzzled by some of their projections and comments on these before 2007. Getting clarification on this from industry insiders was the next thing I had planned for this project before we paused it. Specifically, tables 4c and 4d in the Overall Roadmap Technology Characteristics, found in a subsection of the Executive Summary titled Performance of Packaged Chips, contain on-chip frequency forecasts in MHz, which became dramatically more pessimistic in 2007 than they had been in the previous 3 editions. A footnote in the 2007 edition states: after 2007, the PIDS model fundamental reduction rate of ~ -14.7% for the transistor delay results in an individual transistor frequency performance rate increase of ~17.2% per year growth. In the 2005 roadmap, the trend of the on-chip frequency was also increased at the same rate of the maximum transistor performance through 2022. Although the 17% transistor performance trend target is continued in the PIDS TWG outlook, the Design TWG has revised the long-range on-chip frequency trend to be only about 8% growth rate per year. This is to reflect recent on-chip frequency slowing trends and anticipated speed-power design tradeoffs to manage a maximum 200 watts/chip affordable power management tradeoff. Later editions seem to have reduced the expected scaling factor even further (1.04 in the 2011 edition), but there were also changes made to the metric employed, so I am not sure how to interpret the numbers (though I would expect the scaling factor to be unaffected by those changes). Relatedly, a paragraph in the System Drivers document titled Maximum on-chip (global) clock frequency states that the on-chip clock frequency would not continue scaling at a factor of 2 per generation for several reasons. The 2001 edition states 3 reasons for this, the 2003 and 2005 editions state 4. But only in 2007 was the limitation from maximum allowable power dissipation added to this list of reasons. This strikes me as very puzzling. The paragraph, as it appears in the 2007 edition, is (emphasis added): Maximum on-chip (global) clock frequency—(...) Through the 2000 ITRS, the MPU maximum on-chip clock frequency was modeled to increase by a factor of 2 per generation. Of this, approximately 1.4× was historically realized by device scaling (17%/year improvement in CV/I metric); the other 1.4× was obtained by reduction in number of logic stages in a pipeline stage (e.g., equivalent of 32 fanout-of-4 inverter (FO4 INV) delays at 180 nm, going to 24–26 FO4 INV delays at 130 nm). As noted in the 2001 ITRS, there are several reasons why this historical trend could not continue: 1) well-formed clock pulses cannot be generated with period below 6–8 FO4 INV delays; 2) there is increased overhead (diminishing returns) in pipelining (2–3 FO4 INV delays per flip-flop, 1–1.5 FO4 INV delays per pulse-mode latch); 3) thermal envelopes imposed by affordable packaging discourage very deep pipelining, and 4) architectural and circuit innovations increasingly defer the impact of worsening interconnect RCs (relative to devices) rather than contribute directly to frequency improvements.
Recent editions of the ITRS flattened the MPU clock period at 12 FO4 INV delays at 90 nm (a plot of historical MPU clock period data is provided online at public.itrs.net), so that clock frequencies advanced only with device performance in the absence of novel circuit and architectural approaches. In 2007, we recognize the additional limitation from maximum allowable power dissipation. Modern MPU platforms have stabilized maximum power dissipation at approximately 120W due to package cost, reliability, and cooling cost issues. With a flat power requirement, the updated MPU clock frequency model starts with 4.7 GHz in 2007 and is projected to increase by a factor of at most 1.25× per technology generation, despite aggressive development and deployment of low-power design techniques. Finally, the Overall Roadmap Technology Characteristics tables 6a and 6b (found in a subsection titled Power Supply and Power Dissipation in the Executive Summary) contain projected values of the supply voltage ($V_{dd}$) which also became dramatically more pessimistic in the 2007 edition. I have indicated my puzzlement at these points in an email I sent out to a number of industry insiders, asking: Do the 3 revisions made to the roadmap in 2007 that I've pointed out reflect a failure of previous editions to predict the "breakdown in the serial speed version of Moore's Law" and the relevant issues that would cause it? Or do they merely reflect the ambitiousness and aggressiveness of the targets that were set before admitting defeat became inevitable? I have received some very kind replies to those emails, but most have focused on the technical reasons for the "breakdown" in Dennard scaling. The only comment I have received on this last question was from Robert Dennard, who sent me a particularly thoughtful email that came with 4 attachments (which mainly provided more technical detail on transistor design, however). At the end of his email, he wrote: I cannot comment on wishful thinking vs hard facts. Predicting the future is difficult. Betting against Moore's Law was often a losing game. Texas Instruments quit way too early. Indeed, which bets it is most rational to make depends on expected payoff ratios as well as on probability estimates. This distinction between targets and mere predictions complicates the question quite a bit. This was an interesting project, it would be great to pick it up again. comment by lukeprog · 2014-05-21T20:49:54.847Z · score: 2 (2 votes) · LW(p) · GW(p)
comment by RichardKennaway · 2013-07-16T12:35:02.780Z · score: 1 (1 votes) · LW(p) · GW(p) When IBM bottled out of going to 4 GHz CPUs, in 2004, they had a roadmap extending to at least 5. Maybe beyond, I don't remember, but certainly there was general loose talk in the trade rags about 10 or 20 GHz in the foreseeable. comment by teageegeepea · 2013-07-16T22:03:33.695Z · score: 1 (1 votes) · LW(p) · GW(p) 2: An outside view works best when using a reference class with a similar causal structure to the thing you're trying to predict. An inside view works best when a phenomenon's causal structure is well-understood, and when (to your knowledge) there are very few phenomena with a similar causal structure that you can use to predict things about the phenomenon you're investigating. See: The Outside View's Domain. When writing a textbook that's much like other textbooks, you're probably best off predicting the cost and duration of the project by looking at similar textbook-writing projects. When you're predicting the trajectory of the serial speed formulation of Moore's Law, or predicting which spaceship designs will successfully land humans on the moon for the first time, you're probably best off using an (intensely informed) inside view. Is there data/experiments on when each gives better predictions, as with Kahneman's original outside view work? comment by CoffeeStain · 2013-07-15T10:29:46.521Z · score: 1 (1 votes) · LW(p) · GW(p) Doesn't the act of combining many outside views and their reference classes turn you into somebody operating on the inside view? This is to say, what is the difference between this and the type of "inside" reasoning about a phenomenon's causal structure? Is it that inside thinking involves the construction of new models whereas outside thinking involves comparison and combination of existing models? From a machine intelligence perspective, the distinction is meaningless. The construction of new models is the extension of old models, albeit models of arbitrary simplicity. Deductive reasoning is just the generation of some new strings for induction to operate on to generate probabilities. Induction has the final word; that's where the Bayesian network is queried for the result. Logic is the intentional generation of reference classes, a strategy for generating experiments that are likely to quickly converge that probability to 0 or 1. Inside thinking also, analogously in humans, is the generation of new reference classes; after casting the spell called Reason, the phenomenon now belongs to a class of referents that upon doing so produce a particularly distinguishing set of strings in my brain. The existence of these strings, for the outside thinker, is strong evidence about the nature of the phenomenon. And once the strings exist, the outside thinker is required to combine the model that includes them with her existing model.
And unbeknownst to the outside thinker, the strategy of seeking new reference classes is inside thinking. comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-07-17T20:32:21.372Z · score: 0 (0 votes) · LW(p) · GW(p) Doesn't the act of combining many outside views and their reference classes turn you into somebody operating on the inside view? Yes. So does the act of selecting just one outside view as your Personal Favorite, since it gives the right answer. comment by chaosmage · 2013-07-17T21:18:02.372Z · score: 1 (1 votes) · LW(p) · GW(p) I think that's only true if you allow yourself free choice of outside views. If you had a fixed framework of which outside views to take into account, i.e. how to select the ten experts you're going to ask for their opinion, and you decide that before you formulate the problem, the composite model you get shouldn't suffer from the arbitrariness of the inside view. Right? comment by Discredited · 2013-09-02T23:05:25.982Z · score: 0 (0 votes) · LW(p) · GW(p) It looks like the different views are the predictions of models learned on different time scales. The outside view has a small learning rate, so it reflects the long time-scale features of your observation history (or your species' history, if you're using evolved intuitions) and habitual control policies. The inside view is forgetful, so it updates its beliefs a lot in response to recent evidence, and is more likely to diverge or oscillate than converge to true beliefs or low cost trajectories, relative to the outside view. This perspective doesn't explain why the inside view would be systematically optimistic. comment by tom_cr · 2013-08-28T04:08:55.572Z · score: 0 (0 votes) · LW(p) · GW(p) I haven't had much explicit interaction with these inside/outside view concepts, and maybe I'm misunderstanding the terminology, but a couple of the examples of outside views given struck me as more like inside views: Yelp reviews and the advice of a friend are calibrated instruments being used to measure the performance of a restaurant, i.e. to build a model of its internal workings. But then almost immediately, I thought, "hey, even the inside view is an outside view." Every model is an analogy, e.g. an analogy in the sense of this thing A is a bit like thing B, so probably it will behave analogously, or e.g. 5 seconds ago the thing in my pocket was my wallet, so the thing in my pocket is probably still my wallet. It doesn't really matter if the symmetry we exploit in our modeling involves translation through space, translation from one bit of matter to another, or translation through time: strictly speaking, it's still an analogy. I have no strong idea what implications this might have for problem solving. Perhaps there is another way of refining the language that helps. What I'm inclined to identify as the salient feature of (what I understand to be) the inside view is that (subject to some super-model) there is a reasonable probability that the chosen model is correct, whereas for the outside view we are fairly certain that the chosen model is not correct, though it may still be useful. This strikes me as usefully linked to the excellent suggestions here regarding weighted application of multiple models. Perhaps the distinction between inside and outside views is a red herring, and we should concentrate instead on working out our confidence in each available model's ability to provide useful predictions, acknowledging that all models are necessarily founded on analogies, with differing degrees of relevance.
comment by Kurros · 2014-05-13T04:26:02.747Z · score: 0 (0 votes) · LW(p) · GW(p) Keynes in his "Treatise on Probability" talks a lot about analogies in the sense you use it here, particularly in "part 3: induction and analogy". You might find it interesting. comment by [deleted] · 2013-07-30T18:26:19.060Z · score: 0 (0 votes) · LW(p) · GW(p) I only know two outside views that result in projections of time to AI, which are (1) looking at AI as a specific technology like Kurzweil or Nagy, and (2) AI as a civilizational transition like Hanson. Of these, I've recently decided I prefer Hanson's approach because it seems to match the scale of the transition better, and it seems misguided to view machines that beat humans at all capabilities as one specific technology anyway as opposed to a confluence of hardware and various software things (or brain scanning if you're an ems fan). A problem with Kurzweil's approach is that modern computer growth trends only started--according to William Nordhaus anyway--around 1940. So who is to say this thing could not just shut off? Indeed, if you take it as one of the performance curves analyzed by Nagy, then maybe as an unusually long-lasting trend it has a higher chance of stopping than if you naively said, "Moore's Law has lasted 70 years, so there's a 50/50 chance of it lasting 70 more years." Ideally a pretty good way to go, making use of your suggestion of model combination, might be to let Hanson's reasoning about GDP doubling times form 3/4 of your probability distribution, and then somehow splice in 1/4 based on specific trends happening now--although not just hardware. But that last requirement means I in practice don't know how to do this. If anybody thinks they have done better I'd really like to hear about it. comment by Technoguyrob · 2013-07-21T22:40:16.742Z · score: 0 (0 votes) · LW(p) · GW(p) In mathematical terms, the map from problem space to reference classes is a projection and has no canonical choice (you apply the projection by choosing to lose information), whereas the map from causal structures to problem space is an imbedding and has such a choice (and the choice gains information). comment by RobinHanson · 2013-07-18T20:35:21.183Z · score: 0 (0 votes) · LW(p) · GW(p) If the difference is between inference from surface features vs. internal structure, then yes of course in either case unless you have a very strong theory, you will probably do better to combine many weak theories. When looking at surface features, look at many different features, not just one. comment by elharo · 2013-07-16T11:02:54.564Z · score: 0 (0 votes) · LW(p) · GW(p) I suspect the key distinction may be the difference between extrapolation and interpolation. The outside view is far and away more reliable for interpolation: i.e. we have many events on both sides of the one we're attempting to model. Extrapolation, where we have data only on one side of the event we're trying to model, runs a much greater risk of failing in the outside view. The risk of failing in the inside view is no less in this case than with interpolation, but in this case the outside view is not as strong a predictor. comment by Dr_Manhattan · 2013-07-16T00:19:22.755Z · score: 0 (0 votes) · LW(p) · GW(p) When writing a textbook that's much like other textbooks, you're probably best off predicting the cost and duration of the project by looking at similar textbook-writing projects. http://www.mckinsey.com/insights/strategy/daniel_kahneman_beware_the_inside_view did I gwern in?
comment by lukeprog · 2013-07-18T03:32:46.981Z · score: 2 (2 votes) · LW(p) · GW(p) did I gwern in? What does this mean? comment by Vaniver · 2013-07-18T05:24:01.995Z · score: 1 (1 votes) · LW(p) · GW(p) I interpreted it as asking if he found the original anecdote that inspired that sentence. comment by lukeprog · 2013-07-18T05:42:00.964Z · score: 0 (0 votes) · LW(p) · GW(p) If so, then: "yes." It's been discussed on Less Wrong, too. comment by Dr_Manhattan · 2013-07-18T15:09:21.449Z · score: 0 (0 votes) · LW(p) · GW(p) It was (also) a hint to include a reference to original sources.
2020-04-09 01:11:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5235961675643921, "perplexity": 1859.3414376218032}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371826355.84/warc/CC-MAIN-20200408233313-20200409023813-00095.warc.gz"}
http://math.stackexchange.com/questions/4732/solution-of-a-hamilton-jacobi-bellman-hjb-equation
# Solution of a Hamilton-Jacobi-Bellman (HJB) equation I am trying to solve an ODE that arises from a Hamilton-Jacobi-Bellman (HJB) equation. The equation is $$\frac{1}{2}b^2(1-\rho_s^2)\psi''-\frac{1}{2}\left(\frac{\mu-r}{\sigma}\right)^2\frac{(\psi')^2}{\psi''}+[ru+\theta a+b\rho_s(\mu-r)(1-\frac{2}{\sigma})]\psi'=0,$$ where $\mu, r, \sigma, \theta, a, \rho_s, b$ are constant. I want to determine $\psi'(u)$ (so that I get an integral form for $\psi(u)$). I have tried guessing forms of the solution (trial and error) but didn't get far. I also tried the Legendre transform, but could not get the linear form. These are the methods that I have seen being used with these problems. - Apart from $u$ and $\psi$, the other variables are constant? – Aryabhata Sep 15 '10 at 21:02 Assume $\psi' = f$, then the equation is of the form: $$Af' + B\frac{f^2}{f'} + (cu+d)f = 0$$ By putting $$g = \frac{f'}{f} = (\log f)'$$ we see that $$Ag + \frac{B}{g} + (cu+d) = 0$$ This is a quadratic in $g$ and can be easily solved. We get $$\psi' = f = e^{\int g}$$ Hope that helps. - thanx, you make it seem so simple. – Vaolter Sep 16 '10 at 11:07
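For completeness, a sketch of the algebra left implicit in the answer (sign and branch choices are glossed over, and $|\rho_s| < 1$ is assumed so that $A \neq 0$): multiplying through by $g$ gives the quadratic

$$A g^{2} + (cu+d)\,g + B = 0, \qquad g(u) = \frac{-(cu+d) \pm \sqrt{(cu+d)^{2} - 4AB}}{2A},$$

where, matching the original equation, $A = \tfrac{1}{2}b^{2}(1-\rho_s^{2})$, $B = -\tfrac{1}{2}\left(\tfrac{\mu-r}{\sigma}\right)^{2}$, $c = r$ and $d = \theta a + b\rho_s(\mu-r)\left(1-\tfrac{2}{\sigma}\right)$. Since $A > 0$ and $B \le 0$, the discriminant is nonnegative and both roots are real; the antiderivative $\int g\,du$ is elementary (substitute $v = cu + d$), and then $\psi'(u) = e^{\int g\,du}$ up to a multiplicative constant.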
2016-05-06 11:36:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9671131372451782, "perplexity": 131.78088362332448}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461861754936.62/warc/CC-MAIN-20160428164234-00024-ip-10-239-7-51.ec2.internal.warc.gz"}
http://mathoverflow.net/questions/146571/logarithmic-mean/146575
# Logarithmic mean The logarithmic mean of two positive real numbers is well defined in the literature; it has also been extended to more than two arguments in various papers. Is there any notion of logarithmic mean of random variables or functions? Thank you for your help and time. - By the way, I know that the logarithmic mean of convex functionals is there in the literature, but I want to know whether we can define it for random variables or just positive functions? –  andy Oct 31 '13 at 23:07 You're looking for $e^{E \log X}$. It has all the nice properties you'd like it to. - Could you expand a bit on your answer? –  Suvrit Nov 1 '13 at 1:27 Yes, Omer, I would greatly appreciate if you could kindly elaborate a little more, anyway thank you so much. –  andy Nov 1 '13 at 3:00 For two positive real numbers, this gives the geometric mean, not the logarithmic mean. Have you looked at en.wikipedia.org/wiki/Logarithmic_mean ? –  S. Carnahan Nov 1 '13 at 9:42 Indeed, say $g(t) := e^{E[(1-t)\log X]+E[t\log Y]}$, then we could define a logarithmic mean as $$L(X,Y) := \int_0^1 g(t)\,dt.$$ Reasoning: The above idea is inspired by noting that the ordinary logarithmic mean between two positive scalars, $x$ and $y$, may be viewed as $L(x,y) = \int_0^1 x^{1-t}y^t\,dt$, where the integrand is nothing but the (weighted) geometric mean.
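A quick sanity check, added here for completeness: for positive scalars $0 < x \ne y$, the integral representation quoted above indeed reduces to the classical logarithmic mean, since

$$\int_0^1 x^{1-t} y^{t}\,dt = x \int_0^1 e^{t \ln(y/x)}\,dt = x\,\frac{(y/x) - 1}{\ln(y/x)} = \frac{y - x}{\ln y - \ln x}.$$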
2014-07-09 23:32:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9347739219665527, "perplexity": 385.90992178796876}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1404776400583.60/warc/CC-MAIN-20140707234000-00068-ip-10-180-212-248.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/720171/trying-to-prove-shortest-distance-between-two-points
# Trying to prove shortest distance between two points I'm trying to prove that the shortest path between two points in the Euclidean plane is a straight line: Here is what I've achieved so far, but I got lost right at the end; could anyone help? Let $\varphi: [a,b] \rightarrow \mathbb{R}$ such that $\varphi(a)=\varphi(b)=0$ and consider $L_t = \text{length}(f + t\varphi )$. If $f$ minimises the length then $\dfrac{d}{dt}L_t |_{t=0} = 0$. Let us begin by defining the arc length of the graph of a function over $[a,b]$: $$\text{Arc length} = \int_{a}^b \sqrt{1+ (f'(x))^2}\, dx$$ Then $L_t$, the arc length of the graph of $f + t\varphi$, is: $$L_t= \int_a^b \sqrt{1+(f'+t\varphi')^2}\,dx$$ Then we evaluate: \begin{align} \dfrac{d}{dt}L_t\bigg|_{t=0} &= \int_a^b \dfrac{2(f'+t\varphi')\varphi'}{2\sqrt{1+(f'+t\varphi')^2}}\,dx \bigg|_{t=0} \\ &= \int_a^b \dfrac{f'\varphi'}{\sqrt{1+(f')^2}}\,dx \end{align} Using integration by parts (the boundary term vanishes because $\varphi(a)=\varphi(b)=0$): $$= -\int_a^b \varphi \, \dfrac{d}{dx}\bigg(\dfrac{f'}{\sqrt{1+(f')^2}}\bigg)\,dx=0 \quad \text{for any such} \, \varphi$$ Since this holds for every admissible $\varphi$, the fundamental lemma of the calculus of variations gives: \begin{align} \dfrac{d}{dx}\bigg(\dfrac{f'}{\sqrt{1+(f')^2}}\bigg) &=0 \\ \Rightarrow \dfrac{f'}{\sqrt{1+(f')^2}} &= \text{constant} \end{align} Squaring both sides and taking the reciprocal, $$\dfrac{1+(f')^2}{(f')^2} = \dfrac{1}{(f')^2} + 1 = \text{constant}$$ so $$\dfrac{1}{(f')^2} = \text{constant}$$ $$(f')^2 = \text{constant}$$ $$f' = \text{constant}$$ • So then would it just be that if f' = constant, then f=constant*x + constant(other) ?? – Sarah Jayne Mar 20 '14 at 19:10 • If the slope of $f$ is constant, then the graph of $f$ is a straight line. – TonyK Mar 20 '14 at 19:18 • Great, thank you so much! – Sarah Jayne Mar 20 '14 at 19:20
2019-05-25 09:27:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 3, "x-ck12": 0, "texerror": 0, "math_score": 1.0000098943710327, "perplexity": 560.0741786549735}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257939.82/warc/CC-MAIN-20190525084658-20190525110658-00235.warc.gz"}
https://www.zora.uzh.ch/id/eprint/121179/
# Probing the Charm Quark Yukawa Coupling in Higgs + Charm Production Brivio, Ilaria; Goertz, Florian; Isidori, Gino (2015). Probing the Charm Quark Yukawa Coupling in Higgs + Charm Production. Physical Review Letters, 115(21):211801. ## Abstract We propose a new method for determining the coupling of the Higgs boson to charm quarks, via Higgs production in association with a charm-tagged jet: $pp \to hc$. As a first estimate, we find that at the LHC with $3000~\mathrm{fb}^{-1}$, it should be possible to derive a constraint of order one, relative to the standard model (SM) value of the charm Yukawa coupling. As a by-product of this analysis, we present an estimate of the exclusive $pp \to hD^{(*)}$ electroweak cross section. Within the SM, the latter turns out to be not accessible at the LHC even in the high-luminosity phase. Journal Article, refereed, original work. American Physical Society, ISSN 0031-9007. https://doi.org/10.1103/PhysRevLett.115.211801 http://arxiv.org/abs/1507.02916v1
2018-10-15 09:35:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1744667887687683, "perplexity": 2022.0619016410844}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583508988.18/warc/CC-MAIN-20181015080248-20181015101748-00077.warc.gz"}
http://www.koreascience.or.kr/article/ArticleFullRecord.jsp?cn=DBSHCJ_2014_v29n2_319
CONFORMALLY RECURRENT SPACE-TIMES ADMITTING A PROPER CONFORMAL VECTOR FIELD De, Uday Chand; Mantica, Carlo Alberto Abstract In this paper we study the properties of conformally recurrent pseudo-Riemannian manifolds admitting a proper conformal vector field with respect to the scalar field $\sigma$, focusing particularly on the 4-dimensional Lorentzian case. Some general properties already proven by one of the present authors for pseudo conformally symmetric manifolds endowed with a conformal vector field are proven also in this case, and some new ones are stated. Moreover, interesting results are pointed out; for example, it is proven that the Ricci tensor under certain conditions is Weyl compatible: this notion was recently introduced and investigated by one of the present authors. Further we study conformally recurrent 4-dimensional Lorentzian manifolds (space-times) admitting a conformal vector field: it is proven that the covector $\sigma_j$ is null and unique up to scaling; moreover it is shown that the same vector is an eigenvector of the Ricci tensor. Finally, it is stated that such a space-time is of Petrov type N with respect to $\sigma_j$. Keywords conformally recurrent space-times; proper conformal vector fields; pseudo-Riemannian manifolds; Weyl compatible tensors; Petrov types; Lorentzian metrics
2017-06-23 16:02:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 3, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8034466505050659, "perplexity": 2512.2733243772127}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320070.48/warc/CC-MAIN-20170623151757-20170623171757-00148.warc.gz"}
https://proofwiki.org/wiki/Category:Lowest_Common_Multiple
# Category:Lowest Common Multiple This category contains results about Lowest Common Multiple. For all $a, b \in \Z: a b \ne 0$, there exists a smallest $m \in \Z: m > 0$ such that $a \divides m$ and $b \divides m$. This $m$ is called the lowest common multiple of $a$ and $b$, and denoted $\lcm \set {a, b}$.
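A worked example (standard facts, added for illustration rather than taken from the page itself): the positive common multiples of $4$ and $6$ are $12, 24, 36, \ldots$, so $\lcm \set {4, 6} = 12$; more generally, for positive $a$ and $b$, $\lcm \set {a, b} = \dfrac{a b}{\gcd \set {a, b}}$.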
2021-04-13 17:17:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7043532133102417, "perplexity": 830.9069231710879}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038073437.35/warc/CC-MAIN-20210413152520-20210413182520-00457.warc.gz"}
https://nanoscale.blogspot.com/2021/10/
## Sunday, October 24, 2021 ### The physics of ornithopters One thing that the new Dune film captures extremely well is the idea that the primary small-capacity air transportation mode on Arrakis is travel by ornithopter.  The choice of flapping wings as a lift/propulsion mechanism can be in-fictional-universe justified by the idea that jet turbines probably won't do well in an atmosphere with lots of suspended dust and sand, especially on take-off and landing.  Still, I think Frank Herbert decided on ornithopters because it just sounded cool. The actual physics and engineering of flight via flapping wings is complicated.  This site is a good place to do some reading.  The basic idea is not hard to explain.  To get net lift, in the cyclical flapping motion of a wing, somehow the drag force pushing downward on the wing during the upstroke has to be more than balanced by the flux of momentum pushed downward on the wing's downstroke.  To do this, the wing's geometry can't be unchanging during the flapping.  The asymmetry between up and down strokes is achieved through the tilting (at the wing base and along the wing) and flexing of the wing during the flapping motion. The ornithopters in the new movie have wings on the order of 10 m long, and wing motions that look like those of a dragonfly, and the wings are able to flap up and down at an apparent frequency of a couple of hundred hertz (!).  If you try to run some numbers on the torque, power, and material strength/weight that would be required to do this, you can see pretty quickly why this has not worked too well yet as a strategy on earth.   (As batteries, motor technology, and light materials continue to improve, perhaps ornithopters will become more than a fun hobby.) This issue - that cool gadgets in sci-fi or superhero movies would need apparently unachievable power densities at low masses - is common (see, e.g., Tony Stark's 3 GW arc reactor that fits in your hand, weighs a few pounds, and somehow doesn't have to radiate GW of waste heat), and that's ok; the stories are not meant to be too realistic. Still, the ornithopter fulfills its most important purpose in the movie:  It looks awesome. ## Sunday, October 17, 2021 ### Brief items - Sarachik, Feynman, NSF postdocs and more Here are several items of interest: • I was saddened to learn of the passing of Myriam Sarachik, a great experimental physicist and a generally impressive person.  I was thinking about writing a longer piece about her, but this New York Times profile from last year is better than anything I could do.  This obituary retells the story to some degree. (I know that it's pay-walled, but I can't find a link to a free version.)  In the early 1960s, after fighting appalling sexism to get a doctorate and a position at Bell Labs, she did foundational experimental work looking at the effect of dilute magnetic impurities in the conduction of nonmagnetic metals.  For each impurity, the magnetic atom has an unpaired electron in a localized orbital.  A conduction electron of opposite spin could form a singlet to fill that orbital, but the on-site Coulomb repulsion of the electron already there makes that energetically forbidden except as a virtual intermediate state for a scattering process.  The result is that scattering by magnetic impurities gets enhanced as $T$ falls, leading to an upturn in the resistivity $\rho(T)$ that is logarithmic in $T$ at low temperatures.
Eventually the localized electron is entangled with the conduction electrons to form a singlet, and the resistivity saturates. This is known as the Kondo effect, after the theoretical explanation of the problem, but Sarachik's name could credibly have been attached. Her family met with a personal tragedy from which it took years to recover. Later in her career, she did great work looking at localization and the metal-insulator transition in doped semiconductors. She also worked on the quantum tunneling of magnetization in so-called single-molecule magnets, and was a key player in the study of the 2D metal-insulator transition in silicon MOSFETs. I was fortunate enough to meet her when she came through Rice in about 2003, and she was very generous with her time meeting with me when I was a young assistant professor. Sarachik also had a great service career, serving as APS President around that time. Heck of a career!

• The audio recordings of the famous Feynman Lectures on Physics are now available for free to stream from Caltech. You can also get to these from the individual lectures by a link on the side of each page.

• There is a new NSF postdoctoral fellowship program for math and physical sciences. I would be happy to talk to anyone who might be interested in pursuing one of these who might want to work with me. Please reach out via email.

• I've written before about the "tunneling time" problem - how long does quantum mechanical tunneling of a particle through a barrier take? Here is an experimental verification of one of the most counterintuitive results in this field: the farther "below" the barrier the particle is (in the sense of having a smaller fraction of the kinetic energy needed classically to overcome the potential barrier), the faster the tunneling. A key experimental technique here is the use of a "Larmor clock", with the precession of the spin of a tunneling atom acting as the time-keeping mechanism.

• Did you know that it is possible, in Microsoft Word, to turn on some simple LaTeX-style symbolic coding? The key is to enable "Math Autocorrect", and then typing \alpha will automatically be turned into $\alpha$. (I know some act like doing scientific writing in Word is heretical, but not everyone in every discipline is facile with LaTeX/Overleaf.)

## Sunday, October 10, 2021

### The Purcell effect - still mind-blowing.

The Purcell effect is named after E. M. Purcell, a Nobel-winning physicist who also was a tremendous communicator, author of one of the great undergraduate textbooks and a famous lecture about the physical world from the point of view of, e.g., a bacterium. I've written about this before here, and in a comment I include the complete (otherwise paywalled) text of the remarkable original "paper" that describes the effect. When we calculate things like the Planck black-body spectrum, we use the "density of states" for photons - for a volume $V$, we are able to count up how many electromagnetic modes are available with frequency between $\nu$ and $\nu + \mathrm{d}\nu$, keeping in mind that for each frequency, the electric field can be polarized in two orthogonal directions. The result is $(8\pi/c^3)\nu^2 \mathrm{d}\nu$ states per unit volume of "free space". In a cavity, though, the situation is different - instead, there is, roughly speaking, one electromagnetic mode per the bandwidth of the cavity per the volume of the cavity. In other words, the effective density of states for photons in the cavity is different than that in free space.
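To make this concrete, here is a little numerical sketch (mine, not from any paper - the cavity numbers are invented for illustration) comparing the free-space photon density of states quoted above with the textbook Purcell enhancement factor $F_P = (3/4\pi^2)(\lambda/n)^3(Q/V)$:

```python
# Free-space photon mode density vs. the textbook Purcell factor.
# The cavity quality factor Q and mode volume V below are made-up,
# illustrative numbers, not from any particular experiment.
import math

c = 3.0e8  # speed of light, m/s

def free_space_mode_density(nu):
    """Photon states per unit volume per unit frequency: (8*pi/c^3) * nu^2."""
    return 8.0 * math.pi * nu**2 / c**3

def purcell_factor(wavelength, n, Q, V_mode):
    """Standard Purcell enhancement F_P = (3/(4 pi^2)) (lambda/n)^3 (Q/V)."""
    return (3.0 / (4.0 * math.pi**2)) * (wavelength / n)**3 * (Q / V_mode)

lam = 1.0e-6                # 1 micron light
nu = c / lam
print(f"free-space mode density: {free_space_mode_density(nu):.3e} states/(m^3 Hz)")

# hypothetical cavity: Q = 1e4, mode volume = one cubic wavelength
print(f"Purcell factor: {purcell_factor(lam, n=1.0, Q=1.0e4, V_mode=lam**3):.0f}")
```

For those made-up numbers the emission rate would be enhanced by a factor of several hundred, which is why cavity QED experiments care so much about small mode volumes and high $Q$.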
This altered density of states has enormous ramifications: The rates of radiative processes, even those that we like to consider as fundamental, like the rate at which electronically excited atoms radiatively decay to lower states, can be altered in a cavity. This is the basis for a lot of quantum optics work, as in cavity quantum electrodynamics. Similarly, the presence of an altered (from free space) photon density of states also modifies the spectrum of thermal radiation from that cavity away from the Planck black-body spectrum.

Consider an excited atom in the middle of such a cavity. When it is going to emit a photon, how does it "know" that it's in a cavity rather than in free space, especially if the cavity is much larger than an atom? The answer is, somehow through the electromagnetic couplings to the atoms that make up the cavity. This is remarkable, at least to me. (It's rather analogous to how we picture the Casimir effect, where you can think about the same physics either, e.g., as due to altering local vacuum fluctuations of the EM field in the space between conducting plates, or as due to fluctuating dipolar forces because of fluctuating polarizations on the plates.) Any description of a cavity (or plasmonic structure) altering the local photon density of states is therefore really short-hand. In that approximation, any radiative process in question tacitly assumes that an emitter or absorber in there is being influenced by the surrounding material. We are just fortunate that we can lump such complicated, relativistically retarded interactions into an effective photon density of states that differs from that in free space.

## Tuesday, October 05, 2021

### Spin glasses and the Nobel

The Nobel Prize in physics this year was a bit of a surprise, at least to me. As one friend described it, it's a bit of a Frankenprize, stitched together out of rather disparate components. (Apologies for the slow post - work was very busy today.) As always, it's interesting to read the more in-depth scientific background of the prize. I was unfamiliar with the climate modeling of Manabe and Hasselmann, and this was a nice intro. The other prize recipient was Giorgio Parisi, a statistical mechanician whose key cited contribution was in the theory of spin glasses, but was generalizable to many disordered systems with slow, many-timescale dynamics, including things like polymers and neural networks. The key actors in a spin glass are excess spins - local magnetic moments that you can picture as little magnetic dipoles. In a generic spin glass, there is both disorder (as shown in the upper panel of the cartoon, spins - in this case iron atoms doped into copper - are at random locations, and that leads to a broad distribution of spin-spin interactions in magnitude and sign) and frustration (interactions such that flipping spin A to lower its interaction energy with spin B ends up raising the interaction energy with spin C, so that there is no simple configuration of spins that gives a global minimum of the interaction energy). One consequence of this is a very complicated energy landscape, as shown in the lower panel of the cartoon. There can be a very large number of configurations that all have about the same total energy, and flipping between these configurations can require a lot of energy, such that it is suppressed at low temperatures.
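(If you want to see frustration in its most stripped-down form, here is a toy calculation of my own, not from the prize materials: three Ising spins on a triangle, all coupled antiferromagnetically. No configuration satisfies all three bonds, and the ground state is six-fold degenerate - a miniature version of the rugged, many-minima landscape described above.)

```python
# Toy illustration of frustration: three Ising spins on a triangle with
# antiferromagnetic couplings J = +1 on every bond.  Energy is
# E = J * (s1*s2 + s2*s3 + s1*s3); a fully satisfied triangle would have
# E = -3, but frustration means at least one bond is always unsatisfied.
from itertools import product

J = 1.0
energies = [(J * (s1*s2 + s2*s3 + s1*s3), (s1, s2, s3))
            for s1, s2, s3 in product([-1, 1], repeat=3)]

E_min = min(E for E, _ in energies)
ground = [s for E, s in energies if E == E_min]
print(f"minimum energy: {E_min} (not -3)")          # -1
print(f"degenerate ground states: {len(ground)}")   # 6
```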
These magnetic systems then end up having slow, "glassy" dynamics with long, non-exponential relaxations, in the same way that structural glasses (e.g., SiO2 glass) can get hung up in geometric configurations that are not the global energetic minimum (crystalline quartz, in the SiO2 case). The standard tools of statistical physics are difficult to apply to the glassy situation. A key assumption of equilibrium thermodynamics is that, for a given total energy, a system is equally likely to be found in any microscopic configuration that has that total energy. Being able to cycle through all those configurations is called ergodicity. In a spin glass at low temperatures, the potential landscape means that the system can easily get hung up in a local energy minimum, becoming non-ergodic.

An approach that Parisi took to this problem involved "replicas", where one considers the whole system as an ensemble of replica systems, and a key measure of what's going on is the similarity of configurations between the replicas. Parisi himself summarizes this in this pretty readable (for physicists) article. One of Parisi's big contributions was showing that the Ising spin glass model of Sherrington and Kirkpatrick is exactly solvable.

I learned about spin glasses as a doctoral student, since the interacting two-level systems in structural glasses at milliKelvin temperatures act a lot like a spin glass (TLS coupled to each other via a dipolar elastic interaction, and sometimes an electric dipolar interaction), complete with slow relaxations, differences between field-cooled and zero-field-cooled properties, etc.

Parisi has made contributions across many diverse areas of physics. Connecting his work to that of the climate modelers is a bit of a stretch thematically - sure, they all worry about dynamics of complex systems, but that's a really broad umbrella. Still, it's nice to see recognition for the incredibly challenging problem of strongly disordered systems.

## Sunday, October 03, 2021

### Annual Nobel speculation thread

Once more, the annual tradition: Who do people think will win the Nobel this year in physics or chemistry? I have repeatedly and incorrectly suggested Aharonov and Berry. There is a lot of speculation on social media about Aspect, Zeilinger, and Clauser for Bell's inequality tests. Social media speculation has included quantum cascade lasers as well as photonic bandgap/metamaterials. Other suggestions I've seen online have included superconducting qubits (with various combinations of people) and twisted bilayer graphene, though both of those may be a bit early.
2022-12-04 09:26:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.508159339427948, "perplexity": 1157.8810683658846}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710968.29/warc/CC-MAIN-20221204072040-20221204102040-00078.warc.gz"}
https://www.math.tamu.edu/Calendar/listday/index.php?print=5591&cal=38
## Noncommutative Geometry Seminar

Date: November 22, 2019
Time: 3:00PM - 4:00PM
Location: BLOC 624
Speaker: Ilya Kachkovskiy, Michigan State University
Title: Almost commuting matrices

Abstract: Suppose that $X$ and $Y$ are two self-adjoint matrices with the commutator $[X,Y]$ of small operator norm. One would expect that $X$ and $Y$ are close to a pair of commuting matrices. Can one provide a distance estimate which only depends on $\|[X,Y]\|$ and not on the dimension? This question was asked by Paul Halmos in 1976 and answered positively by Huaxin Lin in 1993 by indirect C*-algebraic methods, which did not provide any explicit bounds. It was conjectured by Davidson and Szarek that the distance estimate would be of the form $C\|[X,Y]\|^{1/2}$. In the talk, I will explain some background on this and related problems, and the main ideas of the proof of this conjecture, obtained jointly with Yuri Safarov. If time permits, I will discuss some current work in progress.
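A quick numerical illustration of the setup (a sketch assuming one particular perturbative construction; the distance computed back to the original diagonal pair is only an upper bound on the distance to the nearest commuting pair):

```python
# Perturb a commuting self-adjoint pair (D1, D2) and compare the commutator
# norm of the perturbed pair with the conjectured C * ||[X,Y]||^{1/2} scale.
import numpy as np

rng = np.random.default_rng(0)
n, eps = 50, 1e-4

D1 = np.diag(rng.standard_normal(n))          # commuting diagonal pair
D2 = np.diag(rng.standard_normal(n))
P = rng.standard_normal((n, n))
P = (P + P.T) / 2                             # self-adjoint perturbation

X, Y = D1, D2 + eps * P
comm_norm = np.linalg.norm(X @ Y - Y @ X, 2)  # operator (spectral) norm

dist_upper = eps * np.linalg.norm(P, 2)       # distance back to (D1, D2)
print(f"||[X,Y]||        = {comm_norm:.3e}")
print(f"distance (upper) = {dist_upper:.3e}")
print(f"||[X,Y]||^(1/2)  = {comm_norm**0.5:.3e}")
```

Here the commutator norm is of order eps while its square root is of order eps^(1/2), so for such trivial perturbations the conjectured bound is far from tight; the hard cases in the theorem involve genuinely non-commuting structure.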
2020-01-28 15:03:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5110520124435425, "perplexity": 468.76638496096007}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251778272.69/warc/CC-MAIN-20200128122813-20200128152813-00144.warc.gz"}
https://domymatlab.com/matlab-programming-for-numerical-analysis-pdf-2/
# Matlab Programming For Numerical Analysis Pdf | Pay Someone To Do My Matlab Homework

## Matlab Assignment Help Near Me

An approach for convergence as a direct approach:

1. Theorem 1: an algorithm for speedup of computing the convergence rate of the iterative method and the stability graph of the discretization algorithm.
2. Theorem 2: relational equivalences between the run-time iterative method and stability-graph inference for numerical analysis (MPL program for numerical analysis, DLXIC).
3. Theorem 3: by linearising the variables in advance.

## Matlab Coding Homework Help

Proof of Theorem 7; for Theorem 2: Corollary 1, Corollary 3, Theorem 4, and Theorem 5. There are pairs of eigenvalues and eigenvectors of the block matrix defined as follows. [block matrix not recoverable from the source]
2022-10-05 19:08:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4786008596420288, "perplexity": 74.96117254780897}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337663.75/warc/CC-MAIN-20221005172112-20221005202112-00292.warc.gz"}
http://cvgmt.sns.it/paper/1065/
# The Monge problem for strictly convex norms in $R^d$

created by depascal on 07 May 2008, modified on 22 Sep 2010 [BibTeX]

Published Paper. Inserted: 7 may 2008. Last Updated: 22 sep 2010. Journal: Journ. of the Eur. Math. Soc. Volume: 12. Number: 6. Pages: 1355-1369. Year: 2010.

Notes: The published version is available at: http://www.ems-ph.org/journals/showissue.php?issn=1435-9855&vol=12&iss=6

Abstract: We prove the existence of an optimal transport map for the Monge problem in a convex bounded subset of $\mathbb{R}^d$ under the assumptions that the first marginal is absolutely continuous with respect to the Lebesgue measure and that the cost is given by a strictly convex norm. We propose a new approach which does not use disintegration of measures.
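As a toy illustration of the underlying problem (not part of the paper): in the discrete setting with uniform marginals on two finite point sets, the Monge problem reduces to an assignment problem, which can be solved exactly; the cost below is the Euclidean norm on $\mathbb{R}^2$, a strictly convex norm.

```python
# Discrete Monge problem with uniform marginals = linear assignment problem.
# Cost c(x, y) = ||x - y||_2, a strictly convex norm on R^2.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
n = 8
source = rng.random((n, 2))     # support of the first marginal
target = rng.random((n, 2))     # support of the second marginal

cost = np.linalg.norm(source[:, None, :] - target[None, :, :], axis=-1)
rows, cols = linear_sum_assignment(cost)   # optimal transport map i -> cols[i]

print("optimal map:", dict(zip(rows.tolist(), cols.tolist())))
print(f"total cost: {cost[rows, cols].sum():.4f}")
```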
2019-02-17 11:26:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4600525200366974, "perplexity": 700.6355906407014}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247481992.39/warc/CC-MAIN-20190217111746-20190217133746-00420.warc.gz"}
https://www.thestudentroom.co.uk/showthread.php?t=3691755
# t=tanx/2 substitutions for integrals

#1 Hi, I'm really stuck on some homework; it involves an integral I must solve by t-substitution. I was on the wikipedia page for this: https://en.wikibooks.org/wiki/Calcul...ent_Half_Angle But I can't for the life of me work out how they substituted the dx in the worked step shown there (the step and the identity it comes from were displayed as images). I understand why sin x has been substituted, but why is dx = 2dt/(1+t^2)? (It's shown last on the identity list.) Is it just something we're expected to learn and know?

#2 Don't worry, I worked it out, for all concerned:

t = tan(x/2)
dt/dx = (1/2)sec^2(x/2)
dt = (1/2)(1+t^2)dx, since sec^2(x/2) = 1 + tan^2(x/2) = 1 + t^2
dx = 2dt/(1+t^2)
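For anyone who wants to double-check the algebra, here is a quick symbolic verification (a sketch using Python's sympy; not part of the original thread) of the two identities used above:

```python
# Symbolic check of the Weierstrass substitution t = tan(x/2):
#   dx = 2 dt / (1 + t^2)   and   sin x = 2t / (1 + t^2)
import sympy as sp

x = sp.symbols('x')
t = sp.tan(x / 2)

# dt/dx = (1/2) sec^2(x/2) = (1 + t^2) / 2, hence dx = 2 dt / (1 + t^2)
print(sp.simplify(sp.diff(t, x) - (1 + t**2) / 2))   # should print 0

# the sin x identity from the identity list
print(sp.simplify(sp.sin(x) - 2 * t / (1 + t**2)))   # should print 0
```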
2020-04-05 23:37:02
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8557284474372864, "perplexity": 4027.1398866906916}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371611051.77/warc/CC-MAIN-20200405213008-20200406003508-00334.warc.gz"}
https://www.nature.com/articles/s41598-021-97616-6
# Analysis of weighted gene co-expression network of triterpenoid-related transcriptome characteristics from different strains of Wolfiporia cocos

## Abstract

The fungus Wolfiporia cocos has wide-ranging and important medicinal value, and its dried sclerotia are used as a traditional Chinese medicine. Modern studies have shown that triterpenoid, the active ingredient of W. cocos, has a variety of pharmacological effects. The aim of our research was to determine the key genes related to triterpenoid biosynthesis, which may be useful for the genetic modification of cell-engineered bacteria for triterpenoid biosynthesis. In this study, two monospore strains, DZAC-WP-H-29 (high-yielding) and DZAC-WP-L-123 (low-yielding), were selected from the sexually propagated offspring of strain 5.78 of W. cocos, and the mycelia were cultured for 17, 34, and 51 days, respectively. The weighted gene co-expression network analysis (WGCNA) method was used to analyze transcriptional expression. The results show that eight core genes (ACAT1-b, hgsA, mvd1, SQLE, erg6, TAT, erg26, and erg11) are associated with the triterpenoid synthesis pathway, and Pm20d2 and norA outside the pathway may be important genes that influence the biosynthesis and accumulation of W. cocos triterpenoid. The biosynthesis of W. cocos triterpenoid is closely related to the expression of sterol metabolic pathway genes. The role of these genes in triterpenoid synthesis complements our knowledge on the biosynthesis and accumulation of W. cocos triterpenoid, and also provides a reference for the target gene modification of engineered bacteria for the fermentation production of triterpenoid.

## Introduction

The dried sclerotia of Wolfiporia cocos (Schwein.) Ryvarden & Gilb are used as a traditional Chinese medicine. W. cocos is mild in nature, sweet, and light in function. W. cocos can be used in the treatment of diseases of the heart, lung, spleen, and kidney meridians. It has the effects of diuresis, invigorating and tonifying the spleen, tranquilizing the heart, and soothing the spirit. It is used for urinary problems, phlegm, dizziness and palpitations, spleen deficiency, loose stools, restlessness, and insomnia 1. W. cocos is frequently used as a medicine and as a food in Chinese medicine; there is a saying, "nine out of ten prescriptions require W. cocos". Therefore, W. cocos has a wide-ranging and important role in medical practice. The main active components of W. cocos are polysaccharides and triterpenoids 2. Triterpenoid saponins are composed of a hydrophobic triterpenoid aglycone and one or more hydrophilic sugar moieties 3. Triterpene saponins are secondary metabolites of plants and participate in the regulation of plant communication, defense, and sensory functions 4. Modern studies have shown that W. cocos triterpenoids have immunoregulatory 5,6, antitumor 7,8, anti-inflammatory 9,10, diuretic 11,12, antioxidant 13,14, hepatoprotective 15, and anticonvulsant effects 16,17, among others. They are used as herbicides and insecticides in agriculture 18,19.
Triterpenoid saponins are amphiphilic compounds that can form stable soap-like foams in aqueous solutions and are used in the detergent and cosmetics industries 20. Therefore, triterpenoid saponins play important roles in medicine, agriculture, and the chemical industry. Triterpenoid saponins are mainly extracted from plants, which generally take a long time to cultivate and produce a low yield. Compared with plants, large-scale microorganism fermentation has the advantages of fast growth, land saving, and high cost-effectiveness. Microbial production of triterpenoid saponins is considered a promising alternative to traditional supply methods. Although triterpenoid saponins have been synthesized successfully in microbial hosts 21,22, there are still many problems in increasing yields. The biosynthesis pathway of most triterpenoid saponins is not clear. Some key enzymes in plants are difficult to express in microbial hosts. Metabolic flux through the heterologous pathway is generally low. Some triterpene saponins are toxic to microbial cells 23. W. cocos is a fungus that synthesizes triterpenoid by itself. The content of triterpenoid in its hyphae is much higher than that in its sclerotia. W. cocos is a natural cell factory to produce triterpenoid saponins, with natural resistance to the toxicity of triterpenoid on cells.

Weighted gene co-expression network analysis (WGCNA) is a method that analyzes the expression patterns of multiple sample genes, clusters the expression patterns of similar genes, and analyzes the correlation between a module and a specific trait or phenotype. Therefore, WGCNA is widely used in the study of diseases and other traits for genetic correlation analysis. The WGCNA algorithm 24 first assumes that the gene network obeys a scale-free distribution, defines the correlation matrix of gene expression and the adjacency function of gene network formation, calculates the dissimilarity coefficient of different nodes, and then constructs a hierarchical clustering tree on the basis of the calculation results. Different branches of the eigengene dendrogram represent different gene modules; the degree of gene co-expression within the same module is high, but the degree of gene co-expression between different modules is low. Finally, the correlation between each module and a specific phenotype or disease is explored to identify the target genes for disease treatment and gene networks. Both WGCNA and Short Time-Series Expression Miner (STEM) 25 are gene co-expression analysis methods. Compared with STEM analysis, WGCNA has the following advantages. (1) In terms of clustering method, it uses a weighted gene co-expression strategy (scale-free distribution), which is more consistent with biological phenomena. (2) The interaction relationship between genes can be presented, and the hub genes at the center of the co-expression network can be found. (3) It is suitable for large sample sizes, and the more samples the better. By contrast, if a STEM analysis is carried out for more than 5 points, the results will be very complicated and the accuracy will be reduced. STEM analysis can only be carried out for 8 points at most. (4) Correlation with phenotype is possible; correlation analysis between module characteristic values, hub genes, and specific traits and phenotypes can be carried out to analyze biological problems more accurately. In order to reveal the key genes and regulatory factors related to triterpenoid biosynthesis in W.
cocos mycelia, this study selected two strains with significantly different triterpenoid contents as materials and performed hypha transcriptome analysis at three different culture times. WGCNA was used for comprehensive analysis, thereby laying a theoretical foundation for improving the triterpenoid biosynthesis yield of W. cocos.

## Materials and methods

### Biomaterials and culture methods

Both the high-yielding (DZAC-Wp-H-29) and low-yielding (DZAC-Wp-L-123) triterpenoid strains were derived from the sexually reproduced progeny strain 5.78 of W. cocos (purchased from the Institute of Microbiology, Chinese Academy of Sciences, Beijing, China, and stored in a refrigerator at − 80 °C at the Institute of Fungal Resources, Guizhou University). For the W. cocos potato dextrose agar (PDA) medium (no. 17 medium, Institute of Microbiology, Chinese Academy of Sciences), potatoes were washed, peeled, and cut into pieces, and 200 g of potatoes was put into 1000 mL of water, boiled for 30 min, then filtered through gauze. The filtrate was mixed with 1000 mL distilled water with 20 g glucose, 1 g KH2PO4, 0.5 g MgSO4·7H2O, 10 mg VB1, and 18 g agar at natural pH. Mycelia were cultured for 17, 34, and 51 d at 25 °C in the dark, quickly frozen in liquid nitrogen, and then stored in a refrigerator at − 80 °C 26.

### Colorimetry measurement of total triterpenoid

The colorimetric determination of total triterpenoid of W. cocos followed Liu et al. 27 with modifications. First, 0.05 g of dry mycelium powder (60 mesh) was placed in a 2 mL centrifuge tube, and 1.5 mL anhydrous ethanol was added. After ultrasonic extraction for 15 min, followed by centrifugation at 10,000 r/min for 5 min, the supernatant was placed in a 5 mL volumetric flask. Then, 1.5 mL anhydrous ethanol was again added to the centrifuge tube. After ultrasonic treatment for 15 min and centrifugation at 10,000 r/min for 5 min, the supernatant was taken and merged into the 5 mL volumetric flask, and the flask was brought to volume with anhydrous ethanol. Then, 2 mL of extract was placed in a test tube, volatilized at 50 °C, and cooled. Then, 0.2 mL of 5% vanillin in glacial acetic acid and 1 mL of perchloric acid were added and mixed in. The mixture was bathed in 70 °C water for 20 min, then removed from the water bath and cooled to room temperature, and 5 mL of anhydrous ethanol was added and mixed in. Then, 200 μL of the mixture was taken for absorbance measurement at 560 nm for 10–25 min; the reference substance was oleanolic acid 26.

### RNA extraction and quantification analysis

Because W. cocos hyphae are rich in polysaccharides, total RNA was extracted with total RNA extraction auxiliaries and RNAiso Plus (Takara, China Bao Biological Engineering (Dalian) Co., Ltd., Dalian, China), DNA pollution was removed by adding RNase-free DNase I, and three biological repeats were carried out. Total RNA was checked on 1% agarose gel and examined by NanoDrop ND2000 spectrophotometer (NanoDrop Technologies, Wilmington, DE, USA). The RNA integrity number (RIN) values (> 8.0) of these samples were evaluated by Agilent 2100 Bioanalyzer (Santa Clara, CA, USA). The purity, concentration, and integrity of the total RNA samples were confirmed through testing and evaluation, and the samples were then prepared for use 26.

### Construction and sequencing of cDNA library

First, mRNA was isolated from total RNA with Oligo (dT) beads, then broken into short fragments with fragment buffer.
Then, short fragments were reverse transcribed into the first-strand cDNA with a random primer, and the second-strand cDNA was synthesized with DNA polymerase I, RNase H, dNTP, and buffer solution. The cDNA fragments were purified with 1.8× Agencourt AMPure XP Beads and end-repaired, and poly (A) was added and ligated to Illumina sequence adapters. The ligation products were size-selected by agarose gel electrophoresis, PCR-amplified, and sequenced using Illumina HiSeqTM 4000 by Gene Denovo Biotechnology Co. (Guangzhou, China) 26.

### Sequence assembly and functional annotations

De novo transcriptome assembly was carried out with the Trinity short reads assembling program 28. The software parameters were as follows: kmer size = 31, min kmer cov = 12; all other nonimportant parameters were default values. Clean reads were aligned with reference sequences to obtain an alignment rate with Bowtie2 short reads alignment software 29. The software parameters were the default parameters. Basic annotation of unigenes includes protein functional, pathway, Cluster of Orthologous Groups of proteins (COG/KOG) functional, and GO (Gene Ontology) annotation. To annotate the unigenes, we used the BLASTx program (http://www.ncbi.nlm.nih.gov/BLAST/) with an E-value threshold of 1 × 10−5, giving priority to the National Center for Biotechnology Information (NCBI) non-redundant protein (Nr) database (http://www.ncbi.nlm.nih.gov), the Swiss-Prot protein database (http://www.expasy.ch/sprot), the KEGG (Kyoto Encyclopedia of Genes and Genomes) database 30 (http://www.genome.jp/kegg), the COG/KOG database (http://www.ncbi.nlm.nih.gov/COG), and the Plant Transcription Factor Database (http://plntfdb.bio.uni-potsdam.de/v3.0/). Protein functional annotations could be obtained according to the best alignment results. Finally, ESTScan software 31 was used to predict the coding region of unigenes that could not be compared with the above protein libraries, and the nucleic acid sequence (sequence direction 5′ → 3′) and amino acid sequence of the coding region were obtained. GO annotation information of unigenes was analyzed by Blast2GO software according to the Nr annotation information 32, then functional classification of unigenes was performed by WEGO software 33.

### Unigene expression differential analysis

Unigene expression was calculated and normalized to RPKM 34. The formula is RPKM = (1,000,000 × C)/(N × L/1000), where RPKM is the expression of unigene A, C is the number of reads that are uniquely mapped to unigene A, N is the total number of reads that are uniquely mapped to all unigenes, and L is the length (base number) of unigene A. Concordant PE read alignments were used to normalize the calculation. Difference analysis based on edgeR 35 was implemented by the R package. Normalization uses the calcNormFactors function embedded in edgeR. Gene dispersion uses the estimateTagwiseDisp function. Differentially expressed genes (DEGs) were those with false discovery rate (FDR) < 0.05 and |log2FC| ≥ 1. The calculation method of FDR 36 is based on the method of Benjamini and Hochberg. The formula is FDR = p × (m/k), where p is the p-value, m is the number of inspections, and k is the rank of the inspection p-values among all p-values (from small to large).
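The two formulas above translate directly into code; the sketch below (illustrative only, with made-up read counts) implements RPKM and the FDR formula exactly as stated. Note that the full Benjamini–Hochberg procedure additionally enforces monotonicity across ranks, which the formula as stated omits.

```python
# RPKM normalization and the FDR formula as stated in the text.
import numpy as np

def rpkm(C, N, L):
    """RPKM = (1,000,000 * C) / (N * L / 1000) for a unigene of length L bases."""
    return (1_000_000 * C) / (N * L / 1000)

def fdr(pvalues):
    """FDR_k = p_k * m / k, with p-values ranked from smallest to largest."""
    p = np.asarray(pvalues, dtype=float)
    m = p.size
    order = np.argsort(p)
    out = np.empty(m)
    out[order] = p[order] * m / np.arange(1, m + 1)
    return out

print(rpkm(C=500, N=20_000_000, L=1500))            # 16.67 (made-up counts)
print(fdr([0.001, 0.010, 0.030, 0.200, 0.800]))
```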
### RT-qPCR validation

RT-qPCR (real-time quantitative polymerase chain reaction) specific primers were designed with Beacon Designer 7.9 (Beijing Biological Technology Co., Ltd., Beijing, China) (Supplementary Table S1). The first strand of cDNA was obtained by reverse transcription with Aidlab's reverse transcription kit (TUREscript 1st Strand cDNA Synthesis Kit, Aidlab Biotechnologies Co., Ltd., Beijing, China). RT-qPCR was conducted using the qTOWER 2.2 PCR System (Jena, Germany) and 2 × SYBR® Green PCR Master Mix (DBI). Each reaction was performed in a total reaction mixture volume of 10 μL containing 1 μL of first-strand cDNA as a template. The amplification program was as follows: 3 min at 95 °C; 40 cycles of 10 s at 95 °C, 30 s at 58 °C, and 45 s at 72 °C; and finally 10 min at 72 °C. All RT-qPCR experiments were repeated three times, with three technical repeats for each experiment. Expression levels of candidate genes were determined using the 2^−ΔΔCt method 26. Expression levels were normalized against the reference gene pab1 (unigene0013050).

### Weighted gene co-expression network analysis (WGCNA)

The R language package was used for analysis 24. First, low-quality data were filtered, then the modules were divided. The Power value was 0.8, the similarity was 0.7, the minimum number of genes in a module was 50, and the rest were default parameters.

### Statistical analysis

SPSS Statistics software was used for basic calculations. A single-factor ANOVA in comparative mean analysis was used for significance testing. Standardized data were obtained through descriptive statistical analysis. The principal component and correlation analyses were conducted with the R-language package (R 3.4.3 2017) (http://www.r-project.org/), run with default parameters. GraphPad Prism 7.0 was used for histograms, heat maps, and correlation graphs. Both the histograms and the heat maps were obtained by setting up grouped data tables. The correlation graph was obtained by inputting the data into the XY table for correlation and linear regression analysis. Cytoscape 3.7.1 was used to map the gene–gene co-expression network. First, a file containing the weight value between gene and gene was input, along with an attribute file containing the symbol, type, and connectivity value of genes in the module. Adobe Illustrator CS6 was used for illustration.

## Results

### Analysis of total triterpenoid contents and genes in high-yielding and low-yielding strains

A colorimetric method 27 was used to determine the content of triterpenoid in W. cocos. There were very significant differences between the two strains (Supplementary Figure S1). The results indicated that differences in gene expression at different culture times may lead to differences in the synthesis and final accumulation of triterpenoid secondary metabolites 26. Transcriptome sequencing results (Table 1) and quality evaluation (Supplementary Table S1) showed that the assembly quality of sequencing was good. Real-time quantitative polymerase chain reaction (RT-qPCR) was conducted on 12 randomly selected genes (Supplementary Table S2) with TUBB2 as the internal reference gene. In Supplementary Figure S2, each point represents a value of fold change of expression level at d34 or d51 compared with that at d17 or d34. Fold-change values were log10-transformed.
The results showed that the gene expression trend was consistent in transcriptome sequencing and RT-qPCR experiments, and the data showed a good correlation (r = 0.530, P < 0.001, Supplementary Figure S2). For each gene, the expression results of RT-qPCR showed a similar trend to the expression data of transcriptome sequencing (Supplementary Figure S3). Furthermore, the transcriptome sequencing data in this study were shown to be reliable. Venn diagrams were created for the DEGs between the high-yielding and low-yielding strains at the three different culture times (Fig. 1). In the high-yielding (H) strain and low-yielding (L) strain, respectively, 65 and 98 overlapping DEGs were obtained (Fig. 1a,b), and 698 overlapping DEGs were obtained between the H and L strains (Fig. 1c). The 698 overlapping DEGs between the H and L strains across the three culture times greatly outnumbered those within the high-yielding and low-yielding strains, by factors of 10.7 and 7.1, respectively. The DEGs between the H and L strains cultured for 17 days, 34 days, and 51 days numbered 2035, 3115, and 2681, respectively, first increasing and then decreasing. The Venn diagram results of overlapping genes in the H strains, in the L strains, and between the H and L strains showed that there was a large quantity of DEGs, while the number of overlapping genes was very few, at only 3 (Fig. 1d), and the number of overlapping DEGs between the H and L strains was only 9. The Venn diagram results showed that the gene expression difference between the two strains was large, which was essentially different from the gene expression difference within each strain across culture times. Zeng et al. 26 used STEM to focus on genes whose expression trends were opposite in the H and L strains with increasing culture time. The research results indicated that the accumulation of triterpenoid was affected by gene expression differences in high-yielding and low-yielding strains. However, according to the above Venn diagram analysis, the DEGs related to triterpenoid biosynthesis were different from those related to triterpenoid accumulation in the two strains that we tested. Therefore, the analysis of Zeng et al. 26 may have omitted the key genes affecting triterpenoid biosynthesis in the two strains.

### Modules related to triterpenoid biosynthesis revealed by WGCNA

In order to identify the core genes of the regulatory network related to triterpenoid biosynthesis, we performed WGCNA on the transcriptome data of 18 samples. After data filtering, the Power value was selected as 8 to divide the modules, the similarity degree was selected as 0.7, the minimum number of genes in a module was 50, and 14 modules were finally obtained. The weighted composite value of all gene expression quantities in the module was used as the module characteristic value to draw the heat map of sample expression pattern (Fig. 2). It can be found that the gene expression quantities are significantly different between the high-yielding strain (H) and the low-yielding strain (L) in the three modules of blue, brown, and bisque4. The results of correlation analysis between modules (Supplementary Figure S4) show that blue and brown, and blue and bisque4, are significantly negatively correlated, with correlation coefficients of − 0.7 and − 0.59, respectively. Brown and bisque4 are weakly correlated, with a correlation coefficient of 0.24.
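For readers unfamiliar with what WGCNA does internally, the core construction can be sketched in a few lines; the actual analysis here used the R WGCNA package, and the data below are synthetic, but the soft-threshold power of 8 matches the value selected in this study.

```python
# Minimal WGCNA-style sketch: soft-thresholded adjacency from gene-gene
# correlations, followed by hierarchical clustering into modules.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
expr = rng.standard_normal((200, 18))     # 200 genes x 18 samples (synthetic)

beta = 8                                  # soft-threshold power, as above
adj = np.abs(np.corrcoef(expr)) ** beta   # weighted co-expression network
dissim = 1.0 - adj
np.fill_diagonal(dissim, 0.0)

# condensed upper-triangle distances -> average-linkage tree -> modules
iu = np.triu_indices_from(dissim, k=1)
tree = linkage(dissim[iu], method="average")
modules = fcluster(tree, t=0.7, criterion="distance")
print("number of modules:", len(np.unique(modules)))
```

(Real WGCNA works with the topological overlap matrix rather than the raw adjacency, prunes small modules, and computes module eigengenes; this sketch only shows the skeleton.)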
### GO and KEGG enrichment analysis on blue, brown, and bisque4

GO enrichment analysis was carried out on the genes in the three modules of blue, brown, and bisque4, respectively (Supplementary Figure S5). The results showed that genes in these three modules were mainly enriched in catalytic activity and binding in the molecular functions; metabolic processes, cellular processes, and single-organism processes in the biological processes; and cell and cell parts in the cellular component. The three modules had the same GO enrichment results; only the number of genes was different. Furthermore, the KEGG enrichment results (Supplementary Table S3) for the three modules were not the same. The brown module (P < 0.05) was mainly enriched in the metabolic pathways of glyceride, sulfur, and galactose; non-homologous end joining; and microbial metabolism in diverse environments (Fig. 3). The blue module (P < 0.05) was mainly enriched in the metabolism and biosynthesis of various amino acids; metabolism of oxycarboxylic acid and folate; biosynthesis of secondary metabolites, aminoacyl-tRNA, pantothenate, and CoA; microbial metabolism in diverse environments; basal transcription factors, etc. (Fig. 4). The bisque4 module (P < 0.05) was mainly enriched in the cell cycle; meiosis; DNA repair; mismatch repair; nucleotide excision repair; base excision repair; biosynthesis of terpenoid backbones and unsaturated fatty acids; and fatty acid metabolism (Fig. 5). The KEGG enrichment results of the three modules were significantly different, which was consistent with the results of the module correlation analysis.

The genes related to triterpenoid anabolism in each module were selected according to the KEGG annotation results, and for each of these genes, the 10 genes in the module with the highest expression-correlation weight values were also selected (Supplementary Table S4). These genes were then subjected to GO and KEGG enrichment analysis. GO enrichment (Supplementary Figure S6) showed that these selected genes were mainly concentrated in catalytic activity and binding in the molecular functions; metabolic processes, cellular processes, and single-organism processes in the biological processes; and cell and cell parts in the cellular component. The enrichment of these three modules' genes was still basically the same. Detailed GO information of these three modules' genes is displayed in Supplementary Tables S5–S7. KEGG enrichment results of the genes related to triterpenoid biosynthesis in each module (Supplementary Table S8) showed that the brown module was only enriched in the metabolism of amino sugars and nucleotide sugars. The blue module was mainly enriched in the metabolism and biosynthesis of various amino acids; biosynthesis of secondary metabolites; oxocarboxylic acid metabolism; microbial metabolism in diverse environments; and basal transcription factors. The bisque4 module was mainly enriched in the biosynthesis of triterpenoid backbones and unsaturated fatty acids; fatty acid metabolism; the cell cycle; and meiosis. Combined with the results of the STEM analysis by Zeng et al. 26, a stable membrane structure may be necessary to maintain a high accumulation of triterpenoid in W. cocos, and the high accumulation capacity of triterpenoid in W. cocos may be related to the synthesis capacity of sterols. Only bisque4 of the three modules was significantly enriched in the biosynthesis of triterpenoid backbones and unsaturated fatty acids.
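For context, enrichment P-values such as those above are conventionally computed with a hypergeometric test; the numbers in the sketch below are invented, and the actual analysis used the annotation pipeline described in the Methods.

```python
# Hypergeometric enrichment test: is a pathway over-represented in a module?
from scipy.stats import hypergeom

M = 15000   # background: all annotated genes (made-up)
n = 120     # genes annotated to the pathway of interest (made-up)
N = 600     # genes in the module (made-up)
k = 18      # module genes that fall in the pathway (made-up)

# P(X >= k): chance of drawing at least k pathway genes in N draws
p = hypergeom.sf(k - 1, M, n, N)
print(f"enrichment P-value: {p:.3e}")
```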
### Gene–gene correlation analysis for triterpenoid related genes of the three modules Cytoscape was used to map the relationships according to the values of connectivity for the three modules’ genes related to triterpenoid biosynthesis. There are two core genes of sterol-4alpha-carboxylate 3-dehydrogenase (erg26) (unigene0006213) and lanosterol 14-alpha-demethylase (erg11) (unigene0015621) in the brown module (Supplementary Figure S7), which are both genes in the steroid biosynthetic pathway (KEGG annotation) and regulatory factors (PlnTFDB annotation). Erg26 and erg11 are regulated by multiple genes, respectively. Erg26 is regulated by both the regulator GIP (Copia protein) (unigene0004283) and OPT5 (Oligopeptide Transporter 5) (unigene0000595). Erg11 is regulated by Matk (kinase-like protein) (unigene0006800) and betA (oxygen-dependent choline dehydrogenase) (unigene0011761). ERG9 (farnesyl-diphosphate farnesyltransferase) (unigene0013210) has a weak correlation with erg26. FDPS (farnesyl-diphosphate synthase) (unigene0002741) is indirectly related to erg26 and erg11. In addition, the three genes of FACE1 (STE24 endopeptidase) (unigene0000435), PST2 (unigene0001237), and Fntb (unigene0014799) are indirectly related in the module. Except for TAT (tyrosine aminotransferase) (unigene0003146) with moderate connectivity, several other genes related to triterpenoid biosynthesis in the blue module (Supplementary Figure S8) have generally low connectivity. TAT has a direct or indirect relationship with erg11 (unigene0015620), ERG2 (C-8 sterol isomerase) (unigene0004578), COQ2 (4-hydroxybenzoate polyprenyltransferase) (unigene0001642), erg26 (unigene0007103), FTA (protein farnesyltransferase subunit beta) (unigene0010654), ACAT (sterol O-acyltransferase) (unigene0015643), CAO2 (carotenoid oxygenase) (unigene0011352), and COQ2 (unigene0001914), respectively. Erg6 (sterol 24-C-methyltransferase) (unigene0004059) and erg11 (unigene0012490) with low connectivity are associated with several different genes, respectively. TAT is regulated by four regulatory factors and multiple genes. The two regulatory factors norA (aryl-alcohol dehydrogenase) (unigene0005043) and Pm20d2 (peptidase M20 domain-containing protein 2) (unigene0004261) in the module have high connectivity and are indirectly related to TAT. In the bisque4 module (Supplementary Figure S9), except for TAT (unigene0012065), which has very low connectivity, the other 9 genes related to triterpenoid biosynthesis are correlated with each other and interlaced into a complex regulatory network. In particular, erg6 genes (unigene0014738, unigene0014749), SQLE (squalene monooxygenase) (unigene0009035), mvd1 (diphosphomevalonate decarboxylase) (unigene0001911), ACAT1-b (acetyl-CoA C-acetyltransferase) (unigene0014534), and hgsA (hydroxymethylglutaryl-CoA synthase) (unigene0000449) are five genes that have high connectivity and strong interactions, which are simultaneously regulated by regulators and multiple genes. The erg6 (unigene0014738) gene is particularly important and interacts directly and indirectly with the four core genes SQLE, mvd1, ACAT1-b, and hgsA. Through the above correlation analysis of genes related to triterpenoid biosynthesis and metabolism, eight core genes (ACAT1-b, hgsA, mvd1, SQLE, TAT, erg11, erg26, and erg6 genes) of the regulatory network were screened out from the three modules that may be related to triterpene anabolism. ### Screening of key genes in biosynthesis of triterpenoid in W. 
cocos

KEGG was used for mapping the triterpenoid metabolic pathway. According to the co-expression relationship between genes in the above three modules, genes in the pathway are mapped to the metabolic pathway, while other genes are arranged outside the pathway (Fig. 6). Supplementary Figure S10 is a standardized heat map of the genes in Fig. 6. The eight core genes are located in the upstream and downstream pathways of triterpenoid biosynthesis. With the exception of TAT, the other seven core genes directly or indirectly interact with each other and are simultaneously affected by regulators or multiple protease genes. In the bisque4 module, in the upstream of the biosynthesis of triterpenoid, three genes (ACAT1-b, hgsA, and mvd1) interact with each other and are closely related, and are also affected by the protease genes PEX19-1 and CC1G-02019. ACAT1-b and hgsA are also affected by the protease gene PEX5L. Both hgsA and mvd1 are influenced by the regulator ACLY and the protease gene YHM2. All three enzymes interact directly with erg6. ACAT1-b, the upstream core gene of the pathway, and erg6, the last downstream core gene of the pathway, are also co-acted upon by ADK1 and the unknown protein unigene0016030. SQLE, the core gene of the biosynthesis of triterpenoid, interacts directly with erg6 and simultaneously interacts indirectly with erg6 through the regulators YKT6 (snare-like protein), EXO84, and unigene0001269. SQLE is also regulated by the regulator mitochondrial protein (msp1) and multiple protease genes. The network relationships show that the expression of SQLE is affected by many factors, especially its relationship with erg6. Two of the three erg6 sequences are in the center of the network in the bisque4 module, affecting the expression of each core gene across the pathway. Erg6 was co-expressed with several genes, including three regulators (YKT6, malA (NADP-dependent malic enzyme), and cytochrome P450 (CYP3A24)) and 13 protein genes (Fig. 6, Supplementary Figure S9). The complexity of the network relationships indicates the complexity of core gene expression regulation. The core genes in the pathway regulate each other to affect their expressions and are also affected by many factors outside the pathway. In the brown module, in the downstream of the biosynthesis of triterpenoid, erg11 and erg26 are jointly affected by the regulatory factor OPT5 and multiple protease genes. Erg11 is also affected by the regulatory factors Matk and betA, ribosome biogenesis protein (bop1-a), and the unknown protein unigene0013533. Erg26 is affected by the regulator GIP, the protease gene aorO, and unigene0001876. In the blue module, on the branches related to the biosynthesis of triterpenoid, TAT, with moderate connectivity, is simultaneously affected by multiple regulators and protease genes, which have high connectivity. The network diagram shows that the regulatory pattern of TAT is very complex and many factors affect its expression. It is worth noting that Pm20d2 and norA in the blue module have very high connectivity and are directly or indirectly related to multiple triterpenoid-related genes. They were also screened in the Short Time-series Expression Miner (STEM) analysis results of Zeng et al. 26 and were positively correlated with erg26, ERG2, and TAT; Pm20d2 was negatively correlated with erg11.

## Discussion

In this study, the high-yielding DZAC-Wp-H-29 (H) and low-yielding DZAC-Wp-L-123 (L) strains of W.
cocos with different total triterpenoid contents were screened from the sexual progeny of the same strain. The selection of materials and culture times avoided any background interference caused by different genetic bases or developmental stages of materials, making the research results more accurate and reliable. The weighted gene co-expression network analysis (WGCNA) method was used for analysis. Among the fourteen gene modules with similar expression patterns, three modules (bisque4, blue, and brown) were selected for further analysis according to the phenotypic differences in the triterpenoid contents of the two strains. The top 10 genes with the highest connectivity values in relation to triterpenoid-related genes in each module were selected, and a network diagram was built according to the gene connectivity relationships in each module. Five core genes in the bisque4 module (ACAT1-b, hgsA, mvd1, SQLE, and erg6, the last represented by two sequences) constituted a complex network of direct and indirect effects, with erg6 having an especially important status. Two core genes in the brown module (erg26 and erg11) and the TAT gene in the blue module were also located in the center of their respective networks.

Acetyl-CoA C-acetyltransferase (ACAT1-b) is the first enzyme in the mevalonate pathway, catalyzing the conversion of acetyl-CoA to acetoacetyl-CoA. Hydroxymethylglutaryl-CoA synthase (hgsA) catalyzes the next step, the conversion of acetoacetyl-CoA to hydroxymethylglutaryl-CoA, and it is also a regulatory factor. These two genes are at the beginning of the upstream pathway of triterpenoid biosynthesis. Their position determines their status; as a result, their expression directly affects the amount of subsequent triterpenoid biosynthesis. Diphosphomevalonate decarboxylase (mvd1) catalyzes the conversion of 5-diphosphomevalonate to isopentenyl diphosphate. Isopentenyl diphosphate is the universal precursor of all isoprenoid compounds, and its partitioning among branch pathways is directly related to the biosynthesis of triterpenoid. As can be seen from the metabolic pathway diagram in KEGG, the expression of mvd1 directly affects the amount of triterpenoid biosynthesized. The results of network analysis (Fig. 6) show that ACAT1-b, hgsA, and mvd1 had direct correlations with the erg6 gene, which catalyzes sterol synthesis at the downstream end of the pathway. It can be seen that the expressions of these three core upstream genes that affect the biosynthesis of triterpenoid and sterols were uniformly regulated by the downstream erg6 gene, indicating that the biosynthesis and accumulation of triterpenoid could be closely related to the biosynthesis of sterols.

Squalene monooxygenase (SQLE) catalyzes the conversion of squalene into 2,3-Oxidosqualene, which is the first oxidation step in phytosterol and triterpenoid biosynthesis. Subsequently, 2,3-Oxidosqualene is cyclized by oxidosqualene cyclase into a multicyclic triterpenoid backbone. These molecules are further oxidized by CYP450s to form triterpenoids. Finally, these triterpenoids are glycosylated by UGTs into triterpenoid saponins 23. In the cyclization of 2,3-Oxidosqualene, inner bonds are introduced into the main chain of 2,3-Oxidosqualene to form polycyclic molecules 37. In the process of cyclization, more than 100 triterpene backbones can be generated due to the various possible combinations of inner bonds. However, only a few cyclized products are further oxidized by cyp450 38.
In addition, the cyclized products usually have different conformations and can produce different triterpenoid saponins 39. SQLE is one of the key enzymes that regulate the biosynthesis of downstream triterpenoids and phytosterols 40. In the study of Han et al. 40, two SQLE enzyme genes were cloned from ginseng; silencing of the SQLE1 gene reduced ginsenoside production, while upregulation of SQLE2 led to enhanced phytosterol accumulation. SQLE1 regulates the biosynthesis of ginsenoside, but not phytosterol. 2,3-Oxidosqualene is a common precursor of phytosterol and triterpenoid saponin biosynthesis. This indicates that 2,3-Oxidosqualene from the catalysis of different SQLE genes may be converted into different products due to differences in conformation. In the present study, only one SQLE gene was annotated, which was highly expressed in the low-yielding strain. This result suggests that it may be regulated at multiple levels, such as post-transcriptional regulation or post-translational modification. Correlation analysis showed that SQLE expression was mainly regulated by direct and indirect interactions of erg6, as well as by msp1 and five protease genes. Furthermore, the accumulation of triterpenoid could be closely related to the biosynthesis of sterols.

Sterol 24-C-methyltransferase (erg6) catalyzes the conversion of zymosterol into fecosterol, which is then converted to ergosterol via sterol isomerase. Erg6 catalyzes a key step, the second transmethylation of sterol synthesis. More than 10 sequences of erg6 in different plants have been isolated and cloned, which can be divided into two families according to their amino acid sequences 41. At least three erg6 sequences in Arabidopsis thaliana have been cloned and their functions confirmed 42. In the present study, three sequences were annotated to erg6, and they were all highly expressed in the low-yielding strain. Two of these three sequences were directly and closely related to the other four core genes in the bisque4 module, indirectly related to ACAT1-b through two protease genes, and indirectly related to SQLE through the regulatory factor YKT6 and two protease genes. These two erg6 genes were, respectively, affected by the regulatory factors malA and CYP3A24, as well as by multiple protease genes, showing extremely complex regulatory patterns. These results indicate that the biosynthesis of sterols plays an important role in the biosynthesis and accumulation of triterpenoid in W. cocos.

In fungi, the sterols are derived from lanosterol, and removal of the 14α-methyl group is required for their biosynthesis. Sterol 14α-demethylase (erg11) is a cytochrome P450 43 that plays an important role in catalyzing the conversion of lanosterol to sterol; it has been shown that the 14α-methyl group is absent from all known functional sterols 44. Different erg11 genes have different specific substrates. The expression of human CYP51 is regulated by hydroxysteroids 45. Erg11 can be used as a target gene to inhibit the growth of fungi 46. It is a key enzyme in sterol synthesis, and the resulting sterol is an important membrane component and a precursor of hormone biosynthesis 47. In the present study, four genes were annotated to erg11 and their expressions were very different. One of them belonged to the brown module and was highly expressed in the low-yielding strain. It was regulated by the regulatory factors OPT5, Matk, and betA, as well as by multiple protease genes.
Sterol 4α-carboxylate 3-dehydrogenase (erg26) catalyzes the formation of a keto group at the C-3 position and the removal of the carboxyl group from C-4. It is a key enzyme in the synthesis of sterols. The growth defects of its mutant can be rescued not only by an exogenous sterol supply, but also by a second mutation in a gene encoding a heme biosynthesis enzyme, indicating that the accumulation of the erg26 intermediate (a carboxylic acid sterol) is toxic to the growth of heme-competent yeast cells 46. Erg26 and erg11 can be used as target genes to inhibit fungal growth. In the present study, the expression of erg26 was regulated by the regulators OPT5 and GIP, as well as by multiple protease genes. Erg26 and erg11 interact indirectly through four protease genes and the regulatory factors OPT5 and bop1-a. They are key enzymes in sterol synthesis, and the resulting sterol is an important membrane component and a precursor of hormone biosynthesis 47.

Tyrosine aminotransferase (TAT) is an enzyme that catalyzes the conversion of the aromatic amino acid tyrosine into 4-hydroxyphenylpyruvate. It is affected by four regulatory factors and six protease genes. In the STEM analysis of Zeng et al. 26, three genes (TAT, erg26, and erg11) were indirectly correlated through the regulatory factor Pm20d2. TAT, erg26, and erg11 were all identified as core genes in two different kinds of analysis, indicating that these three genes play an important role in the biosynthesis of triterpenoids and sterols in W. cocos. In addition, in the STEM analysis, TAT, erg26, and ERG2 were also indirectly correlated with norA through the action of protease genes; norA was also indirectly correlated with TAT in the blue module. Pm20d2 and norA are regulatory factors and protease genes outside the triterpenoid synthesis pathway, and they are all closely related to core genes in the two different analysis methods.

In summary, the results of the present study point to eight core genes related to the synthesis and accumulation of triterpenoids, namely, ACAT1-b, hgsA, mvd1, SQLE, erg6, TAT, erg26, and erg11, as well as to multiple regulatory factors and protease genes, such as Pm20d2 and norA, outside the pathway. Among the eight core genes, erg6 in the bisque4 module is at the center, and its expression directly affects the expression of four other core genes (ACAT1-b, hgsA, mvd1, and SQLE). In the triterpenoid synthesis-related pathways, SQLE in the bisque4 module, TAT in the blue module, erg26 and erg11 in the brown module, as well as Pm20d2 and norA outside the pathway, are six genes that all show high correlation and connectivity in the two analysis methods. This result shows that they play an important role in the biosynthesis and accumulation of triterpenoids in W. cocos and should be the focus of follow-up studies.

It has been reported 48 that during the development of peas after germination, the production of β-amyrin is very active, and the biosynthesis of sterols increases after several days of germination. Although the significance of this dramatic switch between sterol and triterpenoid synthesis is unclear, similar changes occur during the development of sorghum seeds, a monocotyledon, suggesting that this may be a common phenomenon among different plant species. The results of the present study also showed that triterpenoid biosynthesis in W. cocos is closely related to the biosynthesis of sterols.

## Conclusion

Two new findings were obtained in this study:
(1) W. cocos triterpenoid biosynthesis is closely related to eight core genes in the triterpenoid-related metabolic pathways (ACAT1-b, hgsA, mvd1, SQLE, erg6, TAT, erg26, and erg11), as well as to multiple regulatory factors outside the pathway, such as Pm20d2 and norA, and to the expression of protease genes. (2) W. cocos triterpenoid biosynthesis is indeed closely related to the expression of sterol metabolic pathway genes.

## Data availability

The datasets generated for this study can be found in the NCBI BioProject PRJNA552734.

## References

1. Chinese Pharmacopoeia. National Pharmacopoeia Commission edn. (China Medicine Science and Technology Press, 2015).
2. Rios, J. L. Chemical constituents and pharmacological properties of Poria cocos. Planta Med. 77, 681–691. https://doi.org/10.1055/s-0030-1270823 (2011).
3. Osbourn, A. Saponins and plant defence—A soap story. Trends Plant Sci. 1, 4–9. https://doi.org/10.1016/s1360-1385(96)80016-1 (1996).
4. Chung, I. M. & Miller, D. A. Natural herbicide potential of alfalfa residue on selected weed species. Agron. J. 87, 920–925 (1995).
5. Xie, J. H., Lin, J., Yu, L. Z. & Lei, L. S. Experimental study of the inhibitory effect of total triterpenoids from Poria cocos on mouse immune response and therapeutic effect on rat adjuvant arthritis. Chin. Med. Pharm. Clin. 32, 89–92 (2016).
6. Deng, Y. Y. et al. Comparative study on effective substances of Poria cocos regulating immune function. Guide China Med. 10, 94–95 (2012).
7. Wen, H. L. et al. The anti-tumor effect of pachymic acid on osteosarcoma cells by inducing PTEN and Caspase 3/7-dependent apoptosis. J. Nat. Med. 72, 57–63. https://doi.org/10.1007/s11418-017-1117-2 (2018).
8. Chu, B. F. et al. An ethanol extract of Poria cocos inhibits the proliferation of non-small cell lung cancer A549 cells via the mitochondria-mediated caspase activation pathway. J. Funct. Food. 23, 614–627. https://doi.org/10.1016/j.jff.2016.03.016 (2016).
9. Pan, Y. F., Yang, X. L., Liu, D. & Zhang, D. D. Active constituents and anti-inflammatory mechanism of Fangji Fuling Decoction. Chin. Tradit. Med. 35, 50–54 (2013).
10. Lee, S. et al. Anti-inflammatory activity of the sclerotia of edible fungus, Poria cocos Wolf and their active lanostane triterpenoids. J. Funct. Food. 32, 27–36. https://doi.org/10.1016/j.jff.2017.02.012 (2017).
11. Zan, J. F., Shen, C. J., Zhang, L. P. & Liu, Y. W. Effect of Poria cocos hydroethanolic extract on treating adriamycin-induced rat model of nephrotic syndrome. Chin. J. Integr. Med. 23, 916–922. https://doi.org/10.1007/s11655-016-2643-6 (2017).
12. Lee, D. et al. Protective effect of lanostane triterpenoids from the sclerotia of Poria cocos Wolf against cisplatin-induced apoptosis in LLC-PK1 cells. Bioorg. Med. Chem. Lett. 27, 2881–2885 (2017).
13. Cheng, S. M., Gui, Y., Shen, S. & Huang, W. Antioxidant properties of triterpenes from Poria cocos peel. Food Science 32, 27–30 (2011).
14. Mao, G. N. et al. The hypolipidemic study of total triterpenic compounds from sclerotia of Poria cocos. J. Shanxi Univ. Sci. Technol. 33, 130–134 (2015).
15. Zhang, X. S., Rao, Z. G., Hu, X. M. & Liu, P. Preventive effect of triterpenes from Poria cocos on liver injury in mice. Food Science 33, 270–273 (2012).
16. Zhang, Q. Q. et al. Experimental study on the anticonvulsive effect of Poria cocos triterpenoids. Chin. J. Integr. Med. Cardio Cerebrovasc. Dis. 67, 712–714 (2009).
17. Yu, C. M., Li, J. P. & Hu, X. M. The antiepileptic activity of Poria cocos extract. Chin. Patent Med. 39, 1288–1290 (2017).
18. Scognamiglio, M. et al. Oleanane saponins from Bellis sylvestris Cyr. and evaluation of their phytotoxicity on Aegilops geniculata Roth. Phytochemistry 84, 125–134. https://doi.org/10.1016/j.phytochem.2012.08.006 (2012).
19. Potter, D. A. & Kimmerer, T. W. Inhibition of herbivory on young holly leaves: Evidence for the defensive role of saponins. Oecologia 78, 322–329. https://doi.org/10.1007/bf00379105 (1989).
20. Jia, Z. H., Koike, K. & Nikaido, T. Major triterpenoid saponins from Saponaria officinalis. J. Nat. Prod. 61, 1368–1373. https://doi.org/10.1021/np980167u (1998).
21. Dai, Z. et al. Producing aglycons of ginsenosides in bakers' yeast. Sci. Rep. https://doi.org/10.1038/srep03698 (2014).
22. Takemura, M., Tanaka, R. & Misawa, N. Pathway engineering for the production of beta-amyrin and cycloartenol in Escherichia coli—A method to biosynthesize plant-derived triterpene skeletons in E. coli. Appl. Microbiol. Biotechnol. 101, 6615–6625. https://doi.org/10.1007/s00253-017-8409-z (2017).
23. Zhao, Y. J. & Li, C. Biosynthesis of plant triterpenoid saponins in microbial cell factories. J. Agric. Food Chem. 66, 12155–12165. https://doi.org/10.1021/acs.jafc.8b04657 (2018).
24. Langfelder, P. & Horvath, S. WGCNA: An R package for weighted correlation network analysis. BMC Bioinform. https://doi.org/10.1186/1471-2105-9-559 (2008).
25. Ernst, J. & Bar-Joseph, Z. STEM: A tool for the analysis of short time series gene expression data. BMC Bioinform. 7, 191. https://doi.org/10.1186/1471-2105-7-191 (2006).
26. Zeng, G. P., Li, Z. & Zhao, Z. Comparative analysis of the characteristics of triterpenoid transcriptome from different strains of Wolfiporia cocos. Int. J. Mol. Sci. 20, 3703. https://doi.org/10.3390/ijms20153703 (2019).
27. Liu, C. L., Xie, X. X., Liu, H. G. & Xu, L. Study on the optimal conditions for the determination of effective components of Poria cocos by spectrophotometry. Asia-Pac. Tradit. Med. 10, 17–19 (2014).
28. Grabherr, M. G. et al. Full-length transcriptome assembly from RNA-Seq data without a reference genome. Nat. Biotechnol. 29, 644–652. https://doi.org/10.1038/nbt.1883 (2011).
29. Li, R. Q. et al. SOAP2: An improved ultrafast tool for short read alignment. Bioinformatics 25, 1966–1967. https://doi.org/10.1093/bioinformatics/btp336 (2009).
30. Kanehisa, M. & Goto, S. KEGG: Kyoto encyclopedia of genes and genomes. Nucleic Acids Res. 28, 27–30. https://doi.org/10.1093/nar/28.1.27 (2000).
31. Iseli, C., Jongeneel, C. V. & Bucher, P. ESTScan: A program for detecting, evaluating, and reconstructing potential coding regions in EST sequences. In Proceedings. International Conference on Intelligent Systems for Molecular Biology, 138–148 (1999).
32. Conesa, A. et al. Blast2GO: A universal tool for annotation, visualization and analysis in functional genomics research. Bioinformatics 21, 3674–3676. https://doi.org/10.1093/bioinformatics/bti610 (2005).
33. Ye, J. et al. WEGO 2.0: A web tool for analyzing and plotting GO annotations, 2018 update. Nucleic Acids Res. 46, W71–W75. https://doi.org/10.1093/nar/gky400 (2018).
34. Mortazavi, A., Williams, B. A., McCue, K., Schaeffer, L. & Wold, B. Mapping and quantifying mammalian transcriptomes by RNA-Seq. Nat. Methods 5, 621–628. https://doi.org/10.1038/nmeth.1226 (2008).
35. Robinson, M. D., McCarthy, D. J. & Smyth, G. K. edgeR: A Bioconductor package for differential expression analysis of digital gene expression data. Bioinformatics 26, 139–140. https://doi.org/10.1093/bioinformatics/btp616 (2010).
36. Benjamini, Y. & Hochberg, Y. Controlling the false discovery rate: A practical and powerful approach to multiple testing. J. R. Stat. Soc. Ser. B 57, 289–300. https://doi.org/10.2307/2346101 (1995).
37. Xue, Z. Y. et al. Divergent evolution of oxidosqualene cyclases in plants. New Phytol. 193, 1022–1038. https://doi.org/10.1111/j.1469-8137.2011.03997.x (2012).
38. Wendt, K. U., Schulz, G. E., Corey, E. J. & Liu, D. R. Enzyme mechanisms for polycyclic triterpene formation. Angew. Chem. Int. Ed. 39, 2812–2833 (2000).
39. Aragão, G. F. et al. Antiplatelet activity of α- and β-amyrin, isomeric mixture from Protium heptaphyllum. Pharm. Biol. 45, 343–349 (2007).
40. Han, J. Y., In, J. G., Kwon, Y. S. & Choi, Y. E. Regulation of ginsenoside and phytosterol biosynthesis by RNA interferences of squalene epoxidase gene in Panax ginseng. Phytochemistry 71, 36–46. https://doi.org/10.1016/j.phytochem.2009.09.031 (2010).
41. Bouvier-Nave, P., Husselstein, T. & Benveniste, P. Two families of sterol methyltransferases are involved in the first and the second methylation steps of plant sterol biosynthesis. Eur. J. Biochem. 256, 88–96. https://doi.org/10.1046/j.1432-1327.1998.2560088.x (1998).
42. Diener, A. C. et al. Sterol methyltransferase 1 controls the level of cholesterol in plants. Plant Cell 12, 853–870. https://doi.org/10.1105/tpc.12.6.853 (2000).
43. Nelson, D. R. et al. P450 superfamily: Update on new sequences, gene mapping, accession numbers and nomenclature. Pharmacogenetics 6, 1–42. https://doi.org/10.1097/00008571-199602000-00002 (1996).
44. Yoshida, Y. & Aoyama, Y. The P450 superfamily: A group of versatile hemoproteins contributing to the oxidation of various small molecules. Stem Cells 12, 75–88 (1994).
45. Stromstedt, M., Rozman, D. & Waterman, M. R. The ubiquitously expressed human CYP51 encodes lanosterol 14 alpha-demethylase, a cytochrome P450 whose expression is regulated by oxysterols. Arch. Biochem. Biophys. 329, 73–81. https://doi.org/10.1006/abbi.1996.0193 (1996).
46. Gachotte, D., Barbuch, R., Gaylor, J., Nickel, E. & Bard, M. Characterization of the Saccharomyces cerevisiae ERG26 gene encoding the C-3 sterol dehydrogenase (C-4 decarboxylase) involved in sterol biosynthesis. Proc. Natl. Acad. Sci. USA 95, 13794–13799. https://doi.org/10.1073/pnas.95.23.13794 (1998).
47. Haralampidis, K., Trojanowska, M. & Osbourn, A. E. Biosynthesis of triterpenoid saponins in plants. Adv. Biochem. Eng. Biotechnol. 75, 31–49 (2002).
48. Baisted, D. J. Sterol and triterpene synthesis in the developing and germinating pea seed. Biochem. J. 124, 375–383. https://doi.org/10.1042/bj1240375 (1971).

## Acknowledgements

This work was supported by the Talent Base Project of the Organization Department in Guizhou Province, China (QRLF (2013) no. 15; QRLF (2016) no. 23; QRLF (2020) no. 2) and by the Department of Science and Technology, Guizhou, China (QKHJC (2018) 1042; QKZYD (2018) 4018). We are grateful to Guangzhou Gene Denovo Biotechnology Co., Ltd for assisting in the RNA sequencing and data analysis.

## Author information

### Contributions

Z.G.P. and L.Z. conceived and designed the study and drafted the manuscript. Z.G.P. and Z.Z. contributed to the materials and samples.
Z.G.P. and L.Z. analyzed the data and contributed to the tables and figures. All authors revised the manuscript and approved the final manuscript.

### Corresponding authors

Correspondence to Zhong Li or Zhi Zhao.

## Ethics declarations

### Competing interests

The authors declare no competing interests.

### Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Rights and permissions

Zeng, G., Li, Z. & Zhao, Z. Analysis of weighted gene co-expression network of triterpenoid-related transcriptome characteristics from different strains of Wolfiporia cocos. Sci. Rep. 11, 18207 (2021). https://doi.org/10.1038/s41598-021-97616-6
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=18&t=64753
Sapling Homework $\lambda=\frac{h}{p}$

805377003 (Posts: 97, Joined: Wed Sep 30, 2020 10:10 pm)

Sapling Homework

Can someone explain how to do this problem?

As you may well know, placing metal objects inside a microwave oven can generate sparks. Two of your friends are arguing over the cause of the sparking, with one stating that the microwaves "herd" electrons into "pointy" areas of the metal object, from which the electrons jump from one part of the object to another. The other friend says that the sparks are caused by the photoelectric effect. Prove or disprove the latter idea using basic physics. Suppose the typical work function of the metal is roughly 4.570×10^−19 J. Calculate the maximum wavelength in angstroms of the radiation that will eject electrons from the metal.

Chem_Mod (Posts: 19540, Joined: Thu Aug 04, 2011 1:53 pm, Has upvoted: 882 times)

Re: Sapling Homework

To solve this problem, know that the work function is equal to the minimum energy needed to eject an electron. Therefore, use this value and the equations c = (wavelength)(frequency) and E = h(frequency) to solve for the wavelength, and then use dimensional analysis to find an answer in angstroms!

kateraelDis1L (Posts: 104, Joined: Wed Sep 30, 2020 9:54 pm)

Re: Sapling Homework

To add on^: an angstrom is a unit of length mostly used in measuring wavelengths of light, equal to 10^−10 meter or 0.1 nanometer.
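Putting the two replies together, a quick numeric check (h and c are the standard constants; the work function is the value given in the problem):

    h = 6.626e-34      # Planck's constant, J*s
    c = 2.998e8        # speed of light, m/s
    phi = 4.570e-19    # work function, J (given in the problem)

    # E = h*f and c = lambda*f, so the threshold wavelength is lambda = h*c/phi.
    lam_m = h * c / phi            # maximum wavelength in meters
    lam_angstrom = lam_m / 1e-10   # 1 angstrom = 1e-10 m
    print(f"{lam_angstrom:.0f} angstroms")   # ~4347 angstroms

    # Microwaves are ~12 cm (about 1.2e9 angstroms), far longer than the ~4347-angstrom
    # threshold, so microwave photons cannot eject electrons: the photoelectric
    # explanation for the sparks is disproved.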
http://robust.cs.unm.edu/doku.php?id=people:james_vickers
# Robust-first Computing Wiki

people:james_vickers

Bio

I'm a new graduate student in the CS department here at UNM. I did my undergrad in CS here (graduated May 2014) with a minor in Business (yeah, yeah). "Mad-man" science like artificial intelligence/life is part of what lured me to computing in the first place. I would say I'm much more interested in algorithms and paradigms, as opposed to some common interests like specific languages or games. I've been at Sandia as an intern for a year now; my group is mostly CS people that do various kinds of real-time data analysis software.

Contact: jvick3@unm.edu

Current title/abstract: Adaptive isolation for protection in the Movable Feast Machine

The Movable Feast Machine (MFM) is a computational platform which encourages robustness and scalability by limiting the read/write access of its Elements. Even with this spatial access restriction, writes performed by these Elements can have adverse effects on important computations. We present an Adaptive Isolator to robustly separate Elements, proving very useful for prolonging their presence and integrity. The Adaptive Isolator provides strong protection with minimal overhead to the overall system.

Journal

11/21/2014

As I was writing my paper a few days ago, I found myself trying to explain why my Isolators are programmed to only do their swapping routine (Bubble Repulsion) when they are at a particular distance from an Element (R-1). I could not for the life of me come up with why that was… I had originally done it because I thought I had seen it acting unstable without this check. It was kind of an artifact of previous failed implementations of Bubble Repulsion. I decided to remove this restriction and test it out. It appeared to work well, better even. I'm just about done re-running my experiment script with the slightly modified behavior; it looks like the protection performance of Isolator got better with this change. It basically allows more Isolators to engage in repulsion by allowing them to do so regardless of their distance from their protectee Atom.

Edit: Wow, allowing Isolators at any distance to repulse (rather than just R-1) improved the protection performance pretty significantly. You can see the difference in some of my graphs, but the half-lives really tell the tale. I define half-life as the time in AEPS it takes for half of the Atoms in an experiment to be wiped out (by Eraser, usually). A couple notable comparisons between limited and full repulsion:

Eraser Distance 4, Inner Radius 3:
Limited repulsion: Data Half-life = 2565, Eraser Half-life = 175
Full repulsion: Data Half-life = 16150 (+529.6%), Eraser Half-life = 180 (+2.9%)
Note: Eraser half-life doesn't change much because, with an Eraser Distance of 4, Isolator cannot surround it for long!

Eraser Distance 1, Inner Radius 2:
Limited repulsion: Data Half-life = 5001, Eraser Half-life = 18765
Full repulsion: Data Half-life = 17155 (+243.0%), Eraser Half-life = 39245 (+109.1%)

I'm very happy with this improvement. It makes the algorithm simpler, and yet it does its job a lot better. What's not to like?

11/05/2014

Just made a chunk of .mfs files (16) for use with an experiment script. All of the setups begin with some Elements spaced across the grid such that no two are within an Event Window of each other, except Isolators, which are directly next to their intended protectee. Each tile has the same pattern of these Elements on it, which is symmetrical itself.
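(An aside on the half-life metric defined in the 11/21 entry: a minimal sketch of how those figures can be read off a logged population-versus-AEPS series. The two-list input is an assumed layout, not the actual .dat format.)

    def half_life(aeps, counts):
        # First time (in AEPS) at which the population drops to half of
        # its initial value; None if it never does.
        target = counts[0] / 2.0
        for t, n in zip(aeps, counts):
            if n <= target:
                return t
        return None

    # e.g. half_life([0, 100, 200], [16, 9, 7]) -> 200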
I made 4 that have 8 Data and 8 Eraser per tile, varying the Eraser distance from 1-4 for each different .mfs file. I then also have 12 setups that have 8 Data, Eraser, and Isolator, with single Isolators placed directly next to the Data, and varying the Eraser distance and inner radius of the Isolators to cover all combinations. Example of one of these setups for a single tile: (Orange = Eraser, Purple = Isolator, Blue = Data)

I also made a key change to my adversary, Eraser, that I've been considering for a while. Until now, Eraser would only set non-Eraser sites to Empty; this means that without other things around that destroy stuff (like Dreg), the number of Erasers is constant. I had started to feel this was 'unfair' in some way, so now Erasers will also erase each other - now being a brute is a two-edged sword.

10/29/2014

Just made a video of an interesting little usage of Isolator. It starts off with a dense mass of Wall, with one Isolator on the outside. The Bubble Repulsion algorithm of Isolator breaks up that mass until they are all (almost) completely isolated from each other. Video here:

10/26/2014

Just finished writing a nice little script that takes those /tmp/dddd/tbd/data.dat files and plots the amount of specified elements versus time in AEPS. This is the first real Python program I've written, and I must say I'm impressed with the language. The script takes args like this:

./plot_element_counts.py <data file> <title of plots> <spaced list of Element names>

e.g. ./plot_element_counts.py /tmp/xxxx/tbd/data.dat "a nice title" "Data" "Creg" "Empty"

which will make plots of counts versus AEPS for Data, Creg, and Empty. I think the script is general-purpose enough that most everyone in the class will find it useful for their project, so I think I'll ask Ackley if it's OK to share it.

10/20/2014

Crit 2 went well, but the way I explained my algorithm made it seem impossible. It is not impossible, but works in kind of a funny way that I should probably tweak a little to make more sensible. An Isolator I, when it sees an Element e0 at R-1 and some other Element e1 that has a distance of at least R from e0, increases the X and Y offsets of e0 from I if they are non-zero. The only time an Isolator actually swaps the Element away is when it is aligned with it in either X or Y. Basically the code for computing the offset looks like this (kind of clunky):

    SPoint away_offset, away_site;
    s32 site_X = site.GetX(); // site is the Element at distance R-1 from this Isolator
    s32 site_Y = site.GetY();

    if (site_X < 0) // make offset in X be one unit away from this Isolator
      away_offset.SetX(-1);
    else if (site_X > 0)
      away_offset.SetX(1);
    else
      away_offset.SetX(0);

    // the key part: the only time the swap will actually work is when one offset is 0
    if (site_Y < 0) // make offset in Y be one unit away from this Isolator
      away_offset.SetY(-1);
    else if (site_Y > 0)
      away_offset.SetY(1);
    else
      away_offset.SetY(0);

    away_site = site + away_offset;
    if (window.IsLiveSite(away_site) && away_site.GetManhattanLength() <= R)
    {
      window.SwapAtoms(site, away_site);
    }

10/18/2014

Last night I tried a big change to Isolator to try and make "Isolator bubbles" repel from each other. An Isolator seeing some non-Isolator Element E at distance D will look at all sites of at least distance R-D+1, which should be outside of the Event Window of E. If an Isolator is seen at any of those locations, Isolator swaps E in the opposite direction of the located Isolator.
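(A rough sketch of that repulsion check in Python, just to pin down the geometry; the window helpers and tuple positions here are made-up stand-ins, not the real MFM API.)

    def manhattan(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    def repel_bubble(window, e_pos, R):
        # 10/18 idea: if another Isolator is seen at distance >= R-D+1,
        # swap E one unit step in the opposite direction.
        D = manhattan(window.center, e_pos)          # this Isolator's distance to E
        for site in window.sites():                  # every site in the event window
            if manhattan(window.center, site) >= R - D + 1 and window.is_isolator(site):
                dx, dy = e_pos[0] - site[0], e_pos[1] - site[1]
                step = (e_pos[0] + (dx > 0) - (dx < 0),   # unit step away from
                        e_pos[1] + (dy > 0) - (dy < 0))   # the located Isolator
                if window.is_live(step):
                    window.swap(e_pos, step)
                return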
This setup is extremely unstable though and doesn't really accomplish repulsion. The issue, I think, is that when E moves, Isolators are frequently left behind for a bit, as they have not gotten an Event yet. The remaining Isolators that do see E then also see these 'artifact' Isolators and swap E away from them. The result of that is that E is constantly being swapped around and its location is wildly unstable and tends towards edges or corners of the grid. Brief video of this here; it doesn't do justice to just how fast and violent it looks in real time though. Back to the drawing board…

10/14/2014

Made big changes to my abstract and title. Gutted a lot of the abstract and really tightened up the language. I think it's getting there. Also made the title very short - 9 words including "Movable Feast Machine", which I kind of feel is necessary to include.

10/8/2014

I have demonstrated that I can make Isolator successfully defend Elements from a simple brute Element such as my prototype Eraser. Sadly, I did it in kind of a 'cheap' way. Now when Isolator sees an Element of type Eraser, it removes it from the MFM. This means that any other Element surrounded by Isolators is very unlikely to ever be within an Event Window of an Eraser, and is therefore safe from them. This tactic is cheap in the sense that it relies on knowing the type of Eraser. I'm thinking the next step for this little guy is more complicated: start giving Isolators some knowledge about the Element they are protecting (probably regardless of type), such as which direction it is relative to them. That way, when Isolator sees an Element coming towards its nucleus and it doesn't know what to do with it, it could swap it in the opposite direction of its nucleus to keep it away. I could also see such a behavior being useful in reverse: if an outside Element is deemed 'good' for the Element the Isolator is surrounding, Isolator could swap that Element to be next to the nucleus. I've put Isolator up to some pretty extreme tasks of protecting, say, a single Data against an entire grid of Erasers. So far it seems to do that job quite well.

10/6/2014

Made a couple edits to my project abstract and title from the feedback in class today. I removed the overly-strong claims of perfect isolation in the MFM and of making a system "more predictable". Also cleaned up and removed some redundant or otherwise unexpressive language. Second version is here:

10/5/2014

Just implemented a simple "Eraser" Element to pit against my Isolator. Eraser has an Element Parameter "Erase Radius", and when called to act, erases (sets to Element Empty) any non-Eraser, non-Empty site within that distance of itself. My idea is to see how Isolator can protect its 'nucleus' from such a simple brute. So far, the news isn't great; Isolator is kind of "too nice" to be an effective protector. It won't write itself to occupied sites in order to protect its nucleus at present. The result of this is that a nucleus such as Data and an Eraser both get surrounded by Isolator, but not enough distance is provided to prevent Eraser from seeing the interior Data atom. I'm thinking maybe Isolator can be changed up so it sees Eraser coming from the outside and destroys it. Or perhaps Isolator can destroy any Element that gets too close to its "cell wall" that isn't the same type as its nucleus.

10/2/2014

Made a first draft of my project abstract.

9/27/2014

On Thursday, Trent helped me design and implement the next version of my "Isolator" Element.
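(A sketch of roughly what the plotting script from the 10/26 entry does, assuming data.dat is whitespace-separated with AEPS in the first column and one count column per requested Element; the real column layout isn't shown above, so treat the indexing as a guess.)

    #!/usr/bin/env python
    import sys
    import matplotlib.pyplot as plt

    # Usage: ./plot_element_counts.py <data file> <title of plots> <Element names...>
    path, title, *elements = sys.argv[1:]

    rows = [ln.split() for ln in open(path) if ln.strip() and not ln.startswith('#')]
    aeps = [float(r[0]) for r in rows]

    for i, name in enumerate(elements, start=1):   # assumed: counts in later columns
        plt.plot(aeps, [float(r[i]) for r in rows], label=name)

    plt.xlabel('AEPS'); plt.ylabel('Count'); plt.title(title); plt.legend()
    plt.show()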
There was a bug for a while that made the new Isolator do nothing (except delete itself immediately upon seeing another Element), but that got straightened out. Came out good! Basically acts like a cell or bubble that follows Elements around. Description of the behavior and a demo is up here:

Update: Fixed some implementation errors and made the Isolator only wander/die if it is not currently surrounding some Element (i.e. sees one in its Event Window). Longer video with interaction with various MFM Elements here (including a shark/fish egg drop style challenge):

9/24/2014

Just finished a first attempt at an "Isolator" Element that will surround other Elements with itself. The behavior of this first version is to look at every site within R-1 of its Event Window (all but the furthest) - if it sees anything that isn't Empty or itself, it copies itself to the furthest sites in its Event Window. This simple version does a good job of surrounding Elements (and re-surrounding them, even if Isolators are destroyed), but it makes a very tight region around the Element. Also, this version has no reproduction or similar population regulation functions. Brief video of its behavior is here, interacting with Wall:

9/23/2014

Talked with Ackley today about narrowing/improving my research idea. Now I'm thinking I want to focus more on the 'quarantining' aspect of it rather than the 'virus spread' aspect. Most of the stuff I was looking to implement with the virus - its lifespan, spread, interaction with the host - aren't things I was looking to study or thought could have a novel or useful computation associated with them. Quarantiner, however, is interesting. Now the idea is to make Quarantiner a kind of general-purpose isolation Element - something that encloses Elements (perhaps configurable which one, or it decides when it sees the first one, not sure yet) in a bubble. I think it will do this by surrounding itself around the Element(s), then adding layers around that initial bubble in order to make it large enough such that Elements within cannot see anything outside of the bubble. If the bubble is thick enough, these Elements cannot see outside from their Event Window. I'm still kind of thinking about how Quarantiners can coordinate to surround groups of Elements, and maybe if they could move around as a group after making a bubble, sort of like a cell or membrane. I'm also thinking about how a Quarantiner could act as a kind of valve or mediator between the things in its bubble and the outside world.

9/18/2014

Yesterday at the "MFM coder bootcamp", Trent helped me brainstorm on how to make my virus idea more of a strong alife project and also how parts of it might work with the MFM architecture. These are the Elements I envision at present:

Virus:
• Seeks to spread and infect Host Elements
• Has a "lifespan" Element Parameter that specifies how long it survives
• Behavior:
  1. Decrements its local "energy" Atomic Parameter by an Element Parameter "metabolic rate". If this value is now less than or equal to 0, it deletes itself from the MFM.
  2. If it sees a Host Element in its Event Window, it attempts to infect it. It can only infect a Host with both "infected" and "immune" bits set to 0. This infection is based on the "infection rate" Element Parameter of Virus (it may be 100% if Host is one cell away, 0% otherwise - as an example of a simple rule).
  3. When a Virus successfully infects a Host, it will:
     1. Set the "infected" Atomic Parameter of the Host to indicate the infection
     2. Delete itself from the MFM
  4. If the Virus cannot infect any Host in its Event Window, it moves towards the closest Host, or does a random movement (one cell) if none are visible

Host:
• Gets infected by instances of Virus
• Has a finite lifespan represented by an "Energy" Atomic Parameter, which starts at a "lifespan" Element Parameter
• Behavior:
  1. Decrement its energy value by an Element Parameter, "metabolic rate". If its energy value is less than or equal to 0, it will delete itself from the MFM
  2. If the "infection" bit of the Host instance is set to 1, the Host will:
     1. Decrease its energy value by an amount specified by an Element Parameter "infection drain" of Virus
     2. With probability defined by a "spread rate" Element Parameter of Virus, produce a new Virus instance in a random spot of distance 1 from itself
     3. Increment an Atomic Parameter, "infected timer"
     4. If the value of the "infected timer" is greater than an "infection time" Element Parameter of Virus, Host sets its "infection" bit to 0 and an "immune" bit to 1 (it can never get infected again and stops producing Virus)
  3. Create a new instance of Host (reproduce) with probability determined by Element Parameter "reproduction rate"
  4. Finally, Host will do a random walk

Quarantiner:
• Attempts to stop the spread of Virus by trapping infected instances of Host with bubbles of Isolator Elements that last for some period of time
• Behavior:
  1. Decrement an Atomic Parameter "energy" by an Element Parameter "metabolic rate"
  2. If "energy" is less than or equal to 0, delete itself from the MFM
  3. Look in all but the farthest cells of its Event Window for an instance of Host with an "infection" bit set to 1. If one is found in any of these cells:
     1. Quarantiner creates new instances of Isolator in all cells of the Event Window that are farthest from itself
     2. Deletes itself from the MFM
  4. Create a new instance of Quarantiner (reproduce) with probability determined by Element Parameter "reproduction rate"
  5. If no infected Host is found, it will do a random walk

Isolator:
• Used to form isolation chambers to trap Host instances infected with Virus in one area for a configurable number of events
• Has an Element Parameter that specifies a "lifespan" (does not reproduce)
• Behavior:
  1. If its "lifespan" parameter is less than or equal to 0, it deletes itself from the MFM
  2. Otherwise, it decrements its lifespan parameter by 1

9/09/2014

Tried a couple of ideas out for the "Egg drop challenge". First, I tried kind of a cheating method - enclosing the single tile with a wall. Within, I made a miniature version of the fishsticks setup that Professor Ackley successfully used. This system does not seem to work, probably because it is very sensitive to fluctuations in population due to the small enclosed space. Best runs here were less than 5 kAEPS, just miserable. I also tried using Anti-Forkbomb (Af), which spreads out across the grid. The idea there was to make the "mud" that Ezra had used before with Data, but I can't really use Data for this because it does not reproduce; Anti-Forkbomb does. It does indeed spread out and make some obstacles across the universe, but not apparently in sufficient amounts to cause any real difference. The best runs I could get with that setup were around 25 kAEPS.

9/07/2014

Trying out another scheme for the "shark week challenge" that I thought up and I think has been discussed by others. I'm calling it "the hatcheries".
The idea is to populate the world with some (appropriate number of) little areas enclosed by a Wall, such that there is a very small opening. Then, I will put a Fish population in each. Sharks will start outside of the hatcheries. The thinking here is that fish will escape the hatcheries at a hopefully steady rate determined by the size of the opening on each. This will hopefully regulate the supply of Fish being provided to the Sharks and therefore (again, HOPEFULLY) their population size. Setup looks something like this:

Each hatchery is seeded with a small amount of Fish to start. There are Sharks spread out around the environment, with some Fish among them outside of the hatcheries to sustain them until the hatcheries start producing. I tried varying the hatchery volume and opening size to try and see how they could hold a Fish population relative to each other. This is a common scene of this hatchery ecosystem in action:

The hatcheries do seem to sort of work, in that they fill up and then gradually let a steady amount of Fish out of them. The issue, though, is that Sharks inevitably enter the hatcheries and decimate them. I guess this goes back to that discussion of a "valve" the class was having - a way to only allow one type of Element into an area. An upside to hatcheries though (despite their failure at being a safe and steady food provider) is that they seem to allow nice pockets of Fish to develop. When Sharks decimate a hatchery, they usually die within: cold, lonely, and wondering where all the Fish went. This leaves an empty hatchery, which usually gets repopulated by a lucky Fish that wanders into it without a Shark. If you watch the simulation for a while, you'll see hatcheries come and go - but there are usually at least a couple that are fully stocked. Thus, it appears that the Fish population is a little safer. Also, the sharks dying after gorging themselves on a hatchery controls a potential population boom therein - the hatcheries are acting as a kind of honeypot.

As a control experiment, I also ran a simulation using the hatchery setup and initial populations as above, but then closed the entrances of all hatcheries with Walls. The purpose of this was to see if it was simply the segmentation of the world by the hatcheries, rather than the dynamic and function of the hatcheries, that seemed to make a difference. The control experiment sustained Shark populations for the following times in kAEPS: {1.739, 1.768}. In contrast, the actual hatchery version ran for around 15 kAEPS - still not great though. I do submit that a possible reason for the difference between the control and hatchery versions of this experiment is that there are more fish available to the Sharks in the hatchery version.

9/06/2014

Just had a kind of morbid idea for a research topic while I was grocery shopping today (yes, Wal-mart). I was looking around and thinking about how it seemed like the "dumbest looking" people (yup, I was being a judgmental bastard) had the most kids. That got me thinking. Is it possible that providing an ecosystem with ample resources, as our current civilization has done, actually leads to a weaker gene pool? That is, by making it easier to survive, does the population actually become less survivable and less robust? That's kind of what I saw at Wal-mart today. I was wondering if the number of smart people on Earth (I mean 'intelligent', not 'educated') was proportionally higher in times of greater struggle to survive.
It would appear that in the modern world, the less-intelligent are perhaps more likely to have more offspring. I think such a topic is outside of the reach of a 12-week project, and probably a little out of scope for the class - it's more a question about real biological systems rather than artificial ones.

9/01/14 (01-Sep-2014 01:43:44PM -0600)

Further exploring the shark week double challenge. I'm trying a recursive version of the segmenting (dotted cross) from yesterday, now with a grid of 16 cells instead of 4. Looks something like this:

This setup has too many Sharks and WAY too many Fish. Here's a better one:

I was starting to wonder if the grid segmentation was only helping by enforcing a segmentation of the Shark and Fish initial populations as shown above. So I tried this setup as a control experiment:

This experiment ran for a while, but eventually all the Sharks perished at around 12 kAEPS. Sadly, the 16-cell grid doesn't seem to help - I could only get it to run for {5, 2} kAEPS compared to 12 kAEPS without the Walls present.

8/31/14 (31-Aug-2014 10:38:23PM -0600)

Messing with some ideas for the "shark week double challenge". As far as I can tell, the only element that will be useful is Wall. None of the other elements really seem to interact with the Shark or Fish at all, except to destroy them (as in the case of the several varieties of Bomb). So, I've been cooking up some possible Wall designs to prevent the simplistic endgame that we discussed in class: a large wave of Sharks meeting a large wave of Fish, allowing the Sharks to kill all Fish and finish the ecosystem. So, my plan here is to try to use Wall to turn the grid into a place where this is unlikely. Think coral reef.

I came up with this idea first: my idea here was to try and prevent sharks and fish from forming large groups by turning the open grid world into a kind of maze. This strategy does NOT work. It actually makes Shark waves more effective with fewer numbers of them. Sharks can form a wave in the narrow passages of the maze and wind their way outward or inward, decimating the fish in their path. It was an idea, anyway.

The next thing I tried was also a maze, but this time a circular one. I didn't really expect it to behave differently, but figured "eh, what the hell". It suffers from the same issue as the square spiral.

Considering what went wrong with these two implementations compared to my original intent, I had the thought of trying to segment the grid into smaller areas. This was partly based on what we said in class about the original Wa-tor and how small, localized, chaotic interactions were better for the ecosystem as a whole. I think this kind of implementation has potential. It reduces the movement of Sharks and Fish along the boundaries, but does not completely divide the world (i.e. there are spaces in the Walls). The effect of this seems to be to prevent waves of Sharks from forming. It also seems to help small amounts of Fish escape Shark herds (by wandering around the other side of a Wall, for instance) to breed and re-populate.

I intend to try some other designs as well using Walls. I had this idea in class (related to coral reefs) about making little circular enclosed areas, with one small exit. Then Sharks and some Fish would be placed outside these 'bubbles', and fish placed into the bubbles.
I don't know if this will work or not, but the thinking was that the fish would multiply within the bubble and then slowly trickle out of it, controlling both the Shark and Fish population through the trickle rate (which is in turn controlled by the size of the bubble openings).

8/28/14

Posting some of my research ideas for this class.

1.) Disease propagation and effectiveness:
  1. Would probably involve adding a 'virus' object/lifeform to the MFM
  2. This virus would infect a host, and spread to other hosts via host-to-host contact
  3. These interactions could be controlled by parameters like probability of infection from contact, incubation period, and the time to kill the host (or other adverse effects on the host, like energy levels)
  4. Effects studied could include disease spread and effectiveness, and methods of combating the virus

2.) Can betrayal within a species/group that's starving benefit the group as a whole?
  1. Born out of an observation that the sharks in wa-tor often exhaust their food supply when their numbers overwhelm the fish, and the need for a "thinning of the herd" when this happens
  2. The species being tested on could have a probability of attacking their own that is related to their hunger/energy level; the hungrier they are, the more likely they are to attack their own kind when encountered
  3. Effects studied could include what benefit (if any) such behavior can have for the group and/or individual organisms, and the 'right' relationship between aggressiveness and resource scarcity/need

These are the main two I'm considering. I've had a few other ones, feel free to steal these:

3.) Some kind of study into the effect an aging population has on a society (think "baby-boomers"). The implementation could involve something where old members of a species are weaker and require care from younger members to survive. Studied effects could include the aging rate vs the amount of care required overall, and the benefit/cost relationship of caring for the elderly versus abandoning them (tisk, tisk). You could also flip this around and make it so the young members of a species require care/attention, like children, until they reach adulthood.

4.) This one is "out there" and probably way outside the scope of the course. I had this idea about seeing if you could get the organisms in an alife simulation to communicate in some way, by synthesizing some kind of language. I guess it would involve the organisms trying to communicate with rudimentary symbols, and maybe try and see if they develop some kind of common language (no matter how simple) if there are benefits to be had and they are accordingly motivated to do so. This one sounds pretty interesting, but REALLY hard to implement and get any results out of.

8/24/14:

Finally got a good setup for the 'Shark Week' challenge. I actually got my simulation to live up to 90 kAEPS, with no sign of stopping. My parameters were:

Birth Age (Fi): 13
Birth Age (Sh): 73
Energy_per_fish (Sh): 3

I also found the starting geographical distribution and amount of fishes and sharks to be important. I would fill the grid completely with fish, then use the paintbrush to draw a wavy line of sharks around the whole grid. I usually had the starting ratio of fishes to sharks be around 2:1 or 3:1. My saved MFM simulation is on my CS webpage, cs.unm.edu/~jvick3.
If you load it, remember to change the relevant parameters (birth ages and energy per fish) or the population will die rather quickly :)

8/21/14:

Looked around on the 'interwebs for some ideas about alife topics. Browsed some topics from the ISAL (International Society for Artificial Life). Currently I'm leaning towards something related to viruses and disease; perhaps digging down into what kind of properties make an effective vs ineffective pathogen in terms of survivability and proliferation. I've also had a few other ideas, some more odd than others. I think a study into populations that have large segments of 'elderly' organisms that require care and help from younger ones could be interesting. I also think something relating to language/communication synthesis in a society of artificial beings could be cool.
https://www.physicsforums.com/threads/theorems-prove.275871/
# Theorems prove

1. Nov 29, 2008

### mbcsantin

1. The problem statement, all variables and given/known data

Prove the following: For any sets A, B, C in a universe U: A n B = Universe iff A = Universe and B = Universe

2. Relevant equations

none.

3. The attempt at a solution

I tried to do the question but I'm just not sure if I did it right. I'd appreciate it if you can check my work and let me know what changes I have to make. Thanks. The symbol "n" means "intersect"; U denotes the universe.

Suppose A n B = U, and suppose that A is a proper subset of U. Then x is an element of B but x is not an element of A n B, since x is not an element of A.

2. Nov 29, 2008

### kidmode01

When proving these sorts of problems it is important to know what you need to show. What does it mean for two sets to be equal? It means that each set is a subset of the other. I.e.: Suppose A and B are two sets. If A = B then $$A \subseteq B$$ and $$B \subseteq A$$.

It is also important to know what kind of proof we are dealing with. In this case it is an if and only if. So that means we have to prove both ways. First we prove: if A n B = U then A = U and B = U. Second we prove the other way: if A = U and B = U then A n B = U.

So you started off proving the one way. Suppose A n B = U. You do not need to suppose A is a proper subset of U because you are given that by definition in your problem. So you said x is an element of B, and then x is not an element of A n B because x is not an element of A. It very well may be in A!! We do not have enough information to conclude that if we pick an element in B, it can't be in A. Instead we should pick an element x inside A n B. Then x is an element of A and x is an element of B. Since A and B are subsets of U, x is an element of U... see where I'm going?

We need to show A = U and B = U. That means we need to show A is a subset of U and U is a subset of A. Similarly for B. Well, we already know that A is a subset of U by definition. But is U a subset of A? What information do we have? Start by picking an element out of U and showing that it is inside A using our assumptions. Then U will be a subset of A and thus A = U. A similar argument will be made for B.

Then we have to prove the other way: if A = U and B = U then A n B = U. I hope this helps.

3. Dec 1, 2008

### mbcsantin

thanks! it really helps!
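For completeness, the two directions outlined above can be written up compactly (in LaTeX):

    ($\Rightarrow$) Suppose $A \cap B = U$. Since $A \subseteq U$ is given, it
    suffices to show $U \subseteq A$. Let $x \in U = A \cap B$; then $x \in A$
    and $x \in B$. Hence $U \subseteq A$ and $U \subseteq B$, so $A = U$ and
    $B = U$.

    ($\Leftarrow$) Suppose $A = U$ and $B = U$. Then $A \cap B = U \cap U = U$.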
http://mymathforum.com/calculus/339154-please-teach-me-how-do.html
My Math Forum - Calculus

Please teach me how to do this

February 21st, 2017, 12:32 AM #1
Member (Joined: Feb 2017, From: East U.S., Posts: 33, Thanks: 0)

Please teach me how to do this

The limit represents f'(c) for a function f(x) and a number c. Find f(x) and c.

lim (Δx→0) ([7-8(1+Δx)] - (-1))/Δx

So the story is... my teacher assigns this as a homework problem that's gonna turn up on our exam tomorrow, but without going over a single problem with a triangle thing in it. I looked it up and apparently it's a "change" symbol? Please, I really just need the steps written down on how to solve this so I can at least try to learn a little before my next exam in about 11 hours. I'm willing to try to learn, and not just looking for the answer, but I have no time... ANY advice would be great, thanks!

February 21st, 2017, 01:49 AM #2
Senior Member (Joined: Feb 2016, From: Australia, Posts: 1,285, Thanks: 439, Math Focus: Yet to find out.)

The 'little triangle' is an uppercase delta in Greek. And yes, it usually means 'change in'. When you see $\Delta x$, you can say 'change in x'. Although this by itself doesn't mean much. I can't really make out what your function is supposed to be due to some peculiar bracketing.

$\displaystyle \lim\limits_{\Delta x \rightarrow 0} \dfrac{7 - 8(1 + \Delta x)}{\Delta x}$ or, $\displaystyle \lim\limits_{\Delta x \rightarrow 0} 7 - 8(1 + \Delta x) +\dfrac{1}{\Delta x}$??? Or something else...

February 21st, 2017, 02:46 AM #3
Member (Joined: Feb 2017, From: East U.S., Posts: 33, Thanks: 0)

Quote: Originally Posted by Joppy [...]

Sorry, I don't know how to use the website... But lim Δx→0. In the numerator of the function, I have "[7-8(1+Δx)]-(-1)" (there are brackets for some reason). And in the denominator, I just have a "Δx". Hopefully this clarified everything.

Last edited by nbg273; February 21st, 2017 at 03:35 AM.

February 21st, 2017, 03:48 AM #4
Math Team (Joined: Dec 2013, From: Colombia, Posts: 6,876, Thanks: 2240, Math Focus: Mainly analysis and algebra)

$$f'(c)= \lim_{\Delta x \to 0} \frac{f(c+\Delta x) - f(c)}{\Delta x}$$

$f(c+\Delta x) = 7-8(1+\Delta x)$ and $f(c) = -1$. By inspection, you can then suggest a value for $c$ and an expression for $f(x)$.

February 21st, 2017, 04:03 AM #5
Member (Joined: Feb 2017, From: East U.S., Posts: 33, Thanks: 0)

Quote: Originally Posted by v8archie [...]

Sorry, I'm still confused on what f(x) is... I'm bad at math...

February 21st, 2017, 04:29 AM #6
Math Team (Joined: Jan 2015, From: Alabama, Posts: 2,574, Thanks: 667)

It looks like f(x) is intended to be f(x) = 7 - 8x. Then $f(1)= 7- 8(1)= -1$ and $f(1+ \Delta x)= 7- 8(1+ \Delta x)= -1- 8\Delta x$. So $f(1+\Delta x)- f(1)= (-1- 8\Delta x)-(-1)= -8\Delta x$. That is, for all non-zero $\Delta x$, $\frac{f(1+\Delta x)- f(1)}{\Delta x}= \frac{-8\Delta x}{\Delta x}= -8$.
So what is the limit as $\Delta x$ goes to 0?

February 21st, 2017, 05:17 AM #7
Member (Joined: Feb 2017, From: East U.S., Posts: 33, Thanks: 0)

Quote: Originally Posted by Country Boy
It looks like f(x) is intended to be f(x) = 7 - 8x. Then $f(1)= -1$ and $\frac{f(1+\Delta x)- f(1)}{\Delta x}= -8$ for all non-zero $\Delta x$. So what is the limit as $\Delta x$ goes to 0?

Ohhh, I see now. Thanks for your help!
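(A quick symbolic check of that value; sympy is assumed to be available, with dx standing in for Δx.)

    import sympy as sp

    dx = sp.symbols('dx')
    expr = ((7 - 8*(1 + dx)) - (-1)) / dx   # the difference quotient from the problem
    print(sp.limit(expr, dx, 0))            # -> -8, i.e. f(x) = 7 - 8x, c = 1, f'(c) = -8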
https://forum.azimuthproject.org/plugin/viewcomment/19799
Keith wrote:

> $$\mathrm{hom}(f,g)(h)=\begin{cases} u := g\circ h \circ f & \text{if } \mathrm{target}(f)=\mathrm{source}(h) \text{ and } \mathrm{target}(h)=\mathrm{source}(g), \\ \varnothing & \text{otherwise.} \end{cases}$$

Thanks for this. Gave me a better perspective on how the hom functor works. Below is a diagram showing preservation of composition highlighting your hom gadget.

![homfunctor preservation of composition](http://aether.co.kr/images/homfunctor_composition.svg)
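As an aside, the case analysis above maps almost directly onto code. The following Python sketch is my own illustration (the Morphism class and the object labels "A" through "D" are invented for the example, not anything from the forum thread); it applies hom(f, g) to h by forming g ∘ h ∘ f only when the composability conditions hold:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Morphism:
    source: str
    target: str
    fn: Callable

def compose(g, h):
    """g after h, defined only when target(h) == source(g)."""
    assert h.target == g.source
    return Morphism(h.source, g.target, lambda x: g.fn(h.fn(x)))

def hom(f: Morphism, g: Morphism):
    """hom(f, g) sends h: target(f) -> source(g) to g . h . f."""
    def on(h: Morphism) -> Optional[Morphism]:
        if f.target == h.source and h.target == g.source:
            return compose(g, compose(h, f))
        return None  # plays the role of the 'otherwise' branch above
    return on

# Example: f: A -> B, h: B -> C, g: C -> D gives hom(f, g)(h): A -> D.
f = Morphism("A", "B", lambda x: x + 1)
h = Morphism("B", "C", lambda x: 2 * x)
g = Morphism("C", "D", lambda x: x - 3)
print(hom(f, g)(h).fn(10))  # (2 * (10 + 1)) - 3 = 19
```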
2021-09-18 08:00:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9973475337028503, "perplexity": 6850.766032554599}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056348.59/warc/CC-MAIN-20210918062845-20210918092845-00669.warc.gz"}
https://www.physicsforums.com/threads/stress-for-v-band-clamp-under-loading.892824/
1. Nov 9, 2016, formula428: I have a V-band clamp and I want to know the stresses for such a section. I'm interested in the forces and resultant stresses on the clamp from the flange and plug (inside the V-band) opposing each other as pressure is increased. I thought of treating the section as an effective channel member in flexure, so that the section is bending about an asymmetrical axis. Think of it as taking a U, turning it upside down, and then applying a force/moment pushing down on the web. However, the stresses I'm getting are really high. Maybe I'm calculating it wrong? Right now I'm using σ = (6 P L) / (b h^2), the bending stress at the outer fiber of a rectangular section of width b and depth h under a moment M = PL, and treating the center "web" as simply a rectangle.
2. Nov 11, 2016, Nidum: They are tricky things to analyse properly. Let us see a picture or a drawing of the particular clamp that you are interested in. What forces do you think are acting in the clamp?
3. Nov 11, 2016, formula428: Here's a rough FBD and my thoughts. [attached image]
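For reference, here is a small Python sketch of the bending-stress formula the original poster mentions (σ = 6PL/(b h^2) for a rectangular section). The load, lever arm, and section dimensions below are made-up placeholders purely for illustration; they are not values from the thread:

```python
# Bending stress at the outer fiber of a rectangular section:
# sigma = M*c / I, with I = b*h**3 / 12 and c = h/2, which reduces to 6*M / (b*h**2).
def bending_stress(P_newton, L_m, b_m, h_m):
    M = P_newton * L_m                 # bending moment from force P at lever arm L
    return 6 * M / (b_m * h_m ** 2)    # stress in Pa

# Hypothetical numbers purely for illustration:
print(bending_stress(P_newton=500.0, L_m=0.01, b_m=0.02, h_m=0.003))
# The h**2 in the denominator is why a thin web produces very high stresses.
```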
2017-10-17 00:56:28
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8064987659454346, "perplexity": 2092.5740728281094}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187820487.5/warc/CC-MAIN-20171016233304-20171017013304-00009.warc.gz"}
https://eprints.soton.ac.uk/412380/
University of Southampton Institutional Repository

# Discrete multi-tone digital subscriber loop performance in the face of impulsive noise

Bai, Tong, Zhang, Hongming, Zhang, Rong, Yang, Lieliang, Al Rawi, Anas F., Zhang, Jiankang and Hanzo, Lajos (2017) Discrete multi-tone digital subscriber loop performance in the face of impulsive noise. IEEE Access. Record type: Article

## Abstract

As an important solution to "the last mile" access, Digital Subscriber Loops (DSL) are still maintained in a huge plant to support low-cost but high-quality broadband network access through telephone lines. The Discrete multi-tone (DMT) transmissions constitute a baseband version of the ubiquitous orthogonal frequency division multiplexing (OFDM). While the DMT is ideally suited to deal with the frequency selective channel in DSL, the presence of bursty impulsive noise tends to severely degrade the transmission performance. In this paper, we analyse the statistics of impulsive noise and its effects on the received signals, with the aid of a hidden semi-Markov process. The closed-form BER expression is derived for the DMT system for $Q$-ary quadrature amplitude modulation (QAM) under practical noise conditions and for measured dispersive DSL channels. Instead of relying on the simplified stationary and impulsive noise process, our noise model considers both the temporal and spectral characteristics, based on the measurement results. The simulation results confirm the accuracy of the formulas derived and quantify the impact both of the impulsive noise and of the dispersive channel in DSL.

Accepted/In Press date: 26 May 2017. e-pub ahead of print date: 6 June 2017.

## Identifiers

Local EPrints ID: 412380. URI: http://eprints.soton.ac.uk/id/eprint/412380. PURE UUID: 549ab6ea-b5c2-4847-b08c-517985f886a6. ORCID for Lieliang Yang: orcid.org/0000-0002-2032-9327. ORCID for Jiankang Zhang: orcid.org/0000-0001-5316-1711. ORCID for Lajos Hanzo: orcid.org/0000-0002-2636-5214. Date deposited: 17 Jul 2017.
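The record does not reproduce the paper's closed-form BER expression for impulsive noise. As a rough point of reference only, here is a Python sketch of the standard textbook BER approximation for Gray-coded square Q-ary QAM over a plain AWGN channel, the usual baseline such impulsive-noise analyses are compared against. This is general knowledge, not the formula derived in the paper:

```python
import math

def qfunc(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def qam_ber_awgn(Q, ebn0_db):
    """Approximate BER of Gray-coded square Q-ary QAM in AWGN.

    Standard nearest-neighbour approximation; it does NOT model
    the bursty impulsive noise treated in the paper."""
    k = math.log2(Q)                 # bits per symbol
    es_n0 = k * 10 ** (ebn0_db / 10) # symbol SNR from Eb/N0
    ps = 4 * (1 - 1 / math.sqrt(Q)) * qfunc(math.sqrt(3 * es_n0 / (Q - 1)))
    return ps / k                    # Gray coding: ~1 bit error per symbol error

for q in (4, 16, 64):
    print(q, qam_ber_awgn(q, ebn0_db=10.0))
```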
2020-09-25 10:29:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21300403773784637, "perplexity": 7263.931731995107}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400223922.43/warc/CC-MAIN-20200925084428-20200925114428-00770.warc.gz"}
https://www.jonathangilligan.org/publications/
## Published

### 2018
• “Topic modeling the president: Conventional and computational methods,” , , & , George Washington Law Review 86, 1243–1315.
• “Employee energy benefits: What are they and what effects do they have on employees?,” et al., Energy Efficiency.
• “Urban water conservation policies in the United States,” et al., Earth’s Future 6, 955–967.
• “Climate modeling: Accounting for the human factor,” , Nature Climate Change 8, 14–15.

### 2017
• “Widespread infilling of tidal channels and navigable waterways in human-modified tidal deltaplain of southwest Bangladesh,” et al., Elementa 5, 78.
• “A machine-learning approach to forecasting remotely sensed vegetation health,” , , & , International Journal of Remote Sensing 39, 1800–1816.
• Beyond politics: The private governance response to climate change, & (Cambridge University Press).
• “Are cops on the science beat?,” , Issues in Science and Technology 34, 6–8.
• “Climate and community: The human rights, livelihood, and migration impacts of climate change,” et al., in D. Manou et al. (eds.), Climate change, migration, and human rights, 189–202 (Routledge).

### 2016
• “Agricultural adaptation to drought in the Sri Lankan dry zone,” & , Applied Geography 77, 92–100.
• “Betting and belief: Prediction markets and attribution of climate change,” , , & , in T.M.K. Roeder et al. (eds.), Proceedings of the 2016 Winter Simulation Conference, 1666-1677 (IEEE Press).
• “Dynamics of individual and collective agricultural adaptation to water scarcity,” & , in T.M.K. Roeder et al. (eds.), Proceedings of the 2016 Winter Simulation Conference, 1678-1689 (IEEE Press).
• “Application of machine learning to the prediction of vegetation health,” , , & , ISPRS—International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B2, 465–469.
• “Drought, risk, and institutional politics in the American Southwest,” et al., Sociological Forum 31, 807–827.
• “Drinking water insecurity: Water quality and access in coastal south-western Bangladesh,” et al., International Journal of Environmental Health Research 26, 508–524.
• “Spatiotemporal patterns of agricultural drought in Sri Lanka: 1881–2010,” , , & , International Journal of Climatology 36, 563–575.

### 2015
• “Beyond gridlock,” & , Columbia Journal of Environmental Law 40, 217–303.
• “Data-driven dynamic decision models,” & , in L. Yilmaz et al. (eds.), Proceedings of the 2015 Winter Simulation Conference, 2752-2763 (IEEE Press).
• “Environment, political economies, and livelihood change,” , , & , in B. Mallick & B. Etzold (eds.), Environment, migration and adaptation: Evidence and politics of climate change in Bangladesh, 27–39 (AHDPH).
• “Flood risk of natural and embanked landscapes on the Ganges-Brahmaputra tidal delta plain,” et al., Nature Climate Change 5, 152–157.
• “Participatory simulations of urban flooding for learning and decision support,” et al., in L. Yilmaz et al. (eds.), Proceedings of the 2015 Winter Simulation Conference, 3174-3175 (IEEE Press).
• “Reply to ‘Tidal river management in Bangladesh’,” et al., Nature Climate Change 5, 492–493.
• “Water conservation and hydrological transitions in cities in the United States,” , , & , Water Resources Research 51, 4635–4649.

### 2014
• “Accounting for political feasibility in climate instrument choice,” & , Virginia Environmental Law Journal 32, 1–26.
• “Energy and climate change: A climate prediction market,” , , & , UCLA Law Review 61, 1962–2017.

### 2013
• “Building resilience to environmental stress in coastal Bangladesh: An integrated social, environmental, and engineering perspective,” , , & , in Bridging the policy-action divide: Challenges and prospects for Bangladesh (Bangladesh Development Initiative).
• “Farming practices and anthropogenic delta dynamics,” et al., in Deltas: Landforms, ecosystems and human activities, 133-142 (Int’l. Assoc. Hydrolog. Sci.).

### 2011
• “Energy and climate change: Key lessons for implementing the behavioral wedge,” et al., Journal of Energy & Environmental Law 2, 61–67.
• “Macro-risks: The challenge for rational risk regulation,” & , Duke Environmental Law and Policy Forum 21, 401–431.

### 2010
• Device and methods for detecting the response of a plurality of cells to at least one analyte of interest, et al., Patent #7,713,733 B2, issued 11 May 2010.
• Apparatus and methods for monitoring the status of a metabolically active cell, et al., Patent #7,704,745 B2, issued 27 Apr. 2010.
• “Design principles for carbon emissions reduction programs,” et al., Environmental Science & Technology 44, 4847–4848.
• “Implementing the behavioral wedge: Designing and adopting effective carbon emissions reduction programs,” et al., Environmental Law Reporter 40, 547–554.
• “People should behave ethically for the sake of future generations,” , in R. Espejo (ed.), Opposing viewpoints: Ethics, 20–32 (Gale).
• “The behavioral wedge,” et al., Significance 7, 17–20.

### 2009
• “Costly myths: An analysis of idling beliefs and behavior in personal motor vehicles,” et al., Energy Policy 37, 2881–2888.
• “Household actions can provide a behavioral wedge to rapidly reduce U.S. carbon emissions,” et al., PNAS 106, 18452–18456.
• “The potential of dual camera systems for multimodal imaging of cardiac electrophysiology and metabolism,” et al., Experimental Biology and Medicine 234, 1355–1372.

### 2008
• “Individual carbon emissions: The low-hanging fruit,” , , & , UCLA Law Review 55, 1701–1758.

### 2007
• “A high-voltage cardiac stimulator for field shocks of a whole heart in a bath,” et al., Review of Scientific Instruments 78, 104302–104309.

### 2006
• “Flexibility, clarity, and legitimacy: Considerations for managing nanotechnology risks,” , Environmental Law Reporter 36, 10924–10930.

### 2003
• “Time-resolved light scattering measurements of cartilage and cornea denaturation due to free-electron laser radiation,” et al., Journal of Biomedical Optics 8, 216–222.

### 2002
• “Defect transition energies and the density of electronic states in hydrogenated amorphous silicon,” et al., Journal of Non-Crystalline Solids 299, 621–625.
• “Surface characterisation by near-field microscopy and atomic force microscopy,” et al., Advances in Science and Technology 32, 183–192.

### 2001
• “Infrared free-electron laser photoablation of diamond films,” et al., in Nonresonant laser-matter interaction (nlmi-10), 206-211 (International Society for Optics; Photonics).
• “Spectroscopic scanning near-field optical microscopy with a free electron laser: $\ce{CH2}$ bond imaging in diamond films,” et al., Journal of Microscopy 202, 446–450.

### 2000
• “Alteration of absorption coefficients of tissue water as a result of heating under IR FEL radiation with different wavelengths,” et al., in International biomedical optics symposium, 78 (SPIE).
• “Materials science at the WM Keck free electron laser: Infrared wavelength selective materials modification,” et al., Condensed Matter Theories 14, 349–364.
• “Scanning near field infrared microscopy using chalcogenide fiber tips,” et al., Materials Letters 42, 339–344.

### 1999
• “Chemical contrast observed at a III-V heterostructure by scanning near-field optical microscopy,” et al., Physica Status Solidi A: Applied Research 175, 345–349.
• “Effect of wavelength on threshold and kinetics of tissue denaturation under laser radiation,” et al., in International biomedical optics symposium, 122-129 (SPIE).
• “Fabrication of single-mode chalcogenide fiber probes for scanning near-field infrared optical microscopy,” et al., Optical Engineering 38, 1381–1385.
• “Interface applications of scanning near-field optical microscopy with a free electron laser,” et al., Physica Status Solidi A: Applied Research 175, 317–329.
• “Nonlinear energy-selective nanoscale modifications of materials and dynamics in metals and semiconductors,” et al., Soviet Physics: Technical Physics 44, 1069–1072.
• “Singlemode chalcogenide fiber infrared SNOM probes,” et al., Ultramicroscopy 77, 77–81.

### 1998
• “Coupled electron-hole dynamics at the $\ce{Si/SiO2}$ interface,” et al., Physical Review Letters 81, 4224–4227.
• “First experimental results with the free electron laser coupled to a scanning near-field optical microscope,” et al., Physica Status Solidi A: Applied Research 170, 241–247.
• “Free-electron-laser near-field nanospectroscopy,” et al., Applied Physics Letters 73, 151–153.
• “Infrared wavelength-selective photodesorption on diamond surfaces,” et al., Applied Surface Science 129, 59–63.
• “Molecular effects in measured sputtering yields on gold at near threshold energies,” et al., Izvestiya Akademii Nauk: Seriya Fizicheskaya 62, 676–679.
• “New molecular collisional interaction effect in low-energy sputtering,” et al., Physical Review Letters 81, 550–553.

### 1997
• “Evaluation of source gas lifetimes from stratospheric observations,” et al., Journal of Geophysical Research: Atmospheres 102, 25543–25564.
• “Photoexcitation spectroscopy and material alteration with free-electron laser,” et al., Acta Physica Polonica A 91, 689–696.

### 1996
• “Airborne gas chromatograph for in situ measurements of long-lived species in the upper troposphere and lower stratosphere,” et al., Geophysical Research Letters 23, 347–350.
• “Quantifying transport between the tropical and mid-latitude lower stratosphere,” et al., Science 272, 1763–1768.

### 1995
• “Estimates of total organic and inorganic chlorine in the lower stratosphere from in situ measurements during aase ii,” et al., Journal of Geophysical Research 100, 3057–3064.

### 1994
• “Refinement of the total organic and inorganic chlorine budgets in the atmosphere with a new in situ instrument, airborne chromatograph for atmospheric trace species (ACATS-IV),” et al., in Atmospheric effects of aviation project workshop.

### 1993
• “$\ce{H2}$, $\ce{D2}$, and $\ce{HD}$ ionization potentials by accurate calibration of several iodine lines,” et al., Physical Review A 47, 4042–4045.
• “Interference in the resonance fluorescence of two trapped atoms,” et al., in Proceedings of the 11th International Conference on Laser Science, 43-48.
• “Light scattered from two atoms,” et al., in Proceedings of the 11th International Conference on Laser Science, 410-419.
• “Quantum measurements of trapped ions,” et al., Vistas in Astronomy, 169–183.
• “Quantum projection noise: Population fluctuations in two-level systems,” et al., Physical Review A 47, 3554–3570.
• “Ultra-high precision spectroscopy for fundamental physics,” et al., Hyperfine Interactions 78, 211–220.
• “Young’s interference experiment with light scattered from two atoms,” et al., Physical Review Letters 70, 2359–2362.

### 1992
• “Ionic crystals in a linear Paul trap,” et al., Physical Review A 45, 6493–6501.
• “Linear trap for high-accuracy spectroscopy of stored ions,” et al., Journal of Modern Optics 39, 233–242.
• “Precise determinations of ionization potentials and $EF$ state energy levels of $\ce{H2}$, $\ce{HD}$, and $\ce{D2}$,” & , Physical Review A 46, 3676–3690.

### 1991
• “High-resolution spectroscopy of laser-cooled ions,” et al., in Proceedings of the Enrico Fermi summer school on laser manipulation of atoms and ions, July 1991, Varenna, Italy, 539-551.
• “High-resolution three-photon spectroscopy and multiphoton interference in molecular hydrogen,” & , Physical Review A 43, 6406–6409.
• Precise multiphoton spectroscopy of the $\ce{H2}$, $\ce{HD}$, and $\ce{D2}$ molecules and a new determination of the ionization potential of $\ce{HD}$, , Ph.D. dissertation (Yale University).
• “Recent experiments on trapped ions at the National Institute of Standards and Technology,” et al., in Proceedings of the Enrico Fermi summer school on laser manipulation of atoms and ions, July 1991, Varenna, Italy, 553-567.

### 1989
• “Measurement of high Rydberg states and the ionization potential of $\ce{H2}$,” et al., Physical Review A 39, 2260–2263.

### 1988
• “Precise multiphoton spectroscopy of $\ce{H2}$,” , , & , in Advances in laser spectroscopy III, 331-333.

### 1987
• “Precise multiphoton spectroscopy of excited states of $\ce{H2}$,” & , in Advances in laser spectroscopy II, 388-390.
• “Precise photodissociation and multiphoton spectroscopy of $\ce{H2}$,” , , & , in Proceedings of the XV International Conference on Quantum Electronics, 58-60 (Optical Society of America).
• “Precise two-photon spectroscopy of $E\leftarrow X^*$ intervals in $\ce{H2}$,” et al., Physical Review A 36, 3486–3489.

Much of my research has centered on understanding environmental stress in coupled human-natural systems. I work in interdisciplinary teams of social & behavioral scientists, natural scientists, and engineers, and my focus is often on using statistical tools and computational modeling to integrate the societal, natural, and constructed aspects. My work is divided between studying policy responses to climate change and water scarcity in the United States and studying impacts and adaptations to environmental stress in rural farming communities in Bangladesh and Sri Lanka. Some of the tools I use heavily in this work are Bayesian statistical analysis and agent-based modeling.
2018-11-18 18:31:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5164235234260559, "perplexity": 13831.982568348187}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039744561.78/warc/CC-MAIN-20181118180446-20181118202446-00348.warc.gz"}
https://lists.gnu.org/archive/html/lilypond-user/2018-04/msg00357.html
lilypond-user mailing list

## Re: \include command and local network folders

From: David Wright
Subject: Re: \include command and local network folders
Date: Thu, 12 Apr 2018 15:58:07 -0500
User-agent: Mutt/1.5.21 (2010-09-15)

```
On Thu 12 Apr 2018 at 12:13:21 (-0700), foxfanfare wrote:
> David Wright wrote
> > I see no filenames. I only see //192.168.1.13/Public/test.ly which
> > looks like an incomplete URL, but lacking its protocol (like HTTP:).
> > I can only assume when you mapped the files to a drive letter,
> > you got the syntax correct. Would this reference help?
> >
> > https://msdn.microsoft.com/en-us/library/windows/desktop/aa365247(v=vs.85).aspx
>
> I don't understand David : \\192.168.0.13\Public\test.ly is a correct
> Windows path.

Then I think that's the filename you need to use. I assume "Starting
lilypond-windows…" means you're running on a windows box.

> I just changed the original "\" with the unix "/" for Frescobaldi to deal
> with (//192.168.0.13/Public/test.ly)
> I think it is named SMB right?

Samba implements SMB on unix, yes, but programs on windows should use
their own filename syntax to reference the files transparently.

In linux, you'd typically see something like //192.168.0.13/Public if you
were mounting a share as a client. Similarly you could only hand a
filename like //192.168.0.13/Public/test.ly to a client that already knew
what protocol to use, like smbclient.

So try using the filename you wanted to specify, rather than second
guessing the way the system deals with it. I'd be interested
```
2019-08-21 15:10:26
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9210169911384583, "perplexity": 11963.81421423557}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316021.66/warc/CC-MAIN-20190821131745-20190821153745-00393.warc.gz"}
https://halshs.archives-ouvertes.fr/halshs-00340381
# How to score alternatives when criteria are scored on an ordinal scale

Abstract: We address in this paper the problem of scoring alternatives when they are evaluated with respect to several criteria on a finite ordinal scale $E$. We show that in general, the ordinal scale $E$ has to be refined or shrunk in order to be able to represent the preference of the decision maker by an aggregation operator belonging to the family of mean operators. The paper recalls previous theoretical results of the author giving necessary and sufficient conditions for a representation of preferences, and then focusses on describing practical algorithms and examples.

Document type: Journal articles. Contributor: Michel Grabisch. Submitted on: Thursday, November 20, 2008.

### Citation

Michel Grabisch. How to score alternatives when criteria are scored on an ordinal scale. Journal of Multi-Criteria Decision Analysis, Wiley, 2008, 15 (1-2), pp. 31-44. ⟨10.1002/mcda.422⟩. ⟨halshs-00340381⟩
2019-11-17 09:45:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3123733103275299, "perplexity": 2126.8368036984784}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668910.63/warc/CC-MAIN-20191117091944-20191117115944-00492.warc.gz"}
https://thomas.arildsen.org/category/signal-processing/
## Category: Signal processing ### Magni 1.6.0 released Our newest version of the Magni software package was just released on the 2nd of November. This particular release has some interesting features we (the team behind the Magni package) hope some of you find particularly interesting. The major new features in this release are approximate message passing (AMP) and generalised approximate message passing (GAMP) estimation algorithms for signal reconstruction. These new algorithms can be found in the magni.cs.reconstruction.amp and magni.cs.reconstruction.gamp modules, respectively. Note that the magni.cs sub-package contains algorithms applicable to compressed sensing (CS) and CS-like reconstruction problems in general – and not just atomic force microscopy (AFM). If you are not familiar with the Magni package and are interested in compressed sensing and/or atomic force microscopy, we invite you to explore the functionality the package offers. It also contains various iterative thresholding reconstruction algorithms, dictionary and measurement matrices for 1D and 2D compressed sensing, various features for combining this with AFM imaging, and mechanisms for validating function input and storing meta-data to aid reproducibility. The Magni package was designed and developed with a strong focus on well-tested, -validated and -documented code. The Magni package is a product of the FastAFM research project. • The package can be found on GitHub where we continually release new versions: GitHub – release 1.6.0 here. • The package documentation can be read here: Magni documentation • The package can be installed from PyPI or from Anaconda. ### iTWIST’16 Keynote Speakers: Gerhard Wunder iTWIST’16 is starting less than two weeks from now and we have 46 participants coming to Aalborg for the event (and I can still squeeze in a couple more – single-day registrations possible – so contact me if you are interested; only 4 places left before I have to order a bigger bus for the banquet dinner 🙂 ). Our next keynote speaker in line for the event is Gerhard Wunder, head of the Heisenberg Communications and Information Theory Group. Gerhard Wunder recently came to Freie Universität Berlin from Technische Universität Berlin. Dr. Wunder is currently heading two research projects: the EU FP7 project 5GNOW and PROPHYLAXE funded by the German Ministry of Education and Research and is a member of the management team of the EU H2020 FANTASTIC-5G project. Currently he receives funding in the German DFG priority programs SPP 1798 CoSIP (Compressed Sensing in Information Processing), and the upcoming SPP 1914 Cyber-Physical Networking. Gerhard Wunder conducts research in wireless communication technologies and has recently started introducing principles of sparsity and compressed sensing into wireless communication. As an example of this, Gerhard Wunder recently published the paper “Sparse Signal Processing Concepts for Efficient 5G System Design” in IEEE Access together with Holger Boche, Thomas Strohmer, and Peter Jung. At the coming iTWIST workshop, Gerhard Wunder is going to introduce us to the use of compressive sensing in random access medium access control (MAC), applied in massive machine-type communications – a major feature being extensively researched for coming 5G communication standards. The abstract of Dr. 
Wunder’s talk reads:

Compressive Coded Random Access for 5G Massive Machine-type Communication

Massive Machine-type Communication (MMC) within the Internet of Things (IoT) is an important future market segment in 5G, but it is not yet efficiently supported in cellular systems. A major challenge in MMC is the very unfavorable payload-to-control-overhead ratio due to small messages and oversized Medium Access (MAC) procedures. In this talk we follow up on a recent concept called Compressive Coded Random Access (CCRA), combining advanced MAC protocols with Compressed Sensing (CS) based multiuser detection. Specifically, we introduce a “one shot” random access procedure where users can send a message without a priori synchronizing with the network. In this procedure a common overloaded control channel is used to jointly detect sparse user activity and sparse channel profiles. In the same slot, data is detected based on the already available information. In the talk we show how CS algorithms, and in particular the concept of hierarchical sparsity, can be used to design efficient and scalable access protocols. The CCRA concept is introduced in full detail and further generalizations are discussed. We present algorithms and analysis that prove the additional benefit of the concept.

### iTWIST’16 Keynote Speakers: Holger Rauhut

At this year’s international Travelling Workshop on Interactions between Sparse models and Technology (iTWIST) we have keynote speakers from several different scientific backgrounds. Our next speaker is a mathematician with a solid track record in compressed sensing and matrix/tensor completion: Holger Rauhut. Holger Rauhut is Professor for Mathematics and Head of Chair C for Mathematics (Analysis) at RWTH Aachen University. Professor Rauhut came to RWTH Aachen in 2013 from a position as Professor for Mathematics at the Hausdorff Center for Mathematics, University of Bonn, which he had held since 2008. Professor Rauhut has, among many other things, written the book A Mathematical Introduction to Compressive Sensing together with Simon Foucart and published important research contributions about structured random matrices. At the coming iTWIST workshop I am looking very much forward to hearing Holger Rauhut speak about low-rank tensor recovery. This is especially interesting because, while the compressed sensing (one-dimensional) and matrix completion (two-dimensional) problems are relatively straightforward to solve, things start getting much more complicated when you try to generalise from ordinary vectors or matrices to higher-order tensors. Algorithms for the general higher-dimensional case seem to be much more elusive, and I am sure that Holger Rauhut can enlighten us on this topic (joint work with Reinhold Schneider and Zeljka Stojanac):

Low rank tensor recovery

An extension of compressive sensing predicts that matrices of low rank can be recovered from incomplete linear information via efficient algorithms, for instance nuclear norm minimization. Low rank representations become much more efficient when passing from matrices to tensors of higher order, and it is of interest to extend algorithms and theory to the recovery of low rank tensors from incomplete information. Unfortunately, many problems related to matrix decompositions become computationally hard and/or hard to analyze when passing to higher order tensors. This talk presents two approaches to low rank tensor recovery together with (partial) results.
The first one extends iterative hard thresholding algorithms to the tensor case and gives a partial recovery result based on a variant of the restricted isometry property. The second one considers relaxations of the tensor nuclear norm (which itself is NP-hard to compute) and corresponding semidefinite optimization problems. These relaxations are based on so-called theta bodies, a concept from convex algebraic geometry. For both approaches numerical experiments are promising, but a number of open problems remain.

### iTWIST’16 Keynote Speakers: Florent Krzakala

Note: You can still register for iTWIST’16 until Monday the 1st of August! Our next speaker at iTWIST’16 is Florent Krzakala. Much like Phil Schniter – the previous speaker presented here – Florent Krzakala has made important and enlightening contributions to the Approximate Message Passing family of algorithms. Florent Krzakala is Professor of Physics at École Normale Supérieure in Paris, France. Professor Krzakala came to ENS in 2013 from a position as Maître de conférences at ESPCI, Paris (Laboratoire de Physico-chimie Theorique), which he had held since 2004. Maître de conférences is a particular French academic designation that I am afraid I am going to have to ask my French colleagues to explain to me 😉 Where Phil Schniter seems to have approached the (G)AMP algorithms, which have become quite popular for compressed sensing, from an estimation-algorithms-in-digital-communications background, Florent Krzakala has approached the topic from a statistical physics background, which seems to have brought a lot of interesting new insight to the table. For example, together with Marc Mézard, Francois Sausset, Yifan Sun, and Lenka Zdeborová, he has shown how AMP algorithms are able to perform impressively well compared to the classic l1-minimization approach by using a special kind of so-called “seeded” measurement matrices in “Probabilistic reconstruction in compressed sensing: algorithms, phase diagrams, and threshold achieving matrices”. At this year’s iTWIST workshop in a few weeks, Professor Krzakala is going to speak about matrix factorisation problems and the approximate message passing framework. Specifically, we are going to hear about:

Approximate Message Passing and Low Rank Matrix Factorization Problems

A large number of interesting problems in machine learning and statistics can be expressed as low rank structured matrix factorization problems, such as sparse PCA, planted clique, sub-matrix localization, clustering of mixtures of Gaussians or community detection in a graph. I will discuss how recent ideas from statistical physics and information theory have led, on the one hand, to new mathematical insights in these problems, leading to a characterization of the optimal possible performances, and on the other to the development of new powerful algorithms, called approximate message passing, which turn out to be optimal for a large set of problems and parameters.

### iTWIST’16 Keynote Speakers: Phil Schniter

With only one week left to register for iTWIST’16, I am going to walk you through the rest of our keynote speakers this week. Our next speaker is Phil Schniter. Phil Schniter is Professor in the Department of Electrical and Computer Engineering at Ohio State University, USA. Professor Schniter joined the Department of Electrical and Computer Engineering at OSU after graduating with a PhD in Electrical Engineering from Cornell University in 2000.
Phil Schniter also has industrial experience from Tektronix from 1993 to 1996 and has been a visiting professor at Eurecom (Sophia Antipolis, France) from October 2008 through February 2009, and at Supelec (Gif sur Yvette, France) from March 2009 through August 2009. Professor Schniter has published an impressive selection of research papers; previously especially within digital communication. In recent years he has been very active in the research around generalised approximate message passing (GAMP). GAMP is an estimation framework that has become popular in compressed sensing / sparse estimation. The reasons for the success of this algorithm (family), as I see it, are that the algorithm estimates under-sampled sparse vectors with comparable accuracy to the classic l1-minimisation approach in compressed sensing and favourable computational complexity. At the same time, the framework is easily adaptable to many kinds of different signal distributions and other types of structure than plain sparsity. If you are dealing with a signal that is not distributed according to the Laplace distribution that the l1-minimisation approach implies, you can adapt GAMP to this other (known) distribution and achieve better reconstruction capabilities than the l1-minimisation. Even if you don’t know the distribution, GAMP can also be modified to estimate it automatically and quite efficiently. This and many other details are among Professor Schniter’s contributions to the research on GAMP. At this year’s iTWIST, Phil Schniter will be describing recent work on robust variants of GAMP. In details, the abstract reads (and this is joint work with Alyson Fletcher and Sundeep Rangan): Robust approximate message passing Approximate message passing (AMP) has recently become popular for inference in linear and generalized linear models. AMP can be viewed as an approximation of loopy belief propagation that requires only two matrix multiplies and a (typically simple) denoising step per iteration, and relatively few iterations, making it computationally efficient. When the measurement matrix “A” is large and well modeled as i.i.d. sub-Gaussian, AMP’s behavior is closely predicted by a state evolution. Furthermore, when this state evolution has unique fixed points, the AMP estimates are Bayes optimal. For general measurement matrices, however, AMP may produce highly suboptimal estimates or not even converge. Thus, there has been great interest in making AMP robust to the choice of measurement matrix. In this talk, we describe some recent progress on robust AMP. In particular, we describe a method based on an approximation of non-loopy expectation propagation that, like AMP, requires only two matrix multiplies and a simple denoising step per iteration. But unlike AMP, it leverages knowledge of the measurement matrix SVD to yield excellent performance over a larger class of measurement matrices. In particular, when the Gramian A’A is large and unitarily invariant, its behavior is closely predicted by a state evolution whose fixed points match the replica prediction. Moreover, convergence has been proven in certain cases, with empirical results showing robust convergence even with severely ill-conditioned matrices. Like AMP, this robust AMP can be successfully used with non-scalar denoisers to accomplish sophisticated inference tasks, such as simultaneously learning and exploiting i.i.d. signal priors, or leveraging black-box denoisers such as BM3D. 
We look forward to describing these preliminary results, as well as ongoing research, on robust AMP.

### iTWIST’16 Keynote Speakers: Karin Schnass

Last week we heard about the first of our keynote speakers at this year’s iTWIST workshop in August – Lieven Vandenberghe. Next up on my list of speakers is Karin Schnass. Karin Schnass is an expert on dictionary learning and heading an FWF-START project on dictionary learning in the Applied Mathematics group in the Department of Mathematics at the University of Innsbruck. Karin Schnass joined the University of Innsbruck in December 2014 as part of an Erwin Schrödinger Research Fellowship, returning from a research position at the University of Sassari, Italy, from 2012 to 2014. She originally graduated from the University of Vienna, Austria, with a master in mathematics with distinction: “Gabor Multipliers – A Self-Contained Survey”. She graduated in 2009 with a PhD in computer, communication and information sciences from EPFL, Switzerland: “Sparsity & Dictionaries – Algorithms & Design”. Karin Schnass has, among other things, introduced the iterative thresholding and K-means (ITKM) algorithms for dictionary learning and published the first theoretical paper on dictionary learning (on arXiv) with Rémi Gribonval.

At our workshop this August, I am looking forward to hearing Karin Schnass talk about Sparsity, Co-sparsity and Learning. In compressed sensing, the so-called synthesis model has been the prevailing model since the beginning. First, we have the measurements $y = Mx$. From the measurements, we can reconstruct the sparse vector x by solving this convex optimisation problem: $\underset{x}{\mathrm{argmin}} \| x \|_1 \ \text{s.t.} \ y = Mx$. If the vector x we can observe is not sparse, we can still do this if we can find a sparse representation α of x in some dictionary D, i.e. $x = D\alpha$, where we take our measurements of x using some measurement matrix M, so that $y = MD\alpha$, and we reconstruct the sparse vector α as follows: $\underset{\alpha}{\mathrm{argmin}} \| \alpha \|_1 \ \text{s.t.} \ y = MD\alpha$. The above is called the synthesis model because it works by using some sparse vector α to synthesize the vector x that we observe. There is an alternative to this model, called the analysis model, where we analyse an observed vector x to find some sparse representation β of it: $\beta = D'x$. Here D’ is also a dictionary, but it is not the same dictionary as in the synthesis case. We can now reconstruct the vector x from the measurements y as follows: $\underset{x}{\mathrm{argmin}} \| D'x \|_1 \ \text{s.t.} \ y = Mx$. Now if D is a (square) orthonormal matrix such as an IDFT, we can consider D’ a DFT matrix and they are simply each other’s inverse. In this case, the synthesis and analysis reconstruction problems above are equivalent. The interesting case is when the synthesis dictionary D is a so-called over-complete dictionary – a fat matrix. The analysis counterpart of this is a tall analysis dictionary D’, which behaves differently from the fat synthesis dictionary. Karin will give an overview of the synthesis and the analysis model and talk about how to learn dictionaries that are useful for either case. Specifically, she plans to tell us about (joint work with Michael Sandbichler): While (synthesis) sparsity is by now a well-studied low complexity model for signal processing, the dual concept of (analysis) co-sparsity is much less investigated but equally promising. We will first give a quick overview of both models and then turn to optimisation formulations for learning sparsifying dictionaries as well as co-sparsifying (analysis) operators. Finally we will discuss the resulting learning algorithms and ongoing research directions.
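To make the shapes in the synthesis model concrete, here is a small numpy sketch (my own illustration with made-up dimensions, not taken from the post): a length-256 vector that is 5-sparse in an overcomplete dictionary, observed through 64 random measurements y = MDα:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m, k = 256, 512, 64, 5        # signal dim, dictionary atoms, measurements, sparsity

D = rng.standard_normal((n, d))     # fat synthesis dictionary (overcomplete: d > n)
D /= np.linalg.norm(D, axis=0)      # normalise the atoms

alpha = np.zeros(d)                 # sparse synthesis coefficients
alpha[rng.choice(d, k, replace=False)] = rng.standard_normal(k)

x = D @ alpha                       # synthesised signal; generally not sparse itself
M = rng.standard_normal((m, n)) / np.sqrt(m)   # measurement matrix
y = M @ x                           # compressed measurements: y = M D alpha

print(x.shape, y.shape)             # (256,) (64,): 64 numbers encode a 5-sparse signal
```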
### iTWIST’16 Keynote Speakers: Lieven Vandenberghe

The workshop program has been ready for some time now, and we are handling the final practicalities to be ready to welcome you in Aalborg in August for the iTWIST’16 workshop. So now I think it is time to start introducing you to our – IMO – pretty impressive line-up of keynote speakers. First up is Prof. Lieven Vandenberghe from UCLA. Prof. Vandenberghe is an expert on convex optimisation and signal processing and is – among other things – well known for his fundamental textbook “Convex Optimization” together with Steven Boyd. Lieven Vandenberghe is Professor in the Electrical Engineering Department at UCLA. He joined UCLA in 1997, following postdoctoral appointments at K.U. Leuven and Stanford University, and has held visiting professor positions at K.U. Leuven and the Technical University of Denmark. In addition to “Convex Optimization”, he also edited the “Handbook of Semidefinite Programming” with Henry Wolkowicz and Romesh Saigal. At iTWIST, I am looking forward to hearing him speak about Semidefinite programming methods for continuous sparse optimization. So far, it is my impression that most theory and literature about compressed sensing and sparse methods has relied on discrete dictionaries consisting of a basis or frame of individual dictionary atoms. If we take the discrete Fourier transform (DFT) as an example, the dictionary has fixed atoms corresponding to a set of discrete frequencies. More recently, theories have started emerging that allow continuous dictionaries instead (see for example the work of Ben Adcock, Anders Hansen, Bogdan Roman et al.). As far as I understand, this is a generalisation that in principle allows you to get rid of the discretised atoms and consider any atoms on the continuum “in between” as well. This is what Prof. Vandenberghe has planned for us so far (and this is joint work with Hsiao-Han Chao): We discuss extensions of semidefinite programming methods for 1-norm minimization over infinite dictionaries of complex exponentials, which have recently been proposed for superresolution and gridless compressed sensing. We show that results related to the generalized Kalman-Yakubovich-Popov lemma in linear system theory provide simple constructive proofs for the semidefinite representations of the penalties used in these problems. The connection leads to extensions to more general dictionaries associated with linear state-space models and matrix pencils. The results will be illustrated with applications in spectral estimation, array signal processing, and numerical analysis.

### iTWIST’16 is taking shape

This year’s international Traveling Workshop on Interactions Between Sparse Models and Technology is starting to take shape now. The workshop will take place on the 24th-26th of August 2016 in Aalborg. See also this recent post about the workshop. [Photo of Aalborg by Alan Lam (CC-BY-ND)] Aalborg is a beautiful city in the northern part of Denmark and what many of you probably do not know is that Aalborg actually scored “Europe’s happiest city” in a recent survey by the European Commission. It is now possible to register for the workshop, and if you are quick and register before July, you get it all for only 200€. That is, three days of workshop, including lunches and a social event with dinner on Thursday evening. There are plenty of good reasons to attend the workshop.
In addition to the many exciting contributed talks and posters that we are now reviewing, we have an impressive line-up of 9 invited keynote speakers! I will be presenting what the speakers have in store for you here on this blog in the coming days.

### international Traveling Workshop on Interactions between Sparse models and Technology

On the 24th to 26th of August 2016, we are organising a workshop called international Traveling Workshop on Interactions between Sparse models and Technology (iTWIST). iTWIST is a biennial workshop organised by a cross-European committee of researchers and academics on theory and applications of sparse models in signal processing and related areas. The workshop has so far taken place in Marseille, France in 2012 and in Namur, Belgium in 2014. I was very excited to learn last fall that the organising committee of the previous two instalments of the workshop had the confidence to let Morten Nielsen and me organise the workshop in Aalborg (Denmark) in 2016.

## Themes

This year, the workshop continues many of the themes from the first two years and adds a few new ones:

• Sparsity-driven data sensing and processing (e.g., optics, computer vision, genomics, biomedical, digital communication, channel estimation, astronomy)
• Application of sparse models in non-convex/non-linear inverse problems (e.g., phase retrieval, blind deconvolution, self calibration)
• Approximate probabilistic inference for sparse problems
• Sparse machine learning and inference
• “Blind” inverse problems and dictionary learning
• Optimization for sparse modelling
• Information theory, geometry and randomness
• Sparsity? What’s next?
• Discrete-valued signals
• Union of low-dimensional spaces,
• Cosparsity, mixed/group norm, model-based, low-complexity models, …
• Matrix/manifold sensing/processing (graph, low-rank approximation, …)
• Complexity/accuracy tradeoffs in numerical methods/optimization
• Electronic/optical compressive sensors (hardware)

I would like to point out here, as Igor Carron mentioned recently, that HW designs are also very welcome at the workshop – it is not just theory and thought experiments. We are very interested in getting a good mix between theoretical aspects and applications of sparsity and related techniques.

## Keynote Speakers

I am very excited to be able to present a range of IMO very impressive keynote speakers covering a wide range of themes:

• Lieven Vandenberghe – University of California, Los Angeles – homepage
• Gerhard Wunder – TU Berlin & Fraunhofer Institute – homepage
• Holger Rauhut – RWTH Aachen – homepage
• Petros Boufounos – Mitsubishi Electric Research Labs – homepage
• Florent Krzakala and Eric Tramel – ENS Paris – homepage
• Phil Schniter – Ohio State University – homepage
• Karin Schnass – University of Innsbruck – homepage
• Rachel Ward – University of Texas at Austin – homepage
• Bogdan Roman – University of Cambridge – homepage

The rest of the workshop is open to contributions from the research community. Please send your papers (in the form of 2-page extended abstracts – see details here). Your research can be presented as an oral presentation or a poster. If you prefer, you can state your preference (paper or poster) during the submission process, but we cannot guarantee that we can honour your request and reserve the right to assign papers to either category in order to put together a coherent programme.
Please note that we consider oral and poster presentations equally important – poster presentations will not be stowed away in a dusty corner during coffee breaks but will have one or more dedicated slots in the programme!

## Open Science

In order to support open science, we strongly encourage authors to publish any code or data accompanying their paper in a publicly accessible repository, such as GitHub, Figshare, Zenodo etc. The proceedings of the workshop will be published in arXiv as well as SJS in order to make the papers openly accessible and encourage post-publication discussion.

### Compressed Sensing – and more – in Python

The availability of compressed sensing reconstruction algorithms for Python has so far been quite scarce. A new software package improves on this situation. The package PyUnLocBox from the LTS2 lab at EPFL is a convex optimisation toolbox using proximal splitting methods. It can, among other things, be used to solve the regularised version of the LASSO/BPDN optimisation problem used for reconstruction in compressed sensing: $\underset{x}{\mathrm{argmin}} \| Ax - y \|_2 + \tau \| x \|_1$ Heard through Pierre Vandergheynst. I have yet to find out if it also solves the constrained version. Update: Pierre Vandergheynst informed me that the package does not yet solve the constrained version of the above optimisation problem, but it is coming: $\underset{x}{\mathrm{argmin}} \quad \| x \|_1 \\ \text{s.t.} \quad \| Ax - y \|_2 < \epsilon$
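Since the post is about solving this kind of problem in Python, here is a minimal self-contained sketch of ISTA (iterative soft-thresholding), a basic proximal splitting method for the closely related squared-loss form min 0.5*||Ax - y||_2^2 + τ||x||_1. This is a generic illustration written for this text, not the PyUnLocBox API, and the problem sizes below are made up:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, tau, n_iter=500):
    """Minimise 0.5 * ||A x - y||_2^2 + tau * ||x||_1 by proximal gradient steps."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - (A.T @ (A @ x - y)) / L, tau / L)
    return x

# Tiny synthetic compressed sensing instance (hypothetical sizes):
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[3, 30, 77]] = [1.0, -2.0, 1.5]
x_hat = ista(A, A @ x_true, tau=0.01)
print(np.flatnonzero(np.abs(x_hat) > 0.1))  # ideally recovers indices {3, 30, 77}
```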
2018-08-15 09:27:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31381773948669434, "perplexity": 1647.2693783559232}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221210040.24/warc/CC-MAIN-20180815083101-20180815103101-00005.warc.gz"}
https://cs.stackexchange.com/questions/45722/sublinear-search-of-variables-in-a-term
# Sublinear search of variables in a term

Suppose we have a forest. The leaves have labels. Let's suppose all labels are natural numbers. We would like the forest to support two operations:

• rebasing, which replaces a leaf of the first tree with the second tree. void rebase(node leaf, node root)
• leaf search by the (possibly internal) node of any tree and a label we'd like to find in a subtree with a root in that node. It should return either a reference to the leaf that has that label or a failure. In case there are two leaves with the same label, it should return any of them. optional<node> search(node root, int label)

Which data structure could help implement both of these operations in sublinear (i.e. O(1) or O(log n)) time? Is there a popular name for this problem? Currently I'm using an O(N) search that visits via DFS every node of the subtree and compares labels to the given one.

A pretty obvious solution would be to have a doubly-linked list of leaves for each tree in the forest, with leaves listed in left-to-right traversal order. Every node would store two pointers that mark the beginning and the end of the part of that DLL that contains all leaves of the subtree with the root in that node. In such a way, rebase could insert one DLL into the middle of the other, and search would need to traverse only the leaves of the subtree. But that doesn't decrease the complexity of search below linear, because it still requires O(N) leaf visits and label comparisons.

Another solution is to have a persistent binary search tree on each node of the original tree. Rebasing would require O(h n) operations to rebuild search trees on all the nodes from the insertion point to the root, and search would be O(log n). But this approach seems overcomplicated and requires a garbage collector to implement properly.

• 1. What are your thoughts? What research have you done? What approaches have you considered? We expect you to do a significant amount of research before asking, and to show us in the question what you've done. See cs.stackexchange.com/help/how-to-ask. 2. Can you edit your question to specify more clearly what you mean by "leaf search"? I can't understand your sentence. (Also, what does it have to do with leaves?) – D.W. Aug 30 '15 at 21:49
• @D.W. I've skipped the research part because I hadn't achieved anything interesting on this task. Fixed. – polkovnikov.ph Aug 30 '15 at 22:11
• 1. I still can't understand what the sentence about "leaf search" means. What is "a subtree with a root in that node"? How does the root argument affect what search() should return? The description "It should return..." makes it sound like the desired return value depends only on label but not on root. 2. Have you looked at balanced search trees, hash tables, etc.? Each of them should support lookup, deletion, and insertion in $O(\lg n)$ time, which seems like it might be enough to implement your operations (depending on what you mean by leaf search). – D.W. Aug 31 '15 at 4:33
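For concreteness, here is a short Python sketch (my own, with a hypothetical node layout) of the O(N) baseline the question describes: labeled leaves, a DFS search from an arbitrary node, and rebase as leaf replacement:

```python
class Node:
    def __init__(self, label=None, children=None):
        self.label = label              # set only on leaves
        self.children = children or []  # an empty list means this node is a leaf
        self.parent = None
        for c in self.children:
            c.parent = self

def search(root, label):
    """O(N) DFS: return some leaf under `root` carrying `label`, or None."""
    stack = [root]
    while stack:
        node = stack.pop()
        if not node.children:
            if node.label == label:
                return node
        else:
            stack.extend(node.children)
    return None

def rebase(leaf, root):
    """Replace `leaf` (a leaf of one tree) with the tree rooted at `root`."""
    parent = leaf.parent
    parent.children[parent.children.index(leaf)] = root
    root.parent = parent

# Build a small tree, splice another tree in at the leaf labeled 2:
t = Node(children=[Node(label=1), Node(children=[Node(label=2)])])
rebase(search(t, 2), Node(children=[Node(label=3), Node(label=4)]))
print(search(t, 4) is not None)  # True: label 4 is now reachable from t
```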
2019-08-20 01:28:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5047558546066284, "perplexity": 1002.6029340830087}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315174.57/warc/CC-MAIN-20190820003509-20190820025509-00071.warc.gz"}
https://www.nature.com/articles/s41598-017-16751-1?error=cookies_not_supported&code=5d10664a-f535-4e77-986a-e3b12bab3e87
# Fracture toughness and structural evolution in the TiAlN system upon annealing ## Abstract Hard coatings used to protect engineering components from external loads and harsh environments should ideally be strong and tough. Here we study the fracture toughness, K IC, of Ti1−xAlxN upon annealing by employing micro-fracture experiments on freestanding films. We found that K IC increases by about 11% when annealing the samples at 900 °C, because the decomposition of the supersaturated matrix leads to the formation of nanometer-sized domains, precipitation of hexagonal-structured B4 AlN (with its significantly larger specific volume), formation of stacking faults, and nano-twins. In contrast, for TiN, where no decomposition processes and no formation of nanometer-sized domains can be initiated by an annealing treatment, the fracture toughness K IC remains roughly constant when annealed above the film deposition temperature. As the increase in K IC found for Ti1−xAlxN upon annealing is within statistical error, we carried out complementary cube corner nanoindentation experiments, which clearly show reduced (or even impeded) crack formation for annealed Ti1−xAlxN as compared with its as-deposited counterpart. The ability of Ti1−xAlxN to maintain and even increase its fracture toughness up to high temperatures, in combination with the concomitant age hardening effects and excellent oxidation resistance, contributes to the success of this type of coating. ## Introduction Hard coatings are applied to protect tool and component surfaces as well as entire devices in harsh environments and/or demanding application conditions. The coatings are usually ceramic materials, which are known for beneficial properties such as high hardness and wear resistance, high melting temperatures, high-temperature strength, chemical inertness and oxidation resistance. However, these materials often possess a relatively low (fracture) toughness. A certain degree of toughness, however, is crucial for the reliability and safe operation of critical components. Various strategies have been applied to enhance the fracture toughness of bulk materials1 and hard coatings2,3. Since the pioneering works in the nineteen eighties4,5, Ti1−xAlxN has evolved into one of the most widely used and industrially relevant hard coating systems6. Age hardening effects are (besides enhanced oxidation resistance7 and resistance against wear4,5 compared to TiN) considered to be the major basis for its industrial success. At temperatures typical of cutting tool operation, supersaturated face-centered cubic Ti1−xAlxN isostructurally decomposes into nanometer-sized AlN-rich and TiN-rich domains. This spinodal decomposition causes self-hardening effects8,9,10,11. Nonetheless, the influence of its characteristic thermally activated decomposition and the resulting self-organized nanostructure on the fracture toughness is yet to be studied. The present work revolves around the hypothesis that, besides the well-known self-hardening effects9, the fracture toughness of Ti1−xAlxN coatings also increases at elevated temperatures.
Potential fracture-toughness-enhancing mechanisms in the self-organized nanostructure of B1 AlN-rich and TiN-rich domains8,9,10,11 are based on coherency strains, spatially fluctuating elastic properties, and stress-induced phase transformation toughening from cubic to hexagonal AlN phases under volume expansion at the tip of a propagating crack, similar to yttria-stabilized zirconia bulk ceramics12 or Zr-Al-N based nanoscale multilayers13. We will also see that B4 AlN phase formation can play a key role in the fracture toughness evolution of Ti1−xAlxN. Using high-resolution transmission electron microscopy (HRTEM), we observed severely distorted B4 AlN with multiple stacking faults and indications of nano-twins. Twinning represents a mechanism capable of simultaneously enhancing strength and ductility in materials14. We carried out cantilever deflection (and cube corner nanoindentation) experiments to study the evolution of the fracture toughness of free-standing Ti1−xAlxN films ex-situ vacuum annealed at up to 1000 °C, and correlated the results with the structural evolution of the films and with the mechanical properties, hardness (H) and Young's modulus (E), obtained from independent experiments. The mechanical properties were corroborated with HRTEM investigations to give atomic-scale insights into the thermally decomposed Ti1−xAlxN structure. TiN coatings are used as a benchmark, as no decomposition processes are active in TiN that would lead to the formation of new nm-sized domains. ## Results ### Structural evolution Energy dispersive X-ray spectroscopy (EDXS) analysis yielded a chemical composition of Ti0.40Al0.60N. Due to the specific sputter conditions of the Ti0.5Al0.5 compound target, the coatings prepared are slightly richer in Al than the target for the deposition parameters used15. The oxygen content within the coatings is below 1 at.%, as obtained by elastic recoil detection analysis of coatings prepared under comparable conditions15. Figure 1a shows the X-ray diffraction patterns of our Ti0.40Al0.60N films grown onto Al2O3 $$(1\bar{1}02)$$ substrates after vacuum annealing at different annealing temperatures, T a, for 10 min. Up to 750 °C, Ti0.40Al0.60N maintains its single-phase face-centered cubic (rock-salt-type, B1) structure. The slight peak shift to higher 2θ angles and the decrease in peak broadening indicate recovery of built-in structural point and line defects, which results in a decrease of the lattice parameter of the films. The peak shift to higher 2θ angles also suggests B1 AlN formation (its lattice parameter is smaller than that of Ti0.40Al0.60N16, hence the diffraction peaks occur at higher 2θ angles). Between 850 and 1000 °C, an asymmetric peak broadening is observed, which indicates isostructural formation of cubic AlN- and TiN-rich domains. In particular, the right shoulder in the vicinity of the cubic (200) peak – indicative of cubic AlN formation – is clearly visible and becomes more pronounced with increasing temperature. Hexagonal (wurtzite-type, B4 structured) AlN first emerges at 850 °C and its phase fraction increases with increasing temperature. The shift of the XRD reflections from the major cubic structured Ti1−xAlxN matrix phase to lower 2θ angles is a result of decreasing Al content (hence, the XRD peaks shift towards the lower 2θ position of TiN).
On the other hand, compressive stresses, e.g., induced by the B1 to B4 phase transformation of AlN17 under a volume expansion of ~26%16 or by thermal stresses, contribute to the peak shift to lower 2θ angles (the thermal expansion coefficients, α, satisfy αB1-(Ti,Al)N > αAl2O3 > αB4-AlN, see refs18,19,20). The structural evolution of single-phase cubic structured TiN, Fig. 1b, is dominated by recovery of built-in structural point and line defects and results in smaller lattice parameters. Accordingly, the peaks are shifted to larger 2θ angles and become sharper with increasing temperature. Both Ti0.40Al0.60N and TiN crystallized in a polycrystalline structure. (For a thorough analysis of the crystallographic texture, further investigations would be necessary, e.g., pole figure measurements based on X-ray diffraction). ### TEM/HRTEM study TEM studies were performed on the sample annealed at 900 °C using cross-section samples. A low-magnification image presents an overview of the coating morphology (Fig. 2a), where columnar grains are clearly visible. At this annealing temperature, AlN-based hexagonal phases emerge. An atomic-resolution TEM image of one portion of the grain interfaces is shown in Fig. 2b; the corresponding fast Fourier transforms (FFTs) are seen on the right-hand side. The analysis indicates that a cubic structured Ti1−xAlxN grain is oriented along the [001] direction while the adjacent hexagonal AlN grain is close to the $$[21\bar{1}0]$$ direction, with an orientation relationship of Ti1−xAlxN $$(2\bar{2}0)$$//AlN (0001). This implies that hexagonal AlN (0001) grows on Ti1−xAlxN $$(2\bar{2}0)$$ planes with a small misfit along this direction of $$\delta = \frac{d_{220}^{TiAlN} - d_{1210}^{AlN}}{d_{220}^{TiAlN}} \approx 5.7\,\%$$. The corresponding FFTs clearly signify the plane relationship between these two phases. This has also been confirmed by tilting the grains to another orientation. Figure 2c shows one hexagonal AlN grain, grown in between two cubic Ti1−xAlxN grains, viewed along the $$[11\bar{2}0]$$ direction while Ti1−xAlxN is off the [001] zone axis, as illustrated in the corresponding (inserted) FFTs. Here, only a series of planes appears. The orientation relation is Ti1−xAlxN (220)//AlN $$(1\bar{1}00)$$ in this case. It is further noted that the planes in hexagonal AlN are severely distorted or inclined, which means that internal stress is strongly involved during the phase transformation. There are numerous defects present in the hexagonal AlN regions, for instance stacking faults and nano-twins, marked exemplarily with white arrows in Fig. 2c. In some cases, the AlN phase seems to form in the Ti1−xAlxN matrix, e.g. in Fig. 2b, since the FFT from AlN contains Ti1−xAlxN spots. However, hexagonal AlN frequently forms at the grain boundary, as demonstrated in Fig. 2c, in which the hexagonal AlN and Ti1−xAlxN phases are separated and formed in between two Ti1−xAlxN grains. Consequently, the AlN phase transformation (from cubic to hexagonal) can take place in the matrix and also at the grain boundaries, in agreement with earlier studies21. ### Nanoindentation The mechanical properties as a function of annealing temperature are presented in Fig. 3 and are in line with previous studies reported in the literature9. The indentation hardness (H), Fig. 3a, increases for Ti0.40Al0.60N (red curves) by ~9%, from 34 ± 1 GPa in the as-deposited state to 37 ± 2 GPa at 900 °C, before it decreases again down to 28 ± 2 GPa at 1000 °C. The Young's modulus (E), Fig. 3b, shows a similar trend.
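As a rough consistency check of the HRTEM misfit quoted in the TEM/HRTEM subsection above, δ can be evaluated from bulk-like lattice parameters; this is a sketch assuming a ≈ 4.16 Å for cubic Ti0.40Al0.60N and a ≈ 3.11 Å for wurtzite AlN (values we assume here for illustration, not measured in this work), with the hexagonal spacing taken as $$d_{1210}^{AlN} = a/2$$:

$$d_{220}^{TiAlN} = \frac{4.16\,\text{\AA}}{2\sqrt{2}} \approx 1.471\,\text{\AA}, \qquad d_{1210}^{AlN} = \frac{3.11\,\text{\AA}}{2} \approx 1.555\,\text{\AA},$$

$$\delta = \frac{1.471 - 1.555}{1.471} \approx -5.7\,\%,$$

which agrees in magnitude with the ≈5.7% misfit reported above.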
In contrast, the hardness of TiN (blue curves) steadily decreases with increasing T a, from 32 ± 1 GPa at room temperature to 27 ± 1 GPa at 850 °C (Fig. 3a), while the Young's modulus decreases only marginally (Fig. 3b). The deposition conditions chosen in the present study resulted in coatings with excellent mechanical properties in the as-deposited state. In general, age hardening effects are more pronounced for softer coatings; e.g., a relative increase of ~25% was observed for Ti1−xAlxN with an as-deposited hardness of 'only' ~26 GPa21. The elastic strain to failure22,23,24,25,26, (H/E), which is often used to qualitatively rate materials for their failure resistance, suggests superior properties of Ti0.40Al0.60N in comparison with TiN, Fig. 3c. While the (H/E) values for Ti0.40Al0.60N are maintained up to high temperatures and even increase, the (H/E) ratio of TiN is below that of Ti0.40Al0.60N in the as-deposited state and shows a steady decrease upon annealing above the deposition temperature. A similar trend can be observed for the plastic deformation resistance factor22,25,26, (H 3/E 2), shown in Fig. 3d, indicating superior wear resistance of Ti0.40Al0.60N in comparison with TiN. ### Micromechanical Testing Representative recorded force–deflection curves, given in Fig. 4a, show that Ti0.40Al0.60N and TiN deform linearly and elastically during loading by a PicoIndenter until failure. No indications of plastic deformation are seen. (Please note that the actual cantilever dimensions, lever arms, and pre-notch depths differ from sample to sample. Hence, Fig. 4a does not allow direct ranking of the samples with respect to their stiffness and fracture toughness). Figure 4b shows a typical free-standing cantilever. The substrate material had been removed by focused ion beam milling to avoid the influence of residual stresses and substrate interference. Scanning electron micrographs of the post-mortem fracture cross-sections, Fig. 4c,d, do not show discernible changes of the film morphology upon annealing. However, the structure of TiN (Fig. 4d) appears more columnar-grained in comparison with Ti0.40Al0.60N (Fig. 4c). The K IC values, as calculated from the maximum load at failure, the actual pre-notch depth, and the cantilever dimensions using a linear elastic fracture mechanics approach27, are presented in Fig. 5. The data suggest an increase in K IC from 2.7 ± 0.3 MPa∙√m in the as-deposited state to 3.0 ± 0.01 MPa∙√m at 900 °C, followed by a decrease to 2.8 ± 0.4 MPa∙√m at 1000 °C (red curve). The relative increase of ~11% in the fracture toughness of Ti0.40Al0.60N is similar to the relative increase in hardness of ~9%. Please note, however, that strictly speaking the increase in fracture toughness is within statistical error. Interestingly, the pronounced decrease in hardness at 1000 °C due to wurtzite AlN formation is not observed for K IC, which, in agreement with the H/E criterion, only slightly decreases. Lower K IC values of ~1.9 MPa∙√m are found for as-deposited and annealed TiN (blue curve). To qualitatively prove that K IC increases upon annealing, we carried out independent cube corner nanoindentation experiments on coated Al2O3 $$(1\bar{1}02)$$ substrates. Scanning electron microscopy images of the indents show hampered (or even impeded) crack formation for annealed Ti1−xAlxN samples as compared to the as-deposited counterpart, see Fig. 6.
Please note that in the cube corner experiment, residual stresses (e.g., massive compressive residual stresses forming due to the cubic to wurtzite AlN phase transformation under volume expansion) and the underlying substrate can influence the formation of cracks. ## Discussion The structural evolution of supersaturated cubic Ti1−xAlxN upon annealing has been experimentally proven in the literature by atom probe tomography11,28, small angle X-ray scattering29, and transmission electron microscopy30, and described by phase field simulations30: During the early stage, very few nanometer-sized B1 AlN- and TiN-rich domains form in a coherent manner (that is, the crystallographic orientation of the domains corresponds to that of the Ti1−xAlxN parent grain). With progressive annealing time, the domains gain in size and the compositional variations become more pronounced, so that the modulation amplitudes (Ti- and Al-rich) become larger. If the annealing is continued for too long or performed at higher temperatures, coherency strains are relieved by misfit dislocations. Eventually, cubic structured AlN-rich domains transform into the softer but thermodynamically stable (first (semi)coherent, then incoherent) hexagonal AlN. The cubic to hexagonal AlN phase transformation is associated with a large volume expansion of ~26%16. Thermally-induced hardening effects in the TiAlN system have been attributed to coherency strains9. Coherency strains hinder the movement of dislocations31, as it is more difficult for dislocations to pass through a strained lattice than through a homogeneous one. In addition, the coherent domains differ in their elastic properties due to the strongly composition-dependent elastic anisotropy of Ti1−xAlxN32, which also hinders dislocation motion and contributes to the hardness enhancement32. The structural evolution observed in the present study is in line with the literature reports mentioned above. Additionally, we have evidenced severely distorted or inclined lattice planes and numerous defects (including stacking faults) in the hexagonal AlN phase by HRTEM investigations (Fig. 2). This could explain why the measured hardness at 900 °C is relatively high despite the presence of the "soft" hexagonal AlN phase, which is usually reported to deteriorate the hardness. We have been able to show that besides age hardening effects, the fracture toughness increases upon annealing. Both properties show a similar relative increase of around 10% as compared to the as-deposited state and peak at the same temperature of 900 °C. This suggests that similar microstructural characteristics are responsible for the enhancement of both mechanical properties. We could demonstrate in an earlier study3 that a coherent nanostructure composed of alternating materials has the potential to enhance the fracture toughness for a certain bilayer period of a few nanometers. In the superlattice films, coherency strains33,34 and variations in the elastic properties are also present. It should be mentioned, however, that in contrast to the hardness, the fracture toughness is not primarily governed by the hindrance of dislocation motion: the load-displacement data collected during the cantilever deflection experiments (Fig. 4a) suggest a linear elastic behavior until failure with no indications of plastic deformation. In agreement with literature reports21, we found that cubic AlN forms preferentially at high-diffusivity paths such as grain boundaries.
If grain boundaries represent the weakest link where cracks preferentially propagate35, grain boundary reinforcement36 has the potential to effectively hinder crack propagation. Another important mechanism for increased fracture toughness is phase transformation toughening, which is omnipresent, for example, in partially stabilized zirconia bulk ceramics12. For Ti1−xAlxN coatings, the spinodally formed cubic structured AlN-rich domains represent the phase with the ability of a martensitic-like phase transformation from the metastable cubic structure to the stable wurtzite-type (w) variant. The associated volume expansion of ~26%16 slows down or closes advancing cracks, leading to a significant K IC increase. Therefore, the evolution of K IC with T a of our Ti0.40Al0.60N coatings is not proportional to that of H with T a, especially at temperatures above 850 °C. The hardness significantly decreases for an increase of T a from 950 to 1000 °C, as the w-AlN formation also significantly increases (please compare Figs. 1 and 3a), but at the same time the fracture toughness K IC only slightly decreases. The K IC value of 2.8 ± 0.4 MPa∙√m after annealing at 1000 °C is still above that of the as-deposited state (with K IC = 2.7 ± 0.3 MPa∙√m), whereas the hardness of H = 28 ± 2 GPa after annealing at 1000 °C is significantly below the as-deposited value of 34 ± 1 GPa. Hence, other effective toughening mechanisms are present in this type of material, especially when decomposition of the supersaturated matrix phase occurs and w-AlN based phases are able to form. Note that in the chosen free-standing cantilever setup, macro-stresses are relieved and thus do not contribute to the observed toughness enhancement. However, due to the large difference in molar volume between cubic and wurtzite AlN, the thermally-induced formation of hexagonal AlN results in pronounced compressive stresses17,37 in applications where the coatings are firmly attached to a substrate/engineering component. Compressive stresses result in apparent toughening of Ti1−xAlxN, as the coating can withstand higher tensile stresses before cracks are initiated (the compressive stresses have to be overcome before crack formation). The effect of compressive stresses on the fracture toughness is expected to be much more pronounced than their influence on the hardness. This is why, in real applications, the K IC increase upon annealing is expected to be significantly larger than the K IC enhancement found from free-standing micro-cantilever bending tests. This is reflected in the hampered crack formation observed in the cube corner experiments, see Fig. 6. As the 'inherent' fracture-toughness-enhancing effects are strongly connected with the spinodal decomposition, we anticipate that alloying38,39,40,41 and other concepts to modify the spinodal decomposition characteristics (formation of coherent cubic AlN domains at lower temperatures but delayed formation of the thermodynamically stable wurtzite AlN phase, different shape and size of cubic AlN domains) are applicable to optimize the self-toughening behavior. In general, alloying has the potential to enhance the inherent toughness by modifying the electronic structure and bonding characteristics42,43. The peak in hardness and fracture toughness at 900 °C corresponds to spinodally decomposed TiAlN with fractions of hexagonal AlN, as indicated by XRD (Fig. 1) and TEM (Fig. 2).
The severely distorted hexagonal AlN with multiple stacking faults suggests that nano-twinning might also become a relevant mechanism. The presence of twins impedes dislocation motion and induces strengthening, but multiple twinning systems can also enhance ductility by acting as carriers of plasticity14. Based on our results, we propose that the additional functionality of Ti1−xAlxN, i.e. its self-toughening ability at temperatures typical for many applications, contributes to the outstanding performance of Ti1−xAlxN coatings in, e.g., dry or high-speed cutting. ## Methods ### Sample preparation Ti0.40Al0.60N films were deposited in a lab-scale magnetron sputter system (a modified Leybold Heraeus Z400) equipped with a 3-inch powder-metallurgically processed Ti0.50Al0.50 compound target. Polished single-crystalline Al2O3 $$(1\bar{1}02)$$ platelets (10 × 10 × 0.53 mm3) were chosen as substrate materials due to their high thermal stability and inertness, and to avoid interdiffusion between film and substrate materials upon annealing up to 1000 °C. Before the deposition, the substrates (ultrasonically pre-cleaned in acetone and ethanol) were heated within the deposition chamber to 500 °C, thermally cleaned for 20 min, and sputter cleaned with Ar ions for 10 min. The deposition was performed at the same temperature in a mixed N2/Ar atmosphere with a gas flow ratio of 4 sccm/6 sccm and a constant total pressure of 0.35 Pa by setting the target current to 1 A (DC) while applying a DC bias voltage of −50 V to the substrates. The films were grown to a thickness of about 1.8 µm with an average deposition rate of about 75 nm/min. The base pressure was below 5·10−6 mbar. TiN coatings of about 1.2 µm were synthesized by powering a 3-inch Ti cathode with 500 W within an N2/Ar gas mixture (flow ratio of 3 sccm/7 sccm, constant total pressure of 0.4 Pa) and applying a bias voltage of −60 V to the substrates. The deposition rate was about 13 nm/min. Energy dispersive X-ray spectroscopy (EDXS) measurements of the films were performed with an EDAX Sapphire EDS detector inside a Philips XL-30 scanning electron microscope. Thin film standards characterized by elastic recoil detection analyses were used to calibrate the EDX measurements. The films on Al2O3 were annealed in a vacuum furnace (Centorr LF22-2000, base pressure <3·10−3 Pa) at different maximum temperatures (T a) between 750 and 1000 °C using a heating rate of 20 °C min−1 and passive cooling. At T a, the temperature was kept constant for 10 min. Structural investigations of the coated Al2O3 substrates were performed by X-ray diffraction in symmetric Bragg-Brentano geometry using a PANalytical X'Pert Pro MPD diffractometer (Cu-Kα radiation). Cross-sectional TEM specimens were prepared using a standard TEM sample preparation approach including cutting, gluing, grinding, and dimpling. Finally, Ar ion milling was carried out. A JEOL 2100 F field emission microscope (200 kV) equipped with an image-side CS-corrector, providing a resolution of 1.2 Å at 200 kV, was used. The aberration coefficients were set to be sufficiently small, i.e. CS ~ 10.0 μm. The HRTEM images were taken under a slight over-focus and carefully analysed using the Digital Micrograph software. ### Micromechanical testing The mechanical properties, hardness and indentation modulus, were measured using a UMIS nanoindenter equipped with a Berkovich tip. At least 30 indents per sample, with increasing loads from 3 to 45 mN, were performed.
The recorded data were evaluated using the Oliver and Pharr method44. To minimize substrate interference, only indents with indentation depths below 10% of the coating thickness were taken into account. The cube corner experiments were carried out with the UMIS nanoindenter using a peak indentation load of 150 mN. The high load needed to create cracks resulted in indentation depths of about 1.3 µm in the cube corner experiment. The fracture toughness was determined from micromechanical cantilever bending tests of free-standing film material. As-deposited and annealed coated Al2O3 samples were broken and their cross-sections carefully polished. The substrate material was removed by Focused Ion Beam (FIB) milling perpendicular to the film growth direction using a FEI Quanta 200 3D DBFIB workstation. Then the sample holder was tilted by 90° and cantilevers were milled perpendicular to the film surface. The cantilever dimensions of t × t × 6t μm3, with t denoting the film thickness, were chosen based on guidelines reported in Brinckmann et al.45 For the final milling step, the ion beam current was reduced to 500 pA; the initial notch was milled with 50 pA. To circumvent the problem of a finite notch root radius in the fracture toughness measurements, bridged notches according to Matoy et al.27 were used (the notch length was chosen to be 0.75t). The micromechanical experiments were performed inside a scanning electron microscope (FEI Quanta 200 FEGSEM) using a PicoIndenter (Hysitron PI85) equipped with a spherical diamond tip with a nominal tip radius of 1 μm. The micro-cantilever beams were loaded displacement-controlled at 5 nm/s with the loading axis perpendicular to the film surface. At least 3 tests were conducted per annealing temperature. The fracture toughness, K IC, was determined using linear elastic fracture mechanics according to the formula given in ref.27: $${K}_{IC}=\frac{{F}_{max}\,L}{B\,{W}^{3/2}}\,f\!\left(\frac{a}{W}\right)$$ (1) with $$f\!\left(\frac{a}{W}\right)=1.46+24.36\left(\frac{a}{W}\right)-47.21{\left(\frac{a}{W}\right)}^{2}+75.18{\left(\frac{a}{W}\right)}^{3}$$. In the equation, $${F}_{max}$$ denotes the maximum load applied, L the lever arm (distance between the notch and the position of loading), B the width of the cantilever, W the thickness of the cantilever, and a the initial crack length (measured from the post-mortem fracture cross-sections). Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. ## References 1. Ritchie, R. O. The conflicts between strength and toughness. Nat. Mater. 10, 817 (2011). 2. Zhang, S., Sun, D., Fu, Y. & Du, H. Toughening of hard nanostructural thin films: a critical review. Surf. Coat. Tech. 198, 2 (2005). 3. Hahn, R. et al. Superlattice effect for enhanced fracture toughness of hard coatings. Scripta Mater. 124, 67 (2016). 4. Knotek, O., Böhmer, M. & Leyendecker, T. On structure and properties of sputtered Ti and Al based hard compound films. J. Vac. Sci. Technol. A4, 2695 (1986). 5. Münz, W.-D. Titanium aluminum nitride films: A new alternative to TiN coatings. J. Vac. Sci. Technol. A4, 2717 (1986). 6. PalDey, S. & Deevi, S. C. Single layer and multilayer wear resistant coatings of (Ti,Al)N: a review. Mater. Sci. Eng. A342, 58 (2003). 7. McIntyre, D., Greene, J. E., Håkansson, G., Sundgren, J.-E. & Münz, W.-D. Oxidation of metastable single-phase polycrystalline Ti0.5Al0.5N films: Kinetics and mechanisms. J. Appl. Phys. 67, 1542 (1990). 8.
Hörling, A., Hultman, L., Odén, M., Sjölén, J. & Karlsson, L. Thermal stability of arc evaporated high aluminum-content Ti1−xAlxN thin films. J. Vac. Sci. Technol. A20, 1815 (2002). 9. Mayrhofer, P. H. et al. Self-organized nanostructures in the Ti–Al–N system. Appl. Phys. Lett. 83, 2049 (2003). 10. Alling, B. et al. Mixing and decomposition thermodynamics of c-Ti1−xAlxN from first-principles calculations. Phys. Rev. B 75, 045123 (2007). 11. Rachbauer, R., Stergar, E., Massl, S., Moser, M. & Mayrhofer, P. H. Three-dimensional atom probe investigations of Ti–Al–N thin films. Scripta Mater. 61, 725 (2009). 12. Kelly, P. M. & Rose, L. R. F. The martensitic transformation in ceramics – its role in transformation toughening. Prog. Mater. Sci. 47, 463 (2002). 13. Yalamanchili, K. et al. Tuning hardness and fracture resistance of ZrN/Zr0.63Al0.37N nanoscale multilayers by stress-induced transformation toughening. Acta Mater. 89, 22 (2015). 14. Zhang, Z. et al. Dislocation mechanisms and 3D twin architectures generate exceptional strength-ductility-toughness combination in CrCoNi medium-entropy alloy. Nat. Commun. 8, 14390 (2017). 15. Riedl, H. et al. Influence of oxygen impurities on growth morphology, structure and mechanical properties of Ti–Al–N thin films. Thin Solid Films 603, 39 (2016). 16. Mayrhofer, P. H., Music, D. & Schneider, J. M. Influence of the Al distribution on the structure, elastic properties, and phase stability of supersaturated Ti1−xAlxN. J. Appl. Phys. 100, 094906 (2006). 17. Bartosik, M. et al. Lateral gradients of phases, residual stress and hardness in a laser heated Ti0.52Al0.48N coating on hard metal. Surf. Coat. Tech. 206, 4502 (2012). 18. Bartosik, M. et al. Thermal expansion of rock-salt cubic AlN. Appl. Phys. Lett. 107, 071602 (2016). 19. Freund, L. B. & Suresh, S. Thin Film Materials: Stress, Defect Formation, and Surface Evolution. Cambridge University Press, Cambridge, United Kingdom (2003). 20. Bartosik, M. et al. Thermal expansion of Ti-Al-N and Cr-Al-N coatings. Scripta Mater. 127, 182 (2017). 21. Rachbauer, R. et al. Decomposition pathways in age hardening of Ti-Al-N films. J. Appl. Phys. 110, 023515 (2011). 22. Musil, J. Advanced Hard Coatings with Enhanced Toughness and Resistance to Cracking, in: Zhang, S. (Ed.), Thin Films and Coatings: Toughening and Toughness Characterization, CRC Press (Taylor & Francis Group), Boca Raton, 377–464. 23. Leyland, A. & Matthews, A. On the significance of the H/E ratio in wear control: a nanocomposite coating approach to optimised tribological behaviour. Wear 246, 1 (2000). 24. Leyland, A. & Matthews, A. Design criteria for wear-resistant nanostructured and glassy-metal coatings. Surf. Coat. Tech. 177, 317 (2004). 25. Musil, J. & Jirout, M. Toughness of hard nanostructured ceramic thin films. Surf. Coat. Tech. 201, 5148 (2007). 26. Matthews, A. & Leyland, A. Materials Related Aspects of Nanostructured Tribological Coatings, SVC Bulletin, Spring 40 (2009). 27. Matoy, K. et al. A comparative micro-cantilever study of the mechanical behavior of silicon based passivation films. Thin Solid Films 518, 247 (2009). 28. Johnson, L. J. S., Thuvander, M., Stiller, K., Odén, M. & Hultman, L. Spinodal decomposition of Ti0.33Al0.67N thin films studied by atom probe tomography. Thin Solid Films 520, 4362 (2012). 29. Odén, M. et al. In situ small-angle x-ray scattering study of nanostructure evolution during decomposition of arc evaporated TiAlN coatings.
Appl. Phys. Lett. 94, 053114 (2009). 30. Knutsson, A. et al. Microstructure evolution during the isostructural decomposition of TiAlN—A combined in-situ small angle x-ray scattering and phase field study. J. Appl. Phys. 113, 213518 (2013). 31. Cahn, J. W. Hardening by spinodal decomposition. Acta Metall. 11, 1275 (1963). 32. Tasnádi, F. et al. Significant elastic anisotropy in Ti1−xAlxN alloys. Appl. Phys. Lett. 97, 231902 (2010). 33. Zhang, Z. et al. Superlattice-induced oscillations of interplanar distances and strain effects in the CrN/AlN system. Phys. Rev. B 95, 155305 (2017). 34. Gu, X., Zhang, Z., Bartosik, M., Mayrhofer, P. H. & Duan, H. P. Dislocation densities and alternating strain fields in CrN/AlN nanolayers. Thin Solid Films 638, 189 (2017). 35. Watanabe, T. Grain boundary design for the control of intergranular fracture. Mater. Sci. Forum 46, 25 (1989). 36. Li, Z. et al. Designing superhard, self-toughening CrAlN coatings through grain boundary engineering. Acta Mater. 60, 5735 (2012). 37. Rogström, L. et al. Strain evolution during spinodal decomposition of TiAlN thin films. Thin Solid Films 520, 5542 (2012). 38. Mayrhofer, P. H., Rachbauer, R., Holec, D., Rovere, F. & Schneider, J. M. Protective Transition Metal Nitride Coatings, in: S. Hashmi (Ed.), Comprehensive Materials Processing, Elsevier, 2014, 355–388. 39. Chen, L., Holec, D., Du, Y. & Mayrhofer, P. H. Influence of Zr on structure, mechanical and thermal properties of Ti–Al–N. Thin Solid Films 519, 5503 (2011). 40. Rachbauer, R., Holec, D. & Mayrhofer, P. H. Increased thermal stability of Ti–Al–N thin films by Ta alloying. Surf. Coat. Tech. 211, 98 (2012). 41. Rachbauer, R., Blutmager, A., Holec, D. & Mayrhofer, P. H. Effect of Hf on structure and age hardening of Ti–Al-N thin films. Surf. Coat. Tech. 206, 2667 (2012). 42. Sangiovanni, D. G., Chirita, V. & Hultman, L. Toughness enhancement in TiAlN-based quarternary alloys. Thin Solid Films 520, 4080 (2012). 43. Sangiovanni, D. G., Hultman, L., Chirita, V., Petrov, I. & Greene, J. E. Effects of phase stability, lattice ordering, and electron density on plastic deformation in cubic TiWN pseudobinary transition-metal nitride alloys. Acta Mater. 103, 823 (2016). 44. Oliver, W. C. & Pharr, G. M. An improved technique for determining hardness and elastic modulus using load and displacement sensing indentation experiments. J. Mater. Res. 7, 1564 (1992). 45. Brinckmann, S., Kirchlechner, C. & Dehm, G. Stress intensity factor dependence on anisotropy and geometry during micro-fracture experiments. Scripta Mater. 127, 76 (2017). ## Acknowledgements The financial support by the START Program (Y371) of the Austrian Science Fund (FWF) is highly acknowledged. The micromechanical experiments and XRD investigations were carried out at the facilities USTEM and XRC of TU Wien, Austria. We thank the Institute for Mechanics of Materials and Structures (TU Wien) for providing the PicoIndenter PI85. ## Author information ### Affiliations 1. Institute of Materials Science and Technology, TU Wien, A-1060, Vienna, Austria: M. Bartosik, C. Rumeau, R. Hahn & P. H. Mayrhofer 2. Erich Schmid Institute of Materials Science, Austrian Academy of Sciences, A-8700, Leoben, Austria: Z. L. Zhang ### Contributions M.B. designed the research, contributed to all experiments, and prepared the manuscript. C.R. primarily carried out the film synthesis & characterization, and the FIB cantilever preparation. R.H.
assisted in the micromechanical experiments. Z.L.Z. performed the HRTEM investigations and wrote the TEM part. P.H.M. was involved in all discussions and contributed to the text formulation. ### Competing Interests The authors declare that they have no competing interests. ### Corresponding author Correspondence to M. Bartosik.
2018-10-15 10:30:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7183922529220581, "perplexity": 6618.4896730472965}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583509170.2/warc/CC-MAIN-20181015100606-20181015122106-00270.warc.gz"}
https://exeley.com/in_jour_smart_sensing_and_intelligent_systems/doi/10.21307/ijssis-2021-013
FBG sensors for seismic control and detection in extradosed bridges #### International Journal on Smart Sensing and Intelligent Systems Exeley Inc. (New York) Subject: Computational Science & Engineering, Engineering, Electrical & Electronic eISSN: 1178-5608 VOLUME 14, ISSUE 1 (Feb 2021) ### FBG sensors for seismic control and detection in extradosed bridges Citation Information: International Journal on Smart Sensing and Intelligent Systems. Volume 14, Issue 1, Pages 1-13, DOI: https://doi.org/10.21307/ijssis-2021-013 Received Date: 18-March-2019 / Published Online: 08-July-2021 ### ARTICLE #### ABSTRACT A robust fiber Bragg grating (FBG) sensor network for civil engineering structures is presented for real-time monitoring of deviations caused by seismic effects. The network is based on FBG sensors; its base element is a special type of chirped FBG, which is validated. The developed network is applied to one of the two concrete, extradosed-type towers of the Rades-La Goulette Bridge in Tunisia, which stands in an aggressive environment, to enhance the installed conventional structural health monitoring system (SHMS). Specifically, the influence of seismic parameters on the tower tilt is calculated. The test procedure and the obtained results are discussed. ## Introduction Structural health monitoring (SHM) has become increasingly valuable in recent years and is starting to be widely used in the field of structural engineering applications (Rodrigues et al., 2010). However, the need for precise, high-resolution, real-time measurements in monitoring applications pushes manufacturers to look for new technologies. One such technology is the fiber Bragg grating (Sun et al., 2007; Vorathin et al., 2019). Implementing this technology at the design stage of civil engineering structures is good practice for the owner of the works, as it provides detailed, real-time knowledge of the behavior of these structures from the start of their service. FBG technology has made enormous progress toward meeting the requirements of industrial applications, thanks to advantages such as high resolution and precision, high sensitivity, immunity to electrical and magnetic interference (Shen and Shen, 2008), and multiplexing capability (Kang et al., 2005). It combines optical sensing with optical communication (Li et al., 2011). With proper installation, FBG sensors can readily measure temperature, strain (Kerrouche et al., 2009), pressure, frequency, and chemical, biological, biomedical, rate, and flow quantities, but temperature and strain remain the basis of any FBG sensor. However, the uniform FBG design does not remain stable over time or under elevated stress, which can lead to false results. To overcome this difficulty, we proposed and developed a new FBG design rather than the uniform type (Lu et al., 2008; Palaniappan et al., 2007): a special chirped FBG. In the first part of this work, we present the new design of the FBG sensor, which is validated.
It gives a linear response to strain and/or temperature, with a sensitivity acceptable in comparison with recent publications in this field. This element is the base element of the robust FBG sensor network that is applied, as a demonstrative example, to a civil engineering structure in the second part of this work as an SHMS. The test structure is one of the two towers of the bridge body of Rades-La Goulette in Tunisia. This work is the first of its kind in Tunisia. The developed network will be used as a small system for real-time SHM applications. And so, the ultimate goal of this research is to improve on the role of the conventional monitoring system that is already installed on this engineering structure (Kang et al., 2005; Kerrouche et al., 2009; Li et al., 2011; Lu et al., 2008; Palaniappan et al., 2007; Piot, 2009; Rodrigues et al., 2010; Shen and Shen, 2008; Sun et al., 2007; Vorathin et al., 2019), and to make detection more selective and precise. The proposed system has been tested under a particular condition in which the tower is stressed: the seismic case. For the first time, the seismic effect on a civil engineering structure body is calculated using FBG technology, and specifically using our proposed SHMS, since our location, the Mediterranean, is one of the regions subject to earthquake effects, and the risk of their occurrence increases from one year to another in coastal areas. Therefore, the need for such a precise structural health monitoring system (SHMS) in all engineering structures like a bridge is mandatory. This small SHMS allows us to assess, continuously, the serviceability of the tower once the bridge is completely installed and during its life, by quantifying the normal and exceptional vertical or horizontal tilt sustained by the structure body during an earthquake. This proposed SHMS is also able to indicate the presence and the degree of such an effect. On the other hand, the first bridge that used FBG technology for its structural monitoring was the Beddington Trail in Canada (Raikar et al., 2011), where the manufacturers used 20 FBG sensors to measure temperatures and strains. Therefore, our purpose is to validate an SHMS based on FBG technology in engineering structures like bridges, stadiums, and buildings, and the test procedure and the results obtained are discussed in detail. The error range found with this technology does not exceed 10 to 12. In addition, our small SHMS allows us to substantially reduce the cost of monitoring and maintenance. On the other hand, in such structural engineering applications, the careful study of the systems used and of the FBG sensor locations plays an essential role in achieving the desired monitoring efficiency. ## Special chirped FBG sensor By definition, a Bragg grating is a spectral filter that reflects part of the incident signal and notches it out of the transmitted signal, see Figure 1. The central wavelength of this reflected and notched part is (Kang et al., 2005; Kerrouche et al., 2009; Li et al., 2011; Lu et al., 2008; Palaniappan et al., 2007; Piot, 2009; Raikar et al., 2011; Rodrigues et al., 2010; Shen and Shen, 2008; Sun et al., 2007; Vorathin et al., 2019): $$\lambda_B = 2\,n_{eff}\,\Lambda(z)$$ (1) where λB is the central wavelength of the spectral signal reflected and notched by the Bragg grating, neff is the effective refractive index of the fiber core, and Λ(z) is the grating period. The latter is a constant in the uniform case.
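As a quick numerical illustration of Equation (1), here is a sketch using the representative design values quoted in the next section (neff = 1.456, Λ ≈ 0.53 µm):

$$\lambda_B = 2 \times 1.456 \times 0.53\ \mu\text{m} \approx 1.543\ \mu\text{m},$$

i.e., a resonance within the telecom C-band, which is one reason FBG sensing can share low-cost components with optical communication.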
In this work we have developed a new type, a special chirped Bragg grating, where the grating period is defined by: $$\Lambda(z)=\begin{cases}\Lambda_0\,(1-c_p z) & \text{for } [0\ldots z/2]\\ \Lambda_0'\,(1+c_p z) & \text{for } [z/2\ldots z]\end{cases}$$ (2) where Λ0 and Λ0′ are the nominal grating periods and cp is the chirp coefficient, defined in nm/cm. On the other hand, all the physical parameters of an FBG can be varied: the profile of the effective refractive index, the profile of the effective refractive index modulation, the length, the apodization, the chirp coefficient, and whether the grating has a counterpropagating or copropagating coupling at a Bragg wavelength (Hill et al., 1997; Suresh and Tjin, 2005). Our proposed FBG design shows good performance. Figure 2 presents its reflected signal, obtained without the need for an apodization function. The reflected signal is calculated on the basis of a matrix solution with the following parameters: Λ0 = 0.53 µm, Λ0′ = 0.5284 µm, cp = 1.5 nm/cm, neff = 1.456, a length of 4 mm, and an average refractive index modulation equal to 2.5 × 10−4. And so we have λB = 1.5424 µm. From Figure 2, we obtain the same shape of reflectivity as that of the conventional FBG designs described in the research literature (Kang et al., 2005; Li et al., 2011). We also obtain a reflectivity on the order of 100%. In addition, our design is robust: random fluctuations of the grating period Λ(z), and therefore automatically of the effective refractive index, produced by different noise sources within a range of ~±0.03% of Λ(z), which is a high value, do not affect the structure of the sensor design in any way. If this value is exceeded, however, the fluctuations would destroy the periodic structure and divide the whole grating into many pieces, which results in interference among those pieces. So we must be sure of the fluctuation values in the manufacturing process; our sensor design nevertheless gives a large security margin there. On the other hand, from the reflection spectrum of our FBG sensor, the FWHM has a value of 0.226 nm, which is a very small value and increases the multiplexing capacity. This proposed FBG is thus suitable for optical communication, with great long-term robustness. ##### Figure 1: Illustration of the functional principle of FBG sensors. ##### Figure 2: Reflective spectrum of the FBG sensor. On the other hand, we can characterize the robustness of our design by calculating the coupling coefficient, κ. Based on several references (Aubin, 2009; Hill et al., 1997), we can express this coefficient as: $$\kappa(z)=\frac{\pi\,\overline{\delta n}_{eff}(z)}{\lambda_{B,z}}\,\nu$$ (3) where λB,z is the central wavelength of each uniform Bragg section in the matrix solution, ν is a coherence term taken equal to 1, and δn̄eff(z) describes the profile of the effective refractive index modulation, likewise for each uniform Bragg section. In our case, we use an SMF germanium-doped optical fiber. The radius of the fiber core is equal to 4.6 µm, that of the optical cladding is of the order of 62.5 µm, and that of the mechanical protective cladding is 125 µm. This type of fiber and the parameters mentioned above are used in the SHMS that will be presented in the next sections of this paper.
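As a rough numerical illustration of Equation (3), here is a sketch using the design values above (δn̄eff = 2.5 × 10−4, ν = 1, λB = 1.5424 µm, grating length L = 4 mm); the peak-reflectivity estimate Rmax = tanh²(κL) is the standard uniform-grating result from coupled-mode theory and only approximates our chirped design:

$$\kappa \approx \frac{\pi \times 2.5\times 10^{-4}}{1.5424\ \mu\text{m}} \approx 5.1\times 10^{-4}\ \mu\text{m}^{-1}, \qquad \kappa L \approx 2.0,$$

$$R_{max} = \tanh^{2}(\kappa L) \approx 0.93,$$

consistent with the near-total reflectivity visible in Figure 2.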
In this part, we calculate the strain and temperature sensitivities of our FBG, which acts as a sensor for monitoring applications. Recall that these sensitivities lead to a shift in the Bragg wavelength, which is the basic function of an FBG sensor, as indicated by the following expression (Kersey et al., 1997): $$\Delta\lambda_B = 2\,n_{eff}\,\Lambda_z\left(\left(1-\left[\frac{n_{eff}^{2}}{2}\left(p_{12}-\nu\left[p_{11}+p_{12}\right]\right)\right]\right)\varepsilon+\left[\alpha+\frac{1}{n_{eff}}\frac{dn_{eff}}{dT}\right]\Delta T\right)$$ (4) where pij are the Pockels coefficients, ν is the Poisson ratio, ε is the applied strain, α is the linear thermal expansion coefficient of the fiber, and ΔT is the temperature change. In addition, we can write the last expression more simply as (Vorathin et al., 2019): $$\frac{\Delta\lambda_B}{\lambda_B}=(\alpha+\xi)\,\Delta T+(1-\rho)\,\varepsilon$$ (5) where α is again the linear thermal expansion coefficient of the fiber, ξ is the thermo-optic coefficient, and $$\rho=\frac{n_{eff}^{2}}{2}\left(p_{12}-\nu\left[p_{11}+p_{12}\right]\right)$$ is the effective photo-elastic coefficient (Kersey et al., 1997; Shen and Shen, 2008). The numerical value of the latter varies between about 0.22 (Aubin, 2009) and 0.26 (Li et al., 2011) according to the material of the fiber. The sensitivities found are 1 pm/µε and 10 pm/°C, respectively, for strain and temperature, as shown in Figures 3 and 4. ##### Figure 3: Strain performance of the ICFBG sensor. ##### Figure 4: Temperature performance of the ICFBG sensor. All the types of FBG, such as apodized, chirped, uniform, etc., perform well and make accurate measurements, but our FBG is more robust and its lifetime is longer, even under continuous stress. The obtained results confirm its good performance and accurate measurement even under random fluctuations. This new sensor is suitable for both optical communication and optical sensing in health monitoring applications, due to the acceptable sensitivities to strain and temperature obtained and to the small FWHM. On the other hand, in our SHMS installation, presented in the next section, the basic element is our FBG with all the parameters defined above. In addition, we take the FWHM as 0.3 nm instead of the real value of 0.226 nm, as a safety margin when multiplexing is applied. ## FBG sensors as SHMS in bridges As we have mentioned before, the first bridge that used FBG technology for its structural monitoring was the Beddington Trail Bridge in Canada. In addition, this was the first extradosed concrete road bridge built in Canada. It is also considered the first smart highway bridge in Canada, since it uses a smart SHM system. It began service on November 5, 1993. It uses 20 sensors for measuring strain and temperature along the whole of its length. An image of this bridge, which has an average length of 21.03 m, is shown in Figure 5. In addition, Table 1 gives other examples of bridges that use FBG technology as the basis of their SHM systems. This table is compiled in agreement with the work of Rodrigues et al. (2010). ##### Table 1. Examples of bridges that use FBG technology as the basis of their SHMS. ##### Figure 5: The Beddington Trail Bridge in Canada (Khalil et al., 2016). On the other hand, Table 2 gives some examples of bridges that use fiber optic technology, but not FBG technology, as the basis of their SHM systems. This table is also compiled in agreement with the work of Rodrigues et al. (2010). It is very clear from the two tables that the number of sensors in bridges using FBG is much larger than with the other technologies. This indicates that the technology is booming and very effective, and it keeps growing from one year to another. On the other hand, all the technologies presented in this section, including the FBG, are based on optical fibers.
Thus, the measured quantities for these different SHMS technologies are based on different properties of the light propagating in the fiber, including phase, intensity, wavelength, and polarization. As an example, interferometric sensors and low-coherence sensors detect the light phase, while our FBG sensor detects the wavelength shift induced by the external factor. In addition, FBG technology is the most effective and can be applied in structural supervision, specifically in civil structures, i.e. bridges, as we have seen before. Along the same line, in the following section we describe the Rades-La Goulette Bridge in Tunisia, which, with its SHM system, is the first of its kind in Africa. ## Rades-La Goulette bridge in Tunisia The Rades-La Goulette Bridge is the first of its kind in Tunisia, and even in Africa. It belongs to the family of concrete extradosed stay-cable bridges, in which the deck is suspended by cables from two pylons. The structure was opened to traffic on March 21, 2009, and the final construction cost was approximately \$100 million. As for its location, the bridge spans the channel of Lake Tunis, connecting the area of Rades to the area of La Goulette. The main bridge has a length of 260 meters, a width of 23.5 meters, and a height of 20 meters. It is located in an aggressive coastal environment, see Figure 6. The opening was delayed by two years after the initial completion date, the reason certainly being the nature of the soil, which is very fragile and aggressive. A general picture of the bridge is shown in Figure 7. ##### Figure 6: Location of Rades-La Goulette Bridge in Tunisia. ##### Figure 7: General illustration of the Rades-La Goulette Bridge and its two towers: (a) demonstrative model, (b) and (c) realized model. The semi-maritime environment of the bridge and its geotechnical setting cause rigorous aggression on its construction. All these factors markedly accelerate aging. That is why Tunisia decided to equip it with an SHM system. On the other hand, the manufacturers took the seismic working conditions into account and blocked its lateral movement. For the SHMS, the manufacturers installed 55 sensors and a dozen cameras throughout the main bridge. These devices are connected to a control room, where they allow the operators to trigger alerts and alarms in case anomalous data are present. All the devices of the SHMS are installed on the structure or in different spans of the structure under construction. Table 3 summarizes the various sensors installed in the bridge. This table is similar to that presented in Piot (2009), where the total number of sensors installed is 55. Beyond the measured parameters, we can classify the sensors according to their location, where we find two types. The first is installed on the bridge body, and the second is installed inside the bridge body. The first type is intended to measure the phenomena that affect the bridge body, such as wind and ambient temperature. The second type gives the responses of the bridge body to certain parameters such as loads, accelerations, strains, inclinations, etc. Similarly, from the table, the sensors can also be classified according to static or dynamic measurement. For the first type, the data are of the slow kind.
They are acquired with periods that exceed 10 min, as for the temperature and pylon inclinometer measurements. For the second type, the data are of the fast kind, acquired with periods of less than 1 sec, as for the wind and acceleration measurements. ##### Table 3. The various sensors installed in the Rades-La Goulette Bridge with their measurement parameters. On the other hand, according to Table 3, we can identify the installation places of all the sensors: • 2 anemometer sensors: they are of the ultrasound type and are located on the deck to measure the direction and speed of the wind. These two parameters can influence and bias the bridge body. • 12 load cell sensors: they are installed in the stay cables to monitor the load that they carry. • 6 displacement sensors: they are installed in the end pieces of the deck. They give an idea of whether or not the support devices work properly and the thermomechanical behavior is in order. • 5 low-frequency accelerometer sensors: they are installed in the deck to check its spectral response. • 4 inclinometer sensors: they are located in the two towers. These sensors are used to calculate the rotations of these towers about two axes. • 7 vibrating wire strain sensors: they are installed in the concrete deck to measure the evolution of the stresses that exist there. • 19 temperature sensors: 18 sensors provide thermal information throughout the bridge body; they are installed in the concrete, the stay cables, and the support struts, in addition to 1 sensor for the ambient temperature, which is installed on the bridge body. On the other hand, for each sensor the suppliers define two thresholds between which the reading can vary without triggering an alert. These two thresholds are defined by the providers and can be set from the results obtained in test experiments with a learning period. They represent the safety margin of a sensor. In case one of the two thresholds is exceeded, alerts are instantly displayed in the general interface of the control software, and the control software can then send alerts or reports to the persons responsible. ## Result of integration of special chirped FBG sensors network on concrete extradosed tower under seismic effects ### Definition In the conventional SHMS installed on the Rades-La Goulette Bridge, the rotation of the two towers is monitored along two axes with 4 inclinometer sensors, 2 for each tower. These sensors provide highly accurate results. However, we are interested in the seismic effects on the inclination of the towers with FBG technology as the SHM system. For the first time, to our knowledge, this effect is calculated in a civil engineering structure body using FBG technology, and specifically using our SHMS, which will be shown in the next section, because our location, the Mediterranean, is one of the places subject to earthquake effects, and these risks increase from one year to the next in coastal areas. Therefore, such a precise SHMS is required in all structures like bridges. Yet, despite the large influence of earthquakes on structures, the manufacturers did not include the measurement and detection of earthquake effects during the construction of the Rades-La Goulette bridge structure. On the other hand, there are several types of towers. The type is selected depending on the application and on the location and environment.
## Result of integration of a special chirped FBG sensor network on a concrete extradosed tower under seismic effects

### Definition

In the conventional SHMS installed on the Rades-La Goulette bridge, the rotation of the two towers is monitored about two axes with 4 inclinometer sensors, 2 per tower. These sensors provide highly accurate results. Here, however, we are interested in the seismic effects on the inclination of the towers, with FBG technology as the SHM system. To our knowledge, this is the first time this effect is computed in civil engineering, inside the structure body, using FBGs, and specifically using our SHMS, presented in the next section. Our region, the Mediterranean, is subject to earthquakes, and these risks increase from year to year in coastal areas, so a precise SHMS of this kind is required in structures such as bridges. Yet despite the large influence of earthquakes on structures, the builders did not include the measurement and detection of earthquake effects during the construction of the Rades-La Goulette bridge structure. On the other hand, there are several types of towers; the type is selected depending on the application, the location, and the environment. For example, there are extradosed towers (Piot, 2009), concrete towers (Rodrigues et al., 2010), steel towers, etc. Figure 8 gives an illustration of towers in marine and terrestrial environments. As shown, a sturdy tower must extend an adequate distance below sea level or below ground level. The tower studied here is one of the two towers that constitute the body of the Rades-La Goulette Bridge, of the extradosed type. It is 43 meters long above sea level and is distinguished by its triangular shape rising from the bridge deck, see Figure 7. As we cannot work with the real tower, we use an imitation model for the simulation.

##### Figure 8: Illustration of towers in (a) marine (www.dtrf.setra.fr) and (b) terrestrial environment (www.bv.transports.gouv.qc.ca).

To fully monitor deflections from the vertical position of the tower, we propose to use three sensors, named S1, S2, and S3, of the FBG type presented in the first section of this paper. Only the strain sensitivity, which is 1 pm/με, enters the computation here. In addition, for reasons of cost and multiplexing, we use three sensors with the same temperature and strain sensitivities but with different Bragg wavelengths. This network of three FBG sensors installed in a single optical fiber is our SHMS for seismic effects, see Figure 9; the fiber parameters are the same as in the first section. For our test, the three central wavelengths are 1.5401 µm for S1, 1.5424 µm for S2, and 1.5449 µm for S3. These values were verified by simulation and with an optical spectrum analyzer (OSA). To obtain the three different central wavelengths, the optical fiber is slightly stretched on both sides of the grating location during the manufacturing of each FBG sensor, according to the desired wavelength, the available wavelength margin, and the multiplexing scheme. In addition, for the three sensors the FWHM is taken equal to 0.3 nm instead of the real value of 0.226 nm; this value provides a margin for safety and for multiplexing, should the number of sensors installed in the optical fiber be increased further. The positions of the FBG sensors, assumed fixed at the time of construction of the tower, are as follows:

• The sensor S3 is installed at the bottom of the tower, i.e. 5 meters below sea level compared to the actual tower.

• The sensor S2 is installed near the middle of the tower, i.e. 21.5 meters above sea level compared to the actual tower.

• The sensor S1 is installed in the upper part of the tower, i.e. 42 m above sea level compared to the actual tower.

##### Figure 9: Three FBG sensors network multiplexed in a single optical fiber as SHMS for seismic effects.

Figures 9 and 10 illustrate the installation of the multiplexed FBG sensor network in the optical fiber and the installation of the fiber with its sensors in the tower. The tower stands in a coastal area, hence in a semi-marine environment, and to make it strong it must be set an adequate distance below sea level. It should also be mentioned that the total length of the tower has a large influence on the results of the sensors.
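As a quick plausibility check of the multiplexing margins just described, the following sketch verifies that the three channels cannot overlap even under a generous shift budget. The center wavelengths and FWHM are those given above; the 0.5 nm worst-case shift budget (about 500 με at 1 pm/με) is an assumption for illustration, as is the simple separation criterion.

```python
# Wavelength-division multiplexing margin check for the three-FBG network.
centers_nm = {"S1": 1540.1, "S2": 1542.4, "S3": 1544.9}  # Bragg wavelengths (from the text)
fwhm_nm = 0.3         # design FWHM adopted for multiplexing safety (from the text)
max_shift_nm = 0.5    # assumed worst-case Bragg shift budget per sensor

labels = sorted(centers_nm, key=centers_nm.get)
for a, b in zip(labels, labels[1:]):
    gap = centers_nm[b] - centers_nm[a]
    # Adjacent peaks stay resolvable if the channel gap exceeds one FWHM
    # plus the shift budgets of both neighbours.
    needed = fwhm_nm + 2 * max_shift_nm
    print(f"{a}-{b}: gap = {gap:.1f} nm, required > {needed:.1f} nm -> {gap > needed}")
```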
This small SHMS thus allows us to continuously assess the service capacity of the tower, and hence of the bridge, during its life, by quantifying the normal and exceptional vertical tilt sustained by the bridge body due to an earthquake.

##### Figure 10: Illustration of the SHMS installed in the tower to measure deflections from the vertical position due to an earthquake.

Regarding the operation of our SHMS: any effect due to the seismic parameters exerts a strain around the three FBGs constituting the system, so shifts of the three Bragg wavelengths are produced. Each shift depends on the power of the earthquake and on the position of the FBG sensor, and measuring the shift with the OSA quantifies the deviation angle and thus estimates the service status of the bridge. The following paragraphs describe our tower under seismic problems, which are increasingly relevant today, and especially in the future, because of environmental changes and the greenhouse effect. For the first time, the seismic effects are computed for civil engineering structures using FBG technology. In addition, to accomplish the control task we use, besides the OSA, a laser source, a coupler, a PC, and the optical fiber connections.

### Results under seismic effects

First, we define the various parameters of an earthquake:

• Focus or hypocenter: the fault zone where the rupture occurs. This zone is the source of the seismic waves.

• Epicenter: the area of the earth's surface located vertically above the hypocenter. Geologists indicate that the earthquake intensity is largest in this area.

• Magnitude: a measure of the energy of an earthquake, expressed in degrees. Today the energy of an earthquake is measured on the open-ended Richter scale, on which an increase of one degree corresponds to multiplying the released energy by about 30.

• Intensity: an approximate measure of the effects of an earthquake on the ground. It is usually measured on the European macroseismic scale (EMS), which comprises 12 degrees, where the first degree denotes an earthquake with imperceptible tremors and the last a total change of the landscape.

• Frequency: the measured frequency of an earthquake.

• Duration of vibration: the time between the onset and the end of an earthquake.

In general, the seismic frequency is quite variable, on the order of a few Hz. It depends mainly on the distance from the epicenter, the type of earthquake, and site effects, namely the amplification of the wave: bedrock vibrates at high frequency with low amplitude, whereas soil vibrates at low frequency but with high amplitude. In addition, when the seismic degree exceeds about 6° on the Richter scale, the history of earthquakes shows that bending or tilting of the tower is no longer the issue, since a large part of the structure is destroyed. For our application, we chose the parameters of the earthquake applied to our tower model as follows: 60 km between the seismic epicenter and the location of the tower, main frequencies of 5 to 25 Hz, and magnitudes from 0 to 4 degrees on the Richter scale. This last parameter quantitatively characterizes the energy released by the earthquake: on the Richter scale, each transition from one degree to the next higher degree multiplies the released energy by 30.
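To illustrate this scaling, here is a one-function computation of the energy ratio between two magnitudes. It keeps the factor of 30 per degree quoted in the text (standard seismology uses roughly $10^{1.5} \approx 31.6$).

```python
# Energy ratio between two Richter magnitudes, using the x30-per-degree rule above.
def energy_ratio(m_from: float, m_to: float, factor: float = 30.0) -> float:
    """Ratio of the seismic energy released at magnitude m_to versus m_from."""
    return factor ** (m_to - m_from)

print(energy_ratio(2, 4))  # a degree-4 event releases 900x the energy of a degree-2 event
```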
The duration of each test degree is chosen between 3 and 13 s. For an experimental imitation model at reduced scale, and in order to stay in line with an imitated reality and to apply the proper frequencies of the seismic problem, the model of the tower can be installed in a sand/clay mixture inside a fairly large water tank. The earthquake is produced by a vibrating membrane and a servo motor located under the epicenter point; the seismic waves and their frequencies, delivered to the servo motor and hence to the vibrating membrane, are provided by a low-frequency generator (GBF) running from 5 to 25 Hz. In this test, the multiplication of the released energy is realized through the servo motor characteristics.

Before examining the response of the FBG sensor network, we simulated the points where the three FBG sensors are installed, in order to calibrate the results. These points are defined as reference points. The responses of the tower at these points, as the seismic degree increases from 0° to 4° on the Richter scale, are shown in Figures 11-13. From the vertical deviation shown in these simulations, we automatically compute the tilt angle, as shown by the curves in Figure 14. Specifically, according to the curves in Figures 11-13, the three positions give three different deviation values, and these values increase with the earthquake degree. The point associated with sensor S1, installed in the upper part of the tower, gives the highest deviation value for each earthquake degree in comparison with the other two points, while the point associated with sensor S3, installed at the bottom of the tower, gives the lowest. From Figure 14, however, the three deflection-angle curves provided by the three points are nearly the same, which indicates that the results are good and our simulations agree. Looking closely, the three curves depart from one another as the earthquake degree increases further; this is explained by the presence of cracks, whose extent grows progressively with the earthquake degree. In the same context, we observe in Figure 14 that each time we pass from one degree to the next, the deviation and deflection-angle results become very large relative to the previous measurement.

##### Figure 11: Deviation response at the point where the sensor S1 is installed under seismic effects.

##### Figure 12: Deviation response at the point where the sensor S2 is installed under seismic effects.

##### Figure 13: Deviation response at the point where the sensor S3 is installed under seismic effects.

##### Figure 14: Deflection angle of the tower given by the three reference points where the sensors S1, S2, and S3 are installed, in terms of earthquake degree.

Now we consider the simulation results for our SHMS of three FBG sensors under seismic influence, presented in Figure 15. From the curve, which is the same for the three sensors, we note that the wavelength shift increases with the earthquake degree, and that this increase becomes larger each time the earthquake passes from one degree to the next: the shift is about 10 pm for the first degree, 30 pm for the second, 80 pm for the third, and 290 pm for the fourth.
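As an illustration of how these simulated points could be used in post-processing, the sketch below inverts the shift-versus-degree table by log-linear interpolation to estimate the felt degree from a measured Bragg shift. This is our own illustrative reading of the results, not a procedure from the paper; note that with the 1 pm/με sensitivity, the 290 pm shift corresponds to roughly 290 με of strain.

```python
import numpy as np

# Simulated Bragg shifts per earthquake degree, as reported in the text.
degrees = np.array([1.0, 2.0, 3.0, 4.0])
shifts_pm = np.array([10.0, 30.0, 80.0, 290.0])

def degree_from_shift(shift_pm: float) -> float:
    """Estimate the earthquake degree from a measured shift (in pm) by
    interpolating linearly in log(shift), since the shifts grow roughly
    geometrically with the degree."""
    return float(np.interp(np.log(shift_pm), np.log(shifts_pm), degrees))

print(degree_from_shift(80.0))   # -> 3.0 (reproduces a tabulated point)
print(degree_from_shift(150.0))  # -> about 3.5, between degrees 3 and 4
```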
This is expected: each step up the scale releases far more energy, so the effects grow accordingly. In addition, the curve shown in Figure 15 is similar in behavior to the curve shown in Figure 14.

##### Figure 15: Shifts of the wavelengths of the three FBG sensors constituting the SHMS for seismic effects.

In order to calibrate the results of our SHMS against the tilt angles given by the reference points, we obtain the curve shown in Figure 16. It is a straight line through the origin, of equation

$$D_{\text{Angle}} = 4.285\times 10^{-4}\, W_{\text{shift}} \tag{6}$$

##### Figure 16: The deflection angle of the tower as a function of the wavelength shift of the FBG sensors constituting the SHMS for seismic effects.

where $D_{\text{Angle}}$ is the tilt angle of the tower and $W_{\text{shift}}$ is the wavelength shift of any FBG sensor. With a spectrum analyzer of 1 pm resolution, the person responsible for the structure can thus notice an inclination of 0.00025° of the tower. On the other hand, to operate the FBG sensor network used as SHMS, a signal conditioner implementing this equation is necessary. Furthermore, we apply only low-degree earthquakes, which let our FBG sensors shift according to the realized strains; the tilts and bending of the tower then follow from the equation. It is also clear from the obtained results that as the seismic degree increases, the perceived shocks increase more and more and the tilt angle grows significantly. In addition, the cracks become bigger and their effect on the life of the tower becomes severe. Beyond 4° the tilt becomes very large, and beyond 6 to 7° the tower is destroyed to a large extent and the sensors give responses that are no longer interpretable. The obtained results are quite normal and consistent with the predicted results, but they are delivered very precisely by the FBG sensor network. We can say that this small system of three special FBG sensors works properly as an SHMS for measuring tilt caused by seismic effects, or by other effects of the same power. The results obtained in real time thus confirm that the FBG model is a smart sensor (Measures et al., 1994).

## Conclusion

Our goal was to validate a proposed SHMS for seismic effects in civil engineering structures such as bridges, stadiums, and buildings. The test procedure and the obtained results are discussed in detail. The studied structure is one of the two towers of the Rades-La Goulette Bridge, which stands in an aggressive coastal environment; this bridge is the first work of its kind in Tunisia and in Africa. The builders did not take into account the control and detection of seismic effects in the installed SHMS, although the seismic risk exists. The proposed SHMS is based on FBG technology, specifically a new FBG design that is efficient as a strain and temperature sensor and extends to the field of civil engineering. This paper therefore provides a perspective for the construction of new civil engineering structures in aggressive environments, based on the proposed FBG sensor technology as an SHMS, improving on the role of conventional monitoring systems, such as electrical monitoring, and guaranteeing more selective and precise detection.
The small proposed SHMS network of three FBG sensors is installed in the tower, and the tower is stressed with seismic effects, whose risk of occurrence, as we know, increases from year to year in coastal areas. The generated effects are mainly flexure, deviation, tilt, and cracks. These effects vary with the nature of the ground crossed by the seismic waves, the distance from the epicenter, the depth of the tower below sea level and its height above it, and the vulnerability of the tower's construction. We can also mention that the depth of the tower below sea level affects the obtained results but not the efficiency of the sensors used. We are interested mainly in the deviation of the tower from its vertical position. The obtained results come from an imitation model in which the FBG sensors let us observe the tilt effects of the seismic waves on the tower at each degree. They show high efficiency and thus allow real-time monitoring of the structure. These results are compatible with those expected, because our small SHMS is based on FBG sensors characterized by an error range not exceeding $10^{-12}$, although the imitation model does not take into account the exact topology between the seismic epicenter and the position of the tower, where the seismic wave may undergo slight amplification; likewise, the bedrock under the tower is not defined exactly. Similarly, our results agree with the seismic intensity scale used in Europe, the EMS. It is also important to know that an earthquake is often followed by aftershocks, which should be expected; aftershock magnitudes are generally lower than that of the initial earthquake. In this part we have not discussed the hydraulic system of a tower in a semi-maritime zone. Otherwise, by observing the effects of an earthquake with an FBG sensor network, we can characterize the severity of the earthquake at the surface. The proposed SHMS network could also be deployed over the whole body of the Rades-La Goulette Bridge, with a larger number of FBG sensors, to replace the 55 conventional sensors installed as the conventional SHMS and to ensure full monitoring coverage of the bridge: loads, strains, temperature, stay-cable extensometry, tilt, acceleration, force, etc. In that case, however, the tower tilt results would differ, because a new hydraulic system would be installed that respects the equilibrium and gives more rigidity to the tower. We could also apply this FBG network concept to compute the effects of tsunamis attacking coastal structures. Finally, the environment of our example is an earthquake zone in a coastal area, where soils are among those most subject to cyclic seismic vibrations. Builders should therefore follow the rules of earthquake-resistant construction, with solid foundations and chaining, since the date of the next earthquake is never known exactly.

## References

1. Aubin, S. 2009. Capteur de position innovants: application aux systèmes de transport intelligents dans le cadre d'un observatoire de trajectoires de véhicules. M.S. thesis, Toulouse University, France.
2. Hill, K. O. and Meltz, G. 1997. Fiber Bragg grating technology fundamentals and overview. Journal of Lightwave Technology 15(8): 1263–1276.
3. Kang, D. H., Park, S. O., Hong, S. S. and Kim, C. G. 2005.
The signal characteristics of reflected spectra of fiber Bragg grating sensors with strain gradients and grating lengths. NDT&E International 38: 712–718.
4. Kerrouche, A., Boyle, W. J. O., Sun, T. and Grattan, K. T. V. 2009. Design and in-the-field performance evaluation of a compact FBG sensor system for structural health monitoring applications. Sensors and Actuators A: Physical 151: 107–112.
5. Kersey, A. D., Davis, M. A., Patrick, H. J., LeBlanc, M., Koo, K. P., Askins, C. G., Putnam, M. A. and Friebele, E. J. 1997. Fiber grating sensors. Journal of Lightwave Technology 15(8): 1442–1463.
6. Khalil, A. H., Heiza, K. M. and El Nawawy, O. A. 2016. State of the art review on bridges structural health monitoring applications and future trends. 11th International Conference on Civil and Architecture Engineering, April 19–21.
7. Li, L., Tong, X. L., Zhou, C. M., Wen, H. Q., Lv, D. J., Ling, K. and Wen, C. S. 2011. Integration of miniature Fabry-Perot fiber optic sensor with FBG for the measurement of temperature and strain. Optics Communications 284(6): 1612–1615.
8. Lu, C., Cui, J. and Cui, Y. 2008. Reflection spectra of fiber Bragg gratings with random fluctuations. Optical Fiber Technology 14: 97–101.
9. Measures, R. M., Alavie, T., Maaskant, R., Karr, S., Huang, S., Grant, L., Guha-Thakurta, A., Tadros, G. and Rizkalla, S. 1994. Fiber optic sensing for bridges. 4th International Conference on Short & Medium Bridges, Halifax, pp. 8–11.
10. Palaniappan, J., Wang, H., Ogin, S. L., Thorne, A. M., Reed, G. T., Crocombe, A. D., Rech, Y. and Tjin, S. C. 2007. Changes in the reflected spectra of embedded chirped fibre Bragg gratings used to monitor disbonding in bonded composite joints. Composites Science and Technology 67: 2847–2853.
11. Piot, S. 2009. Une surveillance d'ouvrages en environnement côtier: exemple du pont de Radès – La Goulette à Tunis. Coastal and Maritime Mediterranean Conference, Edition 1, Hammamet, Tunisia, pp. 295–300.
12. Raikar, U. S., Lalasangi, A. S., Kulkarni, V. K. and Akki, J. F. 2011. Concentration and refractive index sensor for methanol using short period grating fiber. Optik 122: 89–91.
13. Rodrigues, C., Félix, C., Lage, A. and Figueiras, J. 2010. Development of a long-term monitoring system based on FBG sensors applied to concrete bridges. Engineering Structures 32(8): 1993–2002.
14. Shen, J. and Shen, Y. 2008. Investigation of the structural and spectral characteristics of deposited FBG stacks at elevated temperature. Sensors and Actuators A: Physical 147: 99–103.
15. Sun, L., Li, H.-N., Ren, L. and Jin, Q. 2007. Dynamic response measurement of offshore platform model by FBG sensors. Sensors and Actuators A: Physical 136: 572–579.
16. Suresh, R. and Tjin, S. C. 2005. Effects of dimensional and material parameters and cross-coupling on FBG based shear force sensor. Sensors and Actuators A: Physical 120: 26–36.
17. Vorathin, E., Hafizi, Z. M., Aizzuddin, A. M., Zaini, M. K. A. and Lim, K. S. 2019. Temperature independent chirped FBG pressure transducer with high sensitivity. Optics and Lasers in Engineering 117: 49–56.

### FIGURES & TABLES

Figure 1: Illustration of the functional principle of FBG sensors.
Figure 2: Reflective spectrum of FBG sensor.
Figure 3: Strain performance of ICFBG sensor.
Figure 4: Temperature performance of ICFBG sensor.
Figure 5: The Beddington Trail Bridge in Canada (Khalil et al., 2016).
Figure 6: Location of Rades-La Goulette Bridge in Tunisia.
Figure 7: General illustration of the Rades-La Goulette Bridge and its two towers: (a) demonstrative model, (b) and (c) realized model.
Figure 8: Illustration of towers in (a) marine (www.dtrf.setra.fr) and (b) terrestrial environment (www.bv.transports.gouv.qc.ca).
Figure 9: Three FBG sensors network multiplexed in a single optical fiber as SHMS for seismic effects.
Figure 10: Illustration of the SHMS installed in the tower to measure deflections from the vertical position due to an earthquake.
Figure 11: Deviation response at the point where the sensor S1 is installed under seismic effects.
Figure 12: Deviation response at the point where the sensor S2 is installed under seismic effects.
Figure 13: Deviation response at the point where the sensor S3 is installed under seismic effects.
Figure 14: Deflection angle of the tower given by the three reference points where the sensors S1, S2, and S3 are installed, in terms of earthquake degree.
Figure 15: Shifts of the wavelengths of the three FBG sensors constituting the SHMS for seismic effects.
Figure 16: The deflection angle of the tower as a function of the wavelength shift of the FBG sensors constituting the SHMS for seismic effects.
2022-05-20 18:12:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 8, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5884630084037781, "perplexity": 1781.7993461181452}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662533972.17/warc/CC-MAIN-20220520160139-20220520190139-00408.warc.gz"}
https://stats.stackexchange.com/questions/373783/find-umvue-of-theta-where-f-xx-mid-theta-theta1-x%e2%88%921-thetai-0
# Find UMVUE of $\theta$ where $f_X(x\mid\theta) =\theta(1 +x)^{−(1+\theta)}I_{(0,\infty)}(x)$

As a slight modification of my previous problem: Let $X_1, X_2, \ldots, X_n$ be iid random variables having pdf $$f_X(x\mid\theta) =\theta(1 +x)^{−(1+\theta)}I_{(0,\infty)}(x)$$ where $\theta >0$. Give the UMVUE of $\theta$, the Cramer-Rao Lower Bound (CRLB) for unbiased estimators of $\theta$, and compute the variance of the UMVUE of $\theta$.

I have that $f_X(x\mid\theta)$ is a full one-parameter exponential family with $h(x)=I_{(0,\infty)}(x)$, $c(\theta)=\theta$, $w(\theta)=-(1+\theta)$, $t(x)=\log(1+x)$.

Since $w'(\theta)=-1$ is nonzero on $\Theta$, the CRLB result applies. We have $$\log f_X(x\mid\theta)=\log(\theta)-(1+\theta)\cdot\log(1+x)$$ $$\frac{\partial}{\partial \theta}\log f_X(x\mid\theta)=\frac{1}{\theta}-\log(1+x)$$ $$\frac{\partial^2}{\partial \theta^2}\log f_X(x\mid\theta)=-\frac{1}{\theta^2}$$ so $$I_1(\theta)=-\mathsf E\left(-\frac{1}{\theta^2}\right)=\frac{1}{\theta^2}$$ and the CRLB for unbiased estimators of $\tau(\theta)=\theta$ is $$\frac{[\tau'(\theta)]^2}{n\cdot I _1(\theta)} = \frac{\theta^2}{n}[\tau'(\theta)]^2=\boxed{\frac{\theta^2}{n}}$$

As for finding the UMVUE of $\theta$: since $\frac{1}{n}\sum \log(1+X_i)$ is unbiased for $\frac{1}{\theta}$, perhaps something similar to $\frac{n}{\sum\log(1+X_i)}$ will be unbiased for $\theta$. After finding the expected value, I can hopefully make a slight adjustment to get an unbiased estimator. Let $T=\sum \log(1+X_i)$. Then $$\mathsf E\left(\frac{n}{T}\right)=n\cdot\mathsf E\left(\frac{1}{T}\right)=n\int_0^{\infty}\frac{1}{t}f_T(t)\,dt$$

We must next find the distribution of $T$, but first let's find the distribution of $Y=\log(1+X)$. Then \begin{align*} F_Y(y) &=\mathsf P(Y\leq y)\\\\ &=\mathsf P(\log(1+X)\leq y)\\\\ &=\mathsf P(1+X\leq e^y)\\\\ &=\mathsf P(X\leq e^y -1)\\\\ &=F_X\left(e^y-1\right)\\\\ &=1-\left(1+e^y-1\right)^{-\theta}\\\\ &=1-e^{-\theta y} \end{align*}

So $Y\sim \text{exp}\left(\frac{1}{\theta}\right)$ (mean $1/\theta$) and hence $T\sim\text{Gamma}\left(\alpha=n,\beta=\frac{1}{\theta}\right)$.

Hence \begin{align*} \mathsf E\left(\frac{n}{T}\right) &=n\int_0^{\infty} \frac{1}{t} \frac{\theta^n}{\Gamma(n)}t^{n-1}e^{-\theta t}\,dt\\\\ &=\frac{n\theta}{n-1}\underbrace{\int_0^{\infty}\frac{\theta^{n-1}}{\Gamma(n-1)}t^{n-2}e^{-\theta t}\,dt}_{=1}\\\\ &=\frac{n}{n-1}\theta \end{align*}

It follows that $$\frac{n-1}{n}\cdot\frac{n}{\sum\log(1+X_i)}=\boxed{\frac{n-1}{\sum\log(1+X_i)}}$$ is an unbiased estimator of $\theta$ which is a function of the complete sufficient statistic $T$, and so by the Lehmann-Scheffé Theorem, it is the unique UMVUE of $\theta$.

As $\hat{\theta}\sim(n-1)\cdot\text{Inv-Gamma}(n,\theta)$, we get $$\mathsf{Var}\left(\frac{n-1}{T}\right)=(n-1)^2\cdot\mathsf{Var}\left(\frac{1}{T}\right)=(n-1)^2 \cdot \frac{\theta^2}{(n-1)^2\cdot(n-2)}=\boxed{\frac{\theta^2}{n-2}}$$

Are these valid solutions?

• I think you're making a mistake with $E\left(\frac 1{X+Y}\right) \neq E(1/X) + E(1/Y)$ when you check your work – jld Oct 25 '18 at 19:07
• With a change of variables $Y=\ln (1+X)$, I get an exponential density for $Y$ having mean $1/\theta$. This should help you to get a Gamma density for the complete sufficient statistic $T=\sum \ln(1+X_i)$, and hence an unbiased estimator of $\theta$ based on $T$. Oct 25 '18 at 19:32
• No.
From the transformation formula, the pdf of $Y=\ln(1+X)$ is $$f_Y(y)=f_X(e^y-1)\left|\frac{dx}{dy}\right|\mathbf1_{y>0}$$ where $f_X$ is the pdf of $X$. (Note that $x>0\implies y>0$.) Oct 25 '18 at 19:58
• I used the CDF method, as I am more familiar with it, and came to the same conclusion. I will try to get the unbiased estimator from here and update my answer. I appreciate your help. – Remy Oct 25 '18 at 20:09
• The CRLB is correct. Oct 26 '18 at 6:31

$$T(\mathbf{X}) = \sum_{i=1}^n \ln(1+X_i).$$ The simplest way to find the UMVUE estimator for $\theta$ is to appeal to the Lehmann-Scheffé theorem, which says that any unbiased estimator of $\theta$ that is a function of the complete sufficient statistic $T$ is the unique UMVUE. To find an estimator with these properties, let $T_i = \ln(1+X_i)$ and observe that $T_i \sim \text{Exp}(\theta)$ (rate parameterization), so that $T \sim \text{Gamma}(n,\theta)$. Hence, we can use the complete sufficient statistic to form the estimator: $$\hat{\theta}(\mathbf{X}) = \frac{n-1}{T(\mathbf{X})} = \frac{n-1}{\sum_{i=1}^n \ln(1+X_i)} \sim (n-1) \cdot \text{Inv-Gamma}(n,\theta).$$ From the known moments of the inverse gamma distribution, we have $\mathbb{E}(\hat{\theta}) = \theta$, so we have found an unbiased estimator that is a function of the complete sufficient statistic. The Lehmann-Scheffé theorem ensures that our estimator is UMVUE for $\theta$. The variance of the UMVUE can easily be found by appeal to the moments of the inverse gamma distribution.

• Thank you. I had not heard of the inverse gamma distribution until now. I have updated my answer attempting to solve for the variance of the UMVUE. Why is it not $\text{Inv-Gamma}(n,\frac{1}{\theta})$? – Remy Oct 26 '18 at 3:16
• Never mind, my textbook has a different way of expressing the gamma distribution than Wikipedia. My textbook has $f(x\mid\alpha,\beta)=\frac{1}{\Gamma(\alpha)\beta^{\alpha}}x^{\alpha-1}e^{-\frac{x}{\beta}}$ – Remy Oct 26 '18 at 3:40
• @Remy If your UMVUE is $\hat\theta=\frac{n-1}{T}$, then \begin{align} \operatorname{Var}(\hat\theta)&=(n-1)^2\operatorname{Var}\left(\frac{1}{T}\right) \\&=(n-1)^2\left[E\left(\frac{1}{T^2}\right)-\left(E\left(\frac{1}{T}\right)\right)^2\right] \end{align}, which you can calculate from the distribution of $T$. Oct 26 '18 at 5:15
• @StubbornAtom Is what I have after using the method suggested by Ben incorrect or are you just suggesting an alternative approach? – Remy Oct 26 '18 at 5:25
• @Remy I was suggesting a direct approach (without using the Inverse Gamma distribution) since you only need to calculate a couple of moments. Oct 26 '18 at 5:34
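A quick simulation makes the claims above easy to check numerically. This is a minimal sketch (not part of the original thread): it draws $T \sim \text{Gamma}(n, \text{rate}=\theta)$ directly, since each $\ln(1+X_i)$ is exponential with rate $\theta$, and compares the sample mean and variance of $(n-1)/T$ with $\theta$ and $\theta^2/(n-2)$.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 10, 500_000

# T = sum of n iid Exp(rate=theta) draws, i.e. Gamma(shape=n, scale=1/theta).
T = rng.gamma(shape=n, scale=1.0 / theta, size=reps)
umvue = (n - 1) / T

print(umvue.mean())  # ~ 2.0 = theta            (unbiasedness)
print(umvue.var())   # ~ 0.5 = theta**2 / (n-2) (variance of the UMVUE)
```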
2022-01-21 23:42:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 49, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8943536281585693, "perplexity": 262.0454324503713}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303717.35/warc/CC-MAIN-20220121222643-20220122012643-00175.warc.gz"}
http://www.stat.cmu.edu/research/publications/applying-non-parametric-robust-bayesian-analysis-non-opinionated-judicial
## Applying Non-parametric Robust Bayesian Analysis to Non-Opinionated Judicial Neutrality

April, 1999 Tech Report

### Author(s)

Joseph B. Kadane, Elias Moreno, Maria Eglee Perez and Luis Raul Pericchi

### Abstract

This paper explores the usefulness of robust Bayesian analysis in the context of an applied problem, finding priors to model judicial neutrality in an age discrimination case. We seek large classes of prior distributions without trivial bounds on the posterior probability of a key set, that is, without bounds that are independent of the data. Such an exploration shows qualitatively where the prior elicitation matters most, and quantitatively how sensitive the conclusions are to specified prior changes. The novel non-parametric classes proposed and studied here represent judicial neutrality and are reasonably wide, so that when a clear conclusion emerges from the data at hand, it is arguably beyond a reasonable doubt.
2017-12-11 02:29:08
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8084262609481812, "perplexity": 2766.64848763461}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948512054.0/warc/CC-MAIN-20171211014442-20171211034442-00098.warc.gz"}
http://alexschroeder.ch/wiki?action=collect;match=%5E2013-12
# 2013-12-02 Emacs Defaults

I saw Andrew Hyatt post on Google+: From the emacs-devel list comes a query for builtin packages that would be useful to enable by default. They are interested in getting feedback from the community. If you have a builtin package you think should be enabled by default, add a comment, and we circulate this back to the emacs-devel list. Check it out via Gmane, and now a survey on Emacs Wiki. I skimmed my ~/.emacs and looked at all the little settings I think would make better defaults… For the complete file, see my current config file for Emacs on Windows.

    (show-paren-mode 1)
    (winner-mode 1)
    (windmove-default-keybindings)
    (column-number-mode 1)
    (savehist-mode 1)
    (iswitchb-mode 1)
    (global-set-key (kbd "C-x C-b") 'bs-show)
    (require 'dired-x)
    (setq dired-recursive-deletes 'top
          dired-recursive-copies 'top
          dired-dwim-target t)
    (setq sentence-end-double-space nil)
    ;; The add-hook wrappers below were lost when this page was scraped;
    ;; they are reconstructed here so the snippet is runnable again.
    (dolist (hook '(emacs-lisp-mode-hook
                    lisp-mode-hook
                    rcirc-mode-hook
                    change-log-mode-hook
                    texinfo-mode-hook))
      (add-hook hook (lambda ()
                       (eldoc-mode 1)
                       (set (make-local-variable 'sentence-end-double-space) t))))
    (setq eshell-save-history-on-exit t)
    (setq cperl-hairy t
          cperl-electric-parens 'null)
    (add-hook 'cperl-mode-hook
              (lambda ()
                (local-set-key (kbd "C-h f") 'cperl-perldoc)))
    (add-hook 'rcirc-mode-hook
              (lambda ()
                (rcirc-track-minor-mode 1)))
    (defun describe-hash (variable &optional buffer)
      "Display the full documentation of VARIABLE (a symbol).
    Returns the documentation as a string, also.
    If VARIABLE has a buffer-local value in BUFFER (default to the
    current buffer), it is displayed along with the global value."
      (interactive
       (let ((v (variable-at-point))
             (enable-recursive-minibuffers t)
             val)
         ;; the completing-read call went missing in the scraped copy;
         ;; reconstructed after describe-variable's interactive spec
         (setq val (completing-read
                    (if (and (symbolp v) (boundp v) (hash-table-p (symbol-value v)))
                        (format "Describe hash-map (default %s): " v)
                      "Describe hash-map: ")
                    obarray
                    (lambda (atom) (and (boundp atom)
                                        (hash-table-p (symbol-value atom))))
                    t nil nil
                    (if (hash-table-p v) (symbol-name v))))
         (list (if (equal val "") v (intern val)))))
      (with-output-to-temp-buffer (help-buffer)
        (maphash (lambda (key value)
                   (pp key)
                   (princ " => ")
                   (pp value)
                   (terpri))
                 (symbol-value variable))))
    (define-key isearch-mode-map (kbd "C-h") 'isearch-highlight-phrase)
    (put 'narrow-to-region 'disabled nil)
    (put 'not-modified 'disabled t)
    (put 'upcase-region 'disabled nil)
    (put 'downcase-region 'disabled nil)

Tags:

# 2013-12-02 New Creative Commons Licenses

On November 25, 2013, Creative Commons released 4.0 versions of their licenses. Yeah! More info on their blog. Cory Doctorow says the following on BoingBoing, which is where I learned about the new versions:

The new licenses represent a significant improvement over earlier versions. They work in over 60 jurisdictions out of the box, without having to choose different versions depending on which country you’re in; they’re more clearly worded; they eliminate confusion over jurisdiction-specific rights like the European database right and moral rights. They clarify how license users are meant to attribute the works they use; provide for anonymity in license use; and give license users a 30 day window to correct violations, making enforcement simpler. Amazingly, they’re also shorter than the previous licenses, and easier to read, to boot.

I must say, I was always a bit annoyed when I saw the local versions of Creative Commons licenses. What does it mean for me, when I live in Switzerland, host stuff in the US, and said stuff is based on the Canadian port of the license?
The FAQ now says: “Unless you have a specific reason to use a ported license, we suggest you consider using one of the international licenses.” I also often wondered about additional rights we have here in Europe. For example, I might allow you to make copies of my face, but I can still control the use of said copies here in Switzerland using my “personality” rights. The blog post announcing the 4.0 versions of the licenses now says: “Where the licensor has publicity, personality, or privacy rights that may affect your ability to use the material as the license intends, the licensor agrees to waive or not assert those rights.” Tags:

# 2013-12-06 Organizing PDFs

Wow. We are definitely living in plentiful times. So now I had eight zip files in my download folder and I realized another problem: there’s a folder on my external disk for RPG PDFs and its directory hierarchy is multiple levels deep. Dungeons & Dragons or Pendragon? TSR or Frog God Games? Basic D&D accessory or module? Sorting all the PDFs into the folders is driving me nuts. And what’s worse: my iPad has a PDF reader (Good Reader) with its own, separate copy of a subset of these files. And in my idiocy I used a different directory structure on the iPad. So now I need to figure out what to do, and how to keep the two in sync without my Repeated Strain Injury flaring up. Tags:

You could organize them all in Dropbox. That’ll let you sync across devices. Derik 2013-12-06 14:49 UTC

Unfortunately there are various issues I see: I don’t think I have enough quota left for those files (some of these RPG files are 50 MB and more). I don’t trust these companies, so I will still need backups. On a crippled device like my iPad, having Good Reader access Dropbox involves copying the files again, which still leaves me with the question of keeping it all in sync. And finally, US copyright exceptions are different from Swiss copyright exceptions (fair use vs. Eigengebrauch), so I feel like I’m inviting trouble by trusting a US company. I guess I could use Wuala, but its interface with Good Reader would be even more cumbersome. AlexSchroeder 2013-12-06 16:22 UTC

This is how I do it. All my RPG PDFs are in ~/Dropbox/RPG/ and sorted into subfolders one level deep. No hierarchy beyond that, unless a single product happens to be a directory. One of the subdirectories is named “Tablet” which I sync to my iPad using GoodReader directly (so no copying, really). That directory only contains copies of the PDFs that I want always on my tablet (currently, 142 files). I could sync the entire RPG hierarchy, but don’t see the need. This involves no trust of Dropbox, as the files are also in my home directory on both my laptop and desktop, and they get backed up to external media. Brendan 2013-12-06 23:36 UTC

I like the simple structure you propose, and I like the idea of keeping all the files for the tablet in a separate directory which is easy to sync. If you use a simple structure, moving files into and out of the “tablet” directory is no hassle. I see it clearly now: my directory structure is complicating things. I need to get rid of it. AlexSchroeder 2013-12-07 00:08 UTC

Wow, two months later and I have finally merged all the crap on the iPad with the stuff on my hard disk. As I keep buying more and more stuff (D&D Classics!), I get more and more updated products when I visit the site again. But I think I figured it out, now. On my Mac, the folders are shared using the File Sharing settings and the option “Share files and folders using AFP”.
I keep the directory structure very flat. On my iPad, GoodReader is syncing some selected folders from this hierarchy. When I download stuff on the iPad, these files all stay either in the root folder or in the My Downloads folder. No renaming and moving of these files! When it’s time to “clean up”, the files are moved to the Mac, filed appropriately, and if they’re in the right directory, they will get synced back to the iPad. It seems to work. I’m happy. AlexSchroeder 2014-02-20 17:51 UTC

# 2013-12-08 Old School RPG Planet Going Down

Recently John Payne talked about distributing RSS feeds in eBook form on Google+. Interesting idea, and the resulting discussion of copyright and feed aggregation soon touched upon the Old School RPG Planet. Ian Borchardt correctly said “Just because the authors post their work to the web doesn’t mean they forfeit their copyright. If you collect this work into another form, you are violating their copyright.” Andy Standfield replied “This has all already been covered by many courts and legal experts. This is all considered fair use.” I started to wonder. Many courts? I decided to google for some more information and found What’s the law around aggregating news online? A Harvard Law report on the risks and the best practices. This 2010 article said that all the parties settled before a finding was made. In the US, that would mean we don’t really know. The article also has a longer section about the Fair Use test and how to apply it. In addition to that, the situation would be different outside the US – possibly more restrictive here in Switzerland, for example. Drinking my coffee I thought about it some more and finally decided to take the Old School RPG Planet offline. I wasn’t really using it anymore and I really dislike the idea of further discussions with annoyed blog authors. I also didn’t feel like contacting a hundred bloggers, most of whom don’t have their email address on the front page of their blog. The site should now redirect to the Legacy D&D section of the RPG Bloggers Network. It supposedly does more or less the same thing, except that the authors have to register their own sites. Too bad the RPG Blog Alliance doesn’t have categories. Tags:

Comments on 2013-12-08 Old School RPG Planet Going Down

Hey Alex, Too bad that some idiots ruined the good thing you started. Thanks for all the hard work. Tedankhamen 2013-12-08 16:12 UTC

Don’t worry, I don’t think that a particular person is to blame. I blame it on the copyright system we have and the companies and individuals pushing it, extending it, spreading their interpretation of it until we end up in the society we are living in. AlexSchroeder 2013-12-08 17:14 UTC

Alex, This is a real bummer. I used your OS RPG Planet exclusively to keep me informed of what was going on. Legacy doesn’t seem to contain the same coverage of blogs, and it was always handy using the Planet to check out those old dormant blogs. Well, it was good while it lasted. Thanks for keeping me up to date on the OSR. – derv 2013-12-08 18:16 UTC

Perhaps there are some alternatives available out there? This is what I had listed on the wiki. Wow, the list contained a lot of dead links and a spam link, too. If anybody knows more, leave a note! AlexSchroeder 2013-12-08 18:23 UTC

I just want to say, it’s a gradation, right? I follow several law and copyright blogs, and this is why we need them, because a lot of these situations aren’t clear cut. It’s a complicated issue. -C 2013-12-09 08:32 UTC

Yes, of course.
In addition to the situation not being clear cut, the particular elements I considered are particular to just me. There was my own lack of use, first of all, which made me unwilling to fight the slightest non-technical problem. As far as I can see, there were two big non-technical problems: The bigger problem was the law itself. I already knew that I was unwilling to fight a legal battle. I was unwilling to serve as a precedent. Let somebody else drink this cup. With me not being a US resident, the situation is even trickier. No thanks. The lesser problem was the possibility of people complaining. Many months ago I already got a terse email from Alexis, telling me to remove his blog. The possibility of having my name dragged down into the gutters, of people calling a project of mine seriously a dick move or saying that it wasn’t fair or honest, currently makes me very unwilling to do anything at all for people playing role-playing games. And not everybody would have had Alexis’ calm. M. W. Schmeer didn’t want a private conversation at the time, for example. Ugh. No thanks. I think I’m just going to run my games, write my blog… « Il faut cultiver notre jardin. » (“We must cultivate our garden.”)

But: if anybody is interested in running their own site, I can help! The first step is installing Planet Venus, which requires an installation of Python. The config file is also easy. Here’s what I used. The Planet Venus distribution also comes with a commented example config file. When I first started, I used the “musings” theme; later I wrote my own.

    [Planet]
    name = Old School RPG Planet
    message = Collecting Old School RPG blog feeds for the curious.
    owner_name = Alex Schroeder
    owner_email = kensanata@gmail.com
    cache_directory = /home/alex/planet/rpg
    log_level = INFO
    output_theme = /home/alex/src/old-school-planet-theme
    output_dir = /home/alex/campaignwiki.org/planet
    items_per_page = 100
    activity_threshold = 120
    filters = excerpt.py

    [excerpt.py]
    omit = strong em b i u
    width = 1000

If you are running Venus for yourself only, you might consider deleting the filters setting. What follows is the list of blogs you want to subscribe to, with their names:

    [http://nilisnotnull.blogspot.com/feeds/posts/default]
    name = ((nil) is (not(null)))

    [http://www.msjx.org/feeds/posts/default]
    name = . . lapsus calumni . .

    [http://www.theskyfullofdust.co.uk/feed/]
    name = ...and the sky full of dust. ...

AlexSchroeder 2013-12-09 08:57 UTC

Ok, I know I’m not the best person to reply, due to my own knee-jerk reaction. But I have this to say. Your work is appreciated by the silent many. Also, me personally. I cannot tell you the impact your comment to me about how many blog posts of mine you had favorited had. I still recall it clearly. My point is that the people who are a--holes also have that kind of impact. If you recall, there were a few dudes who were ON FIRE about the fact I was talking about how to run traps. Not Loomis, the guy behind Grimtooth, not any of the other publishers who made traps that I talked about. But just some person on the internet. His random negative comment sapped more of my enthusiasm for blogging than anything before or since. But I learned something from it. The more successful you are, the more certain insecure, jealous, and often untalented people will hate you for it. So what it means when you get a comment like that is that you are really doing something that is meaningful. Not that greatness is applicable, but every great thing ever done was hated by thousands. The hate isn’t what they remember.
People don’t talk about that one guy who was pissed off. When my traps get mentioned, it’s as an appreciated resource. What I’m saying is that what I learned was that a reaction like that is a sign that you are doing well, not poorly. I mean, as long as you are remaining introspective, as the tone of this post indicates you are. Thanks for your reply and the time it took to link those arguments. That situation sucked. I’m telling you thank you, and water off a duck’s back. -C 2013-12-09 09:47 UTC

Thanks, -C. Very much appreciated. AlexSchroeder 2013-12-09 09:54 UTC

Wow. OSRPG Planet was one of the first sites I checked every day. Sad to see it go. And, since I never said it: thanks for providing that service for so long, I really enjoyed it. – Max 2013-12-09 12:37 UTC

Thanks. I just had another thought: It should be possible to filter the RPGBA feed for blogs I consider to be OSR and republish that. Then again, somebody is probably going to say they only intended the RPGBA to republish their feed. I’d have to ask Jeff. Perhaps he’d host something. Update: I asked Jeff and he said he’d prefer to get the bloggers’ permission or consent to do it. Which brings me back to square one. Too much hassle. AlexSchroeder 2013-12-09 12:58 UTC

I am incredibly sad to see this happen. OSRPlanet was one of my daily go-to sites for what was going on. It will be missed greatly. Thanks for all the hard work, Alex. Sniderman 2013-12-10 14:03 UTC

Sorry to see it go – I get plenty of unauthorised feed sites leeching me; yours was the only one I liked. chris 2013-12-10 15:45 UTC

Thank you. AlexSchroeder 2013-12-11 08:56 UTC

On The Nine and Thirty Kingdoms there is a blog post talking about the situation. I totally understand all the points about copyright, licensing and all that. The only point I want to pick up is the closing paragraph: In other words, you’d have to ask me first. And really, why wouldn’t you ask someone first before publishing their work? What is everyone afraid of? I think the short answer is that asking for permission just doesn’t scale. It’s OK to ask one person, but asking a hundred people is not how I want to spend my time. The long answer is in the pages of the Free Culture book. Just search for the word “permission” and learn about the differences of permission culture and free culture. Here’s a paragraph from page 192f:

The building of a permission culture, rather than a free culture, is the first important way in which the changes I have described will burden innovation. A permission culture means a lawyer’s culture—a culture in which the ability to create requires a call to your lawyer. Again, I am not antilawyer, at least when they’re kept in their proper place. I am certainly not antilaw. But our profession has lost the sense of its limits. And leaders in our profession have lost an appreciation of the high costs that our profession imposes upon others. The inefficiency of the law is an embarrassment to our tradition. And while I believe our profession should therefore do everything it can to make the law more efficient, it should at least do everything it can to limit the reach of the law where the law is not doing any good. The transaction costs buried within a permission culture are enough to bury a wide range of creativity. Someone needs to do a lot of justifying to justify that result.

I recommend the book. It’s a long read, but I liked it. It also made me unwilling to spend time asking people for permission to do anything. I’d rather spend my time elsewhere.
So that’s my answer to “What is everyone afraid of?” I’d rather spend my time elsewhere. AlexSchroeder 2013-12-18 08:59 UTC

# 2013-12-10 Writing Your Own RPG Rules

I started thinking about it when Johnn Four said on Google+ that he was interested in designing his “own little OSR game”. Like Joseph Bloch, I wondered. It doesn’t sound like Johnn really wants to run and play an OSR game. He’s just interested in designing the rules? There are already so many of them out there! All these Fantasy Heartbreakers… What is a Fantasy Heartbreaker? I learned about the term from Ron Edwards’s essay. They are “truly impressive in terms of the drive, commitment, and personal joy that’s evident in both their existence and in their details” and “but a single creative step from their source: old-style D&D.” Since I like classic D&D, that’s not a problem for me. Here’s how Ron ends his essay:

They designed their games through enjoyment of actual play, and they published them through hopes of reaching like-minded practitioners. […] Sure, I expect tons of groan-moments as some permutation of an imitative system, or some overwhelming and unnecessary assumption, interferes with play. But those nuggets of innovation, on the other hand, might penetrate our minds, via play, in a way that prompts further insight. Let’s play them. My personal picks are Dawnfire and Forge: Out of Chaos, but yours might be different. I say, grab a Heartbreaker and play it, and write about it. Find the nuggets, practice some comparative criticism, think historically. Get your heart broken with me.

This essay, I think, mentions all the important parts:

• actual play is the basis for your game
• publish it in the hopes of reaching like-minded gamers
• make sure to strip all the material that you aren’t using yourself
• focus on the innovations

I also like to read the design decisions somewhere, on a blog for the game, perhaps. Why add skills? Why drop Vancian magic? Why drop descending armor class? Why use fewer saving throws? Why add bennies? Why rework encumbrance? As for myself, I’m basically using Labyrinth Lord. I’ve been thinking about skills, magic, spells, armor class, saving throws, bennies, and writing about these issues on this blog. And as I’ve said on Johnn’s post: “I just kept running my game and started putting my house rules on a wiki. Then I copied the missing elements from the book. Then I put it all into a LaTeX document. And I keep running my game and I keep making changes to the rules. And that’s it.”

For a while I had an English and a German copy of these rules on a wiki. After a while I abandoned the wiki and the English rules and moved the German text to LaTeX. I think the important part was thinking about the rules, writing about the rules, changing the rules, reassembling the rules, having something to show others, a place to collect the house rules… and with all that achieved, there’s just nothing to do but make the occasional update. I’m not trying to convince anybody else to use the rules. But if you’re looking for something a bit different, perhaps you can find “those nuggets of innovation” in my rules, too.

• 💔 •

What are those nuggets of innovation, you ask? I think the only thing that’s truly new is how I write the document, making full use of a sidebar to comment on the main text. And I keep track of my players’ reputation with the various gods of the setting.
Everything else I have seen somewhere else: Death & Dismemberment, using 1d6 for thief skills, using a d30 once a night, using 1d6 for weapon damage, limiting the repertoire of arcane casters… Nothing new under the sun. But I’d be happy to talk about all these points.

Thank you for all of this. It’s very validating. Dither 2013-12-10 20:43 UTC

Great points, Alex, thanks. – Johnn 2013-12-18 19:58 UTC

# 2013-12-12 Treasure for an Elven Sorceress

My players finally returned to the Elfenburg, the halfling village at the most beautiful waterfall of the Wilderlands of High Fantasy. I had previously established that the village was built around a green glass tower which was later found to have been built by elves, and that down in the dungeon there was a room with 86 petrified elves and a gorgon bull (the kind with petrifying breath).

The story was this: The elven sorceress used earth magic to erect the glass tower. Something poisoned the earth blood and caused the building to turn evil. The sorceress decided that she needed to boost her spell casting power and left, looking for a power-up. Since elves cannot reach the level necessary to cast stone to flesh, I decided that all this stone magic had to be based on long and elaborate rituals. The sorceress decided that the Aakom of Qelong would do and disappeared. Before leaving, however, she decided to use the gorgon bull to petrify her people “for storage” and to keep them safe against the ravages of time. I occasionally force myself to make elves strange and alien. Before she does that, one of the elves leaves a note basically explaining the situation and asking for help “just in case the sorceress doesn’t return”.

The party does eventually find the sorceress and she—having partaken of way too much Aakom and on the way to immortality—has more or less forgotten about her people. She gives the party 50 scrolls of stone to flesh and a magic ring with which to claim her glass tower as their own, because she couldn’t care less. Another attempt of mine to make elves and immortals alien and basically indifferent to human concerns.

The party returns, depetrifies half the elves, they are grateful, some setting background is revealed, the elves join the party’s domain (since we’re using An Echo Resounding), and so on. I was left with the question of treasure. What to do? I had already decided that the sorceress’ many books had been carried off by a floating, round monster…

This is cool, because now I have more plot hooks for future adventures:
1. the earth blood is still poisoned, and it would seem that looking for the Tomb of Abysthor would make a nice follow-up adventure
2. the other half of the elven tribe is still petrified, so if we feel like doing an all-elven road-trip adventure on the side, we can always sail for Tula and its School of Chromatic Magic and find a high-level human magic-user to get some scrolls
3. if we want to challenge the high level player characters, we can always go and hunt for the notes on the various rituals used to build the living glass tower
4. the elves told the party what it would take to revive the dead elven god Arden… one of my player characters is very interested in this particular plot line…

I decided that the only treasure to be found beyond ownership of the tower and the service of the elves was the wardrobe of the sorceress:
1. an anti-gravity shawl made of the finest silk, floating freely in the air
2. a spectacular white and cyan dress of air and sky, constantly trailing clouds and having occasional blinding spots of sunlight
3. a dark sea green dress of the deep, a magic train of illusionary ocean following the wearer
4. a brown dress of stone and metal, of crystals growing from your back and spikes growing from your shoulders, a golem dress of enchanted earth

The room is a giant loft under a fortified glass dome in art nouveau style, guarded by ten nearly invisible glass tentacles to prevent unauthorized landing (as long as the earth blood is poisoned these tentacles are evil and attack everybody), with a random distribution of little mirrors such that there is one exact spot near the wardrobe where all the mirrors align, giving you a perfect view of yourself…

My wife loved it. But is it treasure? I’m not sure. I don’t plan on granting simple mechanical benefits. I do plan on it impressing and awing the common folks, if my players ever want to use it like that. Or on it granting boons with other elf lords if given away as a present. I also didn’t plan on giving it monetary value and therefore not granting any XP. My players seemed happy enough. If I wanted to award some XP, I’d probably say that the four items I listed could be sold for 10,000 gold pieces each for a grand total of 40,000 XP. Except that my players got to the end of a different story arc last session and ended up with 171,000 gold pieces (this included the sale of a Neogi spelljammer ship and the treasure of its captain), and the part about the elven sorceress and her tower was essentially dénouement. That’s why it didn’t grant any gold or XP.

# 2013-12-13 Session Preparation Process

Over on The Rusty Battle Axe the author says “Megadungeons were all the rage in the tabletop RPG blogsphere back in 2008-2009. There are plenty of posts on megadungeon dating back to the period.” I still haven’t run a megadungeon, even though I have bought several: Stonehell, Rappan Athuk (twice), Castle Whiterock, Tomb of Abysthor, … I feel like I would like big dungeons. When I prep for my games, however, that’s not how it works. If you look at my Swiss Referee Style Manual you’ll note that it is practically without advice for dungeon adventures. That’s because my dungeons are small. Usually they are one page dungeons. How come?

When I prepare a game, I usually start from the answers I got at the end of the last session. I almost always end by asking: “So, what are we going to do next time?”

I write down the relevant non-player characters: people to meet, people to oppose, people with jobs to hand out, with quests that need resolving. I usually end up with one to four characters.

The bad guys are usually in a defensible position, so I prepare spells, minions, rooms, a map… a one page dungeon, a ship deck plan, or a village with a few important buildings (in which case I won’t prepare floor plans for the buildings). I usually end up with four or five buildings or ten to twenty rooms, a sketch of a map.

I think of complications. This usually works in layers. Every two or three days I have a lame idea that I mentally add to the adventure. After two weeks, however, five lame ideas make a cool complication. I usually don’t think of a solution. The gargoyles want the player characters to leave and consider themselves to be Übermenschen… The elves are petrified by a gorgon bull and the players have neither the saves nor the spells to survive a direct encounter. As it turns out, they managed to get the gargoyles to accept their commands, and the gargoyles brought all the petrified elves to the surface.

I think of rewards. I usually start by rolling on the appropriate treasure tables and embellish the magic items, if any. An armor +1 and shield +1 turn into the golden halfling armor of Priamus Bullfighter, who disappeared 21 years ago from Elfenburg. The sword +1 turns into the blade of the herring knight: it smells of fish and allows the wearer to feel how far away the next air bubble is… I make a mental note to add the remaining equipment of said knight to the dungeon or future adventures… If my players decide to follow up, I will place his city or temple on the map, and add the protector saint of all fish, and his paladins, and their special abilities, and there will be rescue missions, and favors to be granted…

Working iteratively is important for my process. I try to pull in non-player characters from very old sessions. Wespenherz, the new hireling, is an elf that they had rescued from bad guys in a previous campaign up in the north. This ring they just found was forged by Qwaar the Axiomatic, and didn’t Muschelglanz write a book about the rings of Qwaar? Yes he did, and as far as we know he had decided to investigate the Barrowmaze and never returned… It gives depth to the campaign: some players remember and start digging through the campaign wiki, older players explain to newer players what happened back then, … I love it!

This sort of thing doesn’t come easy. As I said, every two or three days I have a lame idea, but after two weeks I’ve had enough lame ideas that together they make the game better. Much better.

The process also shows why it’s hard to integrate megadungeons. When I look at them, I want to skim them for interesting non-player characters my players would want to contact, for prison cells my players would want to rescue interesting non-player characters from, for interesting rewards my players might want to be looking for. And that’s so damn hard. I still remember placing the Barrowmaze in my campaign and having an important non-player character flee towards it. I was using Nualia, an evil fighter with an evil sword from a Pathfinder adventure, and decided that her dad was a priest of Nergal who lived in the Barrowmaze and was involved in the power struggle. Then I had an evil authority figure from Kerealia fleeing towards the Barrowmaze because the players had ousted him. The dungeon itself was OK, not overwhelming. We rescued a dwarf from a pit and he’s still with us. Once the players fell into a bottomless pit and dropped into the astral sea, however, they never had the urge to return.

Having Muschelglanz disappear in the Barrowmaze is my new attempt at letting the Barrowmaze play a role in my campaign. It hasn’t managed to make itself important. I was unable to find or emphasize anything in the megadungeon itself that would motivate my players to return again and again. The reason I thought of using the Barrowmaze again was that one player decided to offer a reward for the return of Muschelglanz. I proposed to my players to play a random party of first level dudes trying to claim this reward by going into the Barrowmaze and finding Muschelglanz. They liked the idea, so that’s what we’re going to do. When I started preparing for the next session, I did what I usually do. I looked for cells to put him in. I looked for the headquarters of a faction that held him prisoner. I tried to find the important non-player characters, and I tried to find a thing that Muschelglanz might have been looking for. I know that there is supposed to be a tablet somewhere. But everything else has been tricky. I really need to skim it again. Gaaah.

Now you know what I would appreciate in a megadungeon. Just in case you’re writing a megadungeon.

# 2013-12-17 Boxed Text

-C from Hack & Slash has a blog post on the definitive inadequacy of boxed text. I agree with the points made. A few days ago I mentioned how I usually run one page dungeons. There is not enough room in these dungeons for boxed text! I need my notes on my map. About a month ago I mentioned Bryce’s Adventure Design Contest (deadline is Dec 31, 2013!) and I think my To Rob A Witch submission illustrates how I like to do it, even if there is no map but a sort of loose flowchart.

To Rob a Witch by kensanata, on Flickr

Andreas Davour recently said that he’d like to run Saturday Night Specials all the time. I think my format does that. As I said in a comment on his blog post: “A flow chart that only mentions the interesting locations and the random encounters between them? It’s what I try to do these days.” And no boxed text.

# 2013-12-17 PDF Button

I’m experimenting with a PDF button for this website. In the past, I suggested Print Friendly & PDF. Yesterday I learned about wkhtmltopdf, which does the same thing without depending on a remote service and their ad revenue. On a typical Debian host, you need to apt-get install wkhtmltopdf. This installs a binary and all the required libraries. The problem is that this version needs an X11 server in order to work, which you don’t have when using it on your website. In addition to a regular installation, you need to install a statically compiled binary which has been compiled with a patched version of Qt and no longer requires an X11 server. Here is the code I added:
```perl
$Action{pdf} = \&DoPdf;
push(@KnownLocks, 'pdf');

sub DoPdf {
  my $id = shift;
  RequestLockDir('pdf');
  local $StyleSheet = 'http://alexschroeder.ch/alex-2012.css';
  my $html = PageHtml($id);
  my $source = "$TempDir/document.html";
  my $status = '500 INTERNAL SERVER ERROR';
  open(HTML, '>:utf8', $source)
    or ReportError("Cannot write $source: $!", $status);
  print HTML GetHtmlHeader(NormalToFree($id), $id);
  print HTML $q->start_div({-class=>'header'});
  print HTML $q->h1({-style=>'font-size: x-large'}, GetPageLink($id));
  print HTML $q->end_div(); # header
  print HTML $q->start_div({-class=>'wrapper'});
  # get rid of letter-spacing
  my $sperrung = '<em style="font-style: normal; letter-spacing: 0.125em; padding-left: 0.125em;">';
  $html =~ s/$sperrung/<em>/g;
  my $newthought = '<em style="font-style: normal; font-variant:small-caps; letter-spacing: 0.125em;">';
  $html =~ s/$newthought/<em style="font-style: normal; font-variant:small-caps">/g;
  print HTML $html;
  # see PrintFooter
  print HTML $q->end_div(); # wrapper
  print HTML $q->start_div({-style=>'font-size: smaller; '});
  print HTML $q->hr();
  print HTML $FooterNote;
  # see DoContrib
  SetParam('rcidonly', $id);
  SetParam('all', 1);
  my %contrib = ();
  for my $line (GetRcLines(1)) {
    my ($ts, $pagename, $minor, $summary, $host, $username) = @$line;
    $contrib{$username}++ if $username;
  }
  print HTML $q->p(Ts('Authors: %s',
                      join(', ', map { GetPageLink($_) } sort(keys %contrib))));
  print HTML $q->end_div(); # footer
  print HTML $q->end_html;
  print HTML "\n";
  close(HTML);
  my $target = "$TempDir/document.pdf";
  # run the static wkhtmltopdf binary; system() returns non-zero on failure
  my $error = system('/home/alex/bin/wkhtmltopdf', '--print-media-type',
                     '--quiet', $source, $target);
  ReportError("The conversion of HTML to PDF failed", $status) if $error;
  open(PDF, '<:raw', $target)
    or ReportError("Cannot read $target: $!", $status);
  local $/ = undef;
  my $pdf = <PDF>;
  close(PDF);
  ReportError("$target is empty", $status) unless $pdf;
  binmode(STDOUT, ':raw');
  print $pdf;
  ReleaseLockDir('pdf');
}

sub PrintMyContent {
  my $id = UrlEncode(shift);
  if ($id and $IndexHash{$id}) {
    print qq{
<form action="$FullUrl"><p>
<input type="hidden" name="action" value="pdf" />
<input type="hidden" name="id" value="$id" />
<input type="submit" value="PDF" />
</p></form>
};
  }
}
```

Let me know if it works for you while I try to figure out whether I need this at all. The position of the PDF button at the very bottom of the page is probably less than ideal. As you can tell, the markup using increased letter-spacing (see Wikipedia: letter-spacing) was messing it all up, which is why I had to fix it in the code above.
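Before wiring DoPdf into the wiki, it can help to test the conversion step in isolation. This is a minimal standalone sketch of my own, not part of the wiki code: it assumes the same static binary path as above, and the input and output file names are made up for the test.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical test files; any HTML file will do as input.
my $source = '/tmp/test.html';
my $target = '/tmp/test.pdf';

# Same invocation as in DoPdf above; system() returns 0 on success.
my $error = system('/home/alex/bin/wkhtmltopdf',
                   '--print-media-type', '--quiet',
                   $source, $target);
die "Conversion failed: $?\n" if $error;
print "Wrote $target, ", -s $target, " bytes\n";
```

If this works from a shell without an X11 server, the wiki action should work, too.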
Comments on 2013-12-17 PDF Button

Hi Alex, a long time ago I wondered about including a patch like this for my wiki, but eventually most browsers have a utility like this, and others like breadcrumbs, etc. Is there any advantage? Thanks. JuanmaMP 2013-12-19 08:52 UTC

On my Mac, I don’t need it. Printing to PDF is simple. On Windows, however, I need to install a software PDF printer if I want to do this. Just recently my sister asked me for help converting a Word document she had written. She didn’t manage the installation of the PDF printer software. That reminded me of the fact that for some users, a PDF button might still be necessary. I personally don’t like the “save as HTML” feature of most browsers because it results in HTML plus a directory of CSS files, images, ads, scripts, and so on. PDF feels “safe”. AlexSchroeder 2013-12-19 09:03 UTC

# 2013-12-19 Über die Bildung

A note to English readers: If this post showed up in your feed, you should probably switch to a different feed. Some suggestions: RPG only, just English.

In the WOZ I just read Wer profitiert von der Uni im Netz? (Who profits from the online university?) What began as free education on the net for everyone is now becoming a tool for cost cutting at the universities, and thus for the degradation of the non-commercializable parts of student life: interesting fellow students, politicization, engagement… Online universities have also been held up as a hope for development aid. I found this passage telling:

As a study by the University of Pennsylvania recently showed, eighty percent of the course participants on the Coursera platform from countries such as China, India or Brazil belong to the richest and best educated people of their country. The numbers also show that regularly fewer than ten percent of all those originally enrolled finish a MOOC – and only about half of those also meet the requirements necessary for a certificate.

I am not sure how to read this. Here in Switzerland, too, the children of academics are more likely to go to university. Here, too, very many quit their studies early, fail their exams, or switch subjects… And anyone who has ever organized anything online knows that the thresholds against doing nothing and showing up late disappear. Basically, perhaps all one can say is that the numbers show that online universities cannot be the solution to many of these problems. It is harder than that. One has to know this, even if the faith in technology and the optimism among those responsible come as no surprise.

If you have never looked into online courses, have a look at the Khan Academy. “Learn for free about math, art, computer programming, economics, physics, chemistry, biology, medicine, finance, history, and more.” Sounds good! On this topic, I happened to see the article Comcast, Khan Academy Aim Multimillion-Dollar Partnership At Low-Income Families on Google. There Khan addresses the same problem: the middle class itself uses the offerings it wants to provide to the whole world, and so the old structures persist. It is probably difficult to help the poor help themselves without giving them something. Redistribution, minimum wages, and a reduction of the wealth gap seem to me to be better measures in the long run. Which of course does not change the fact that I consider the Khan Academy to be about as good as Wikipedia. More knowledge, more education! To sell this as development aid, however, is, as I said, faith in technology and excessive optimism.

# 2013-12-22 Sepulchre of the Clone

Notes for a session:

As you can see, the magic user was not yet a high-level Vivimancer when I ran the adventure. I had to push him up to 15 if I wanted the clone to be based on the actual spell-casting ability of the dude. At the time I justified the clone with rituals and machinery… You might also notice the notes on snow apes, rocket men and shark men. Indeed, I ran this as the tower in the middle of the lake of The Forgotten Depths. At the time the baboons were “blood monkeys” and the player characters forced one of them to cross the blue room and mash buttons. They didn’t realize that this would revive the clone, and once the monkey came back, they killed it. Cruel! They stole the clone’s spell book and still befriended the clone (who didn’t know that it was they who had stolen the book).
Once befriended, they decided to “help” him retrieve his spell book and “hunt” for the blood monkeys that must have done it. And strangely enough, they soon found it “in the forest”. All is well that ends well, I guess. In my game, the clone wasn’t a Vivimancer but a Polymorpher. The spell selection for Vivimancers is cooler, though. It explains the dense plant life, the mushrooms, the degenerate humans, the intelligent monkeys, the minotaurs, and so on.

Resulting second draft: Sepulchre of the Clone.pdf

It still needs some proofreading. The idea is to submit this for Gavin Norman’s upcoming Vivimancer Supplement. Today I learned that a one page adventure for A5 paper is harder than it looks. The strange “polymorph other into human for a limited time span” device is a magic item I wanted to keep. I felt that perhaps a player would be able to take advantage of it, or maybe they’d feel like dismantling it and taking it on to their own ship. They never did, however.

# 2013-12-23 Verfolgungsjagd

I was just thinking about chases again for my Helme & Hellebarden rules. I can never remember the strange table in Labyrinth Lord and Basic D&D. Then I remembered the skill moves in Apocalypse World and started working on a 2d6 table, inspired by Erin Smale’s comment on my Google+ post.

Chase: Whoever is being chased must roll 2d6. On a 2, the pursuers have surprised you. On 3–6, it comes to a fight. On 7–9, pick two points, on 10+, pick three points from the following list.
• you managed to stay together
• it was quick
• you know where you are
• you still have your shields and backpacks

Optional modifiers: +1 each for the hunted if there are twice as many pursuers, for a higher movement rate, for dropping shields, for dropping backpacks, if a thief is in the party, in rain, in darkness. −1 each for the hunted if an elf or a hunter is among the pursuers, if there are wounded, in snow.

Vincent Baker himself uses no modifiers for the moves in Apocalypse World. I suspect it will end up similar for me. Perhaps I should add that on a 12+ you simply escape.

Comments on 2013-12-23 Verfolgungsjagd

Looks good and easy to remember. – Harald 2013-12-24 12:53 UTC

Following Tim Franzke’s comment on Google+, I am going to rephrase it:

Chase: Whoever is being chased must roll 2d6. On a 2, the pursuers have surprised you. On 3–6, it comes to a fight. On 7–9, pick two points, on 10–11, pick one point. On a 12, you have escaped, no ifs and buts.
• you were separated
• it took a long time
• you got lost
• you had to drop your shields and backpacks

AlexSchroeder 2013-12-24 14:35 UTC
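The rephrased move is easy to try out in code. Here is a minimal sketch in Perl (the language used elsewhere on this site); folding the optional modifiers into a single net bonus is my own assumption, since the rephrased version does not mention them.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Roll the rephrased 2d6 chase move; $mod is an optional net modifier.
sub chase {
    my $mod = shift // 0;
    my $roll = (int(rand 6) + 1) + (int(rand 6) + 1) + $mod;
    return "the pursuers surprise you"              if $roll <= 2;
    return "it comes to a fight"                    if $roll <= 6;
    return "you escape, but pick two complications" if $roll <= 9;
    return "you escape, but pick one complication"  if $roll <= 11;
    return "you escape, no ifs and buts";
}

print chase(), "\n" for 1 .. 5;   # five sample chases
print chase(-1), "\n";            # e.g. snow: -1 for the hunted
```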
# 2013-12-28 Looking Back

The end of the year is always a good opportunity to look over the year’s posts—RPG posts, to be exact. The year itself started on a sad note: Fight On! is going down. The writing was already on the wall in 2012, and the adventure I had submitted for the last issue is available as a free PDF, Caverns of Slime. But on to positive things!

I tried to list the Old School Fanzines I knew, hoping to find a magazine “for me”. It’s weird. What about Fight On! Magazine made me want to contribute? What I remember best are the early levels of The Dismal Depths and the Fomalhaut material by Gabor Lux.

I still use my classic D&D character generator, for non-player characters and for character sheets on my Campaign Wiki. Some examples: Waran and Rinathil using ACKS, Stefan and Garo (this one belongs to my wife) using B/X D&D, Sir Guy (this one by a former player) and Sir Sewolt (another one belonging to my wife) using Pendragon. Next session we will be assaulting the Barrowmaze using a party generated by drawing from a pile of random first level characters generated by this character generator. Electronic vat men!

I was looking for an alternative to creating a hex map using Inkscape (something I have been doing for my Sea of the Five Winds campaign), and so I wrote Text Mapper. I wrote a post comparing the two methods. To be honest, however, I haven’t been using it. My Sea of the Five Winds campaign already has a map, my Pendragon game uses the maps from the book, DM Florian uses Hexographer, DM Harald uses the maps from the book.

It felt weird to return to Red Hand of Doom. I was running it for the kids and using Labyrinth Lord instead of D&D 3.5. I enjoyed it very much. We skipped two chapters, though: no fighting the druid lich and no confronting of Tiamat’s avatar at the end. We used the mass combat rules from JB’s B/X Companion.

I wrote two one page adventures in 2013 that are based on my actual prep notes, To Rob A Witch and Sepulchre of the Clone. That reminds me. Last spring I ran the One Page Dungeon Contest 2013 and all the posts are tagged 1PDC. I’m currently not planning to run the contest in 2014. Do you know anybody who would like to do it? I wrote How To Run A Contest to help you get started.

I also discontinued the Old School RPG Planet. If you’re willing to do the actual leg work of asking people to submit their sites, talking to bloggers, answering questions, I could set it up again. I am willing to handle the technical aspects of it. I just don’t want to deal with angry dudes on the Internet. But on to more positive things again!

I still use my Spell Book Notation. That’s because I use a strict reading of the magic system such that elves and magic-users have a repertoire equivalent to their spells per day (to use the Adventurer Conqueror King System terminology) and neither elves nor magic-users can copy spells from scrolls and spell books. In my campaign this means that all the elves of a particular elf settlement will have a subset of the spells available to the elf lord. What I do is this: I write down the elf lord’s spell book using the notation above, and whenever we meet a minion of a particular level, it’s easy to figure out which spells they have available by looking at the second column.

In a similar vein, I wrote about using 1d20 instead of 2d6 for dungeon stocking. As it turns out, however, I do this so rarely that I keep forgetting about it. I just don’t run enough dungeon adventures. Still in the same vein, I was also comparing old school dungeon stocking to other methods of adventure location creation and found the traditional way of doing things to be very quick and the result just as good as the new ways. I’m a gaming traditionalist at heart, I guess.

I thought about using skills inspired by Apocalypse World in my games but ended up not doing it. DM Harald does it in his campaign, but I remain sceptical.

I wrote a bit about running the game. There was a post on my session preparation process for old school games, how to run Fate, how to run settlements in sandbox campaigns, how to let players introduce facts into your traditional campaign, how to use treasure, group initiative, and when to roll. In addition to that, I quoted a Google+ comment by Ian Borchardt on wilderness encounters and a comment by Kevin Crawford on urban campaigns.

And with that, here’s to the blogs, conversations with strangers on the Internet, and freedom, justice and peace for us all.

Comments on 2013-12-28 Looking Back

Suffice to say I’m not using skills based on Apocalypse World per se, but based on reaction rolls plus a modicum of GM arbitration. It looks somewhat like AW, but I don’t feel like I need a system as rigid. – Harald 2013-12-29 15:30 UTC

# 2013-12-31 Scanner Again

We decided to switch from the Apple Mac Mini running Mac OS 10.6.8 (since it’s such an old machine it cannot be upgraded) to Claudia’s 13” Powerbook running the latest Mac OS 10.9.1. And, as always, the old CanoScan LiDE 25 doesn’t work. The latest drivers are for Mac OS 10.6 – yay. I found a blog post, fixed: use unsupported scanner in OSX 10.9 Mavericks, which had me install some promising libs. But it didn’t work. The Printer & Scanner preference pane never found the old scanner, and on the command line I didn’t get the scanner listed.

I decided to give Homebrew a try and installed sane-backends. This required a painful and slow uninstall of libusb and the sane-backends installed by the promising libs I had just installed. Using the sane-backends installed by Homebrew, I can now scan from the command line. Oh well, better than nothing.

```
alex@Megabombus:~$ scanimage -L
device `plustek:libusb:004:002' is a Canon CanoScan LiDE25 flatbed scanner
Trace/BPT trap: 5
alex@Megabombus:~$ scanimage --device=plustek:libusb:004:002 --format=tiff --mode=color --resolution=300 -l 0 -t 0 -x 215 -y 297 > Desktop/scan.tiff
Killed: 9
alex@Megabombus:~$ open Desktop/scan.tiff
```

It works. Since the scanner is not supported by Image Capture, I can’t use it to scan from Gimp. The TWAIN-SANE-Interface and the SANE-Preference-Pane I installed appear to be useless. Oh well. And now that I have scanned the picture, I just need to reinstall TeX Live 2013, since the migration appears to have failed.

The latest SANE binaries provided on the site work, but still do not result in a functional Image Capture. – Anonymous 2014-01-23 01:06 UTC

You can use xsane from Homebrew, it’s a graphical frontend for SANE. – Anonymous 2014-03-22 20:13 UTC

Wow, xsane works! At first I was confused because I got a low-resolution image every time. Then I saw the menu item “show resolution list” and managed to switch to 300 dpi. Thanks for the pointer! AlexSchroeder 2014-03-30 21:49 UTC

# 2013-12 Book Club

When: 18 December, 19:30 – RSVP on Meetup (optional ;))
Where: Bistro Lochergut (tram 2+3 ‘Lochergut’)

Description by Amazon: When Tom Courtenay left his home in Hull to study in London, his mother Annie wrote him letters every week. In them she would observe the world on her doorstep. A world of second-hand shoes and pawnshops, where all the men worked “on Dock” and Saturday nights were spent down the Club. It was a world in which Annie often felt misplaced. Having always longed to write, the letters to her son gave Annie a creative means of escape. “It’s after tea now, your father is examining the bath, I’m awaiting Ann and outside it’s India”. Like his mother, the young Courtenay also felt he was supposed to be elsewhere. Unlike his mother, he was given the opportunity to educate himself and chase his dream. In Dear Tom: Letters From Home Courtenay intersperses recollections of his days as a student actor in the early 1960s with his mother’s engaging and enchanting correspondence. Raw but real, her prose not only paints a graphic and gritty picture of everyday drudge, it displays an inquisitive insight into a life that denied a fishwife her dreams. In a world where working-class women learnt to make do, Annie felt at odds with her artistic aspirations. “Just lately I have had the feeling that I am more than one me. It is very strange. There’s the me that goes careering off writing, thinking. Then there is the ordinary me that mocks the writing me and thinks she is silly and a boring fool”. After his mother’s untimely death, her letters became Tom Courtenay’s most treasured possessions. Dear Tom: Letters From Home is a memoir of a mother’s love that pays posthumous homage to a creative spirit stifled by circumstance. “What magic if, after all these years, people read her letters and are affected by them”, writes Courtenay. It would be impossible not to be. A beautiful book that won’t fail to touch. – Christopher Kelly

First suggested: May 2013 (Richie)
Supporter(s): Richie, Rene, Dani
2014-07-24 08:42:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2584266662597656, "perplexity": 3613.1650544688755}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997888216.78/warc/CC-MAIN-20140722025808-00124-ip-10-33-131-23.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/124468-find-area-shaded-circles.html
# Math Help - find the area shaded by the circles

1. ## find the area shaded by the circles

Find the area shared by the circles r = 2cos(theta) and r = 2sin(theta). I know the general formula for finding the area, but I don't know which is the outside one and which is the inside circle. I've tried both ways and am still not getting the right answer. So let's just say I'll try it with the integral of .5(4cos(x)^2) - .5(4sin(x)^2). I can factor out a 2, giving me cos(x)^2 - sin(x)^2. The integral of that, I believe, can be expressed as 4sin(2x). Now I figured the limits of integration were from 0 to pi/4, because those are the two places the circles intersect. So evaluating there, I would get 4 - 0 = 4. But I've been told the answer is pi/2 - 1. So where am I going wrong, because I'm way off.

2. Originally Posted by isuckatcalc
Find the area shared by the circles r = 2cos(theta) and r = 2sin(theta). I know the general formula for finding the area, but I don't know which is the outside one and which is the inside circle. I've tried both ways and am still not getting the right answer. So let's just say I'll try it with the integral of .5(4cos(x)^2) - .5(4sin(x)^2). I can factor out a 2, giving me cos(x)^2 - sin(x)^2. The integral of that, I believe, can be expressed as 4sin(2x). Now I figured the limits of integration were from 0 to pi/4, because those are the two places the circles intersect. So evaluating there, I would get 4 - 0 = 4. But I've been told the answer is pi/2 - 1. So where am I going wrong, because I'm way off.

use symmetry ...
$A = 2 \int_0^{\frac{\pi}{4}} \frac{(2\sin{t})^2}{2} \, dt$
$A = 4 \int_0^{\frac{\pi}{4}} \sin^2{t} \, dt$
$A = 4 \int_0^{\frac{\pi}{4}} \frac{1-\cos(2t)}{2} \, dt$
$A = 2 \int_0^{\frac{\pi}{4}} 1-\cos(2t) \, dt$
finish

3. OK, I get it now. I was trying to use both r = 2cos(x) and r = 2sin(x). How does one know when to use only one and when to subtract one from the other? Since there were two graphs, it seemed obvious to me that the latter formula needed to be used, but apparently it wasn't. Also, how do you know to go with r = 2sin(x) instead of r = 2cos(x)? Thanks for the help.

4. Originally Posted by isuckatcalc
OK, I get it now. I was trying to use both r = 2cos(x) and r = 2sin(x). How does one know when to use only one and when to subtract one from the other? Since there were two graphs, it seemed obvious to me that the latter formula needed to be used, but apparently it wasn't. Also, how do you know to go with r = 2sin(x) instead of r = 2cos(x)? Thanks for the help.

You have to look at the graph and the region whose area you want to find. LOOK for symmetry and take advantage of it. I could have gone with cosine ...
$A = 2 \int_{\frac{\pi}{4}}^{\frac{\pi}{2}} \frac{(2\cos{\theta})^2}{2} \, d\theta$

5. Converting to Cartesian coordinates, $x = r\cos(\theta)$ and $y = r\sin(\theta)$, so $r = 2\cos(\theta)$ gives $r^2 = 2r\cos(\theta)$ and so $x^2 + y^2 = 2x$. Then $x^2 - 2x + y^2 = 0$ and, completing the square, $x^2 - 2x + 1 + y^2 = (x-1)^2 + y^2 = 1$. That is the circle with center at (1, 0) and radius 1. It is tangent to the y-axis at the origin. Similarly, $r = 2\sin(\theta)$ becomes $r^2 = 2r\sin(\theta)$ or $x^2 + y^2 = 2y$, giving the circle with center at (0, 1) and radius 1. It is tangent to the x-axis at the origin. There is no "inside" or "outside". They overlap in the first quadrant, having $x = y$, that is $\theta = \pi/4$, as the mid line. That should lead you to integrate $r = 2\sin(\theta)$ from $\theta = 0$ to $\pi/4$ and then double to get the entire area.
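For the record, here is the finishing step that skeeter leaves to the reader; this working is my own addition, not part of the original thread:

$A = 2\left[t - \frac{\sin(2t)}{2}\right]_0^{\frac{\pi}{4}} = 2\left(\frac{\pi}{4} - \frac{1}{2}\right) = \frac{\pi}{2} - 1$

which matches the answer the original poster was told to expect.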
2015-04-18 20:33:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 19, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9274532198905945, "perplexity": 249.59392408168503}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246636104.0/warc/CC-MAIN-20150417045716-00129-ip-10-235-10-82.ec2.internal.warc.gz"}
https://eepower.com/news/epa-leed-certification-fails-to-increase-energy-efficiency/
News

# EPA: LEED Certification Fails to Increase Energy Efficiency

March 02, 2014 by Jeff Shepard

Today, LEED Exposed, a project of the Environmental Policy Alliance (EPA), released research showing that large privately-owned buildings in Washington D.C. certified under the U.S. Green Building Council's (USGBC) Leadership in Energy and Environmental Design (LEED) standards actually use more energy than uncertified buildings. Despite having the highest number of buildings in the country certified under LEED, Washington D.C. buildings are actually less energy efficient than the national average.

LEED Exposed determined energy consumption by comparing the weather-normalized source energy use intensity, or EUI (a unit of measurement that represents the energy consumed by a building relative to its size), for both buildings certified by the USGBC as "green" and those that have not gone through the USGBC's expensive permitting process. For LEED-certified buildings, the EUI was 205, compared to 199 for non-certified buildings. Ironically, USGBC's headquarters (which has achieved the highest level of LEED certification) is even worse at 236.

"This latest data release only confirms what we already knew: LEED certification is little more than a fancy plaque displayed by these 'green' buildings," said Anastasia Swearingen, research analyst for the Environmental Policy Alliance. "Previous analyses of energy use by LEED-certified buildings have consistently shown that LEED ratings have no bearing on actual energy efficiency."

These findings are significant as D.C. is one of several major localities to mandate the use of LEED in the construction of public buildings, and it was the first city to require all buildings (public and private) to disclose energy usage. An analysis by The Washington Examiner earlier this year of D.C. government buildings found that many of the District's LEED-certified buildings were the least energy-efficient of all comparable buildings.

The city's Department of Environment (DOE) recognizes the problems with using LEED standards. In a report accompanying the release of data, the DOE says concerns and questions regarding the use of LEED include "the dependence on a third-party organization, over which the government has no oversight, to set the District's green building standards," and "the perception that application costs associated with LEED are significant."

"It's troubling that to achieve the laudable goal of promoting greater energy efficiency, the District relies on the use of a third-party rating system that doesn't require actual proof of energy efficiency to earn certification," continued Swearingen. "Even more alarming is the fact that the city is collecting millions of dollars in permit fees to administer this inefficient program."

In fiscal year 2013, D.C. collected over $1.6 million in green building fees, and the District has collected over $5.2 million in fees since 2010. Despite the expense, D.C. lags behind the rest of the country in energy-efficient office buildings. The median EUI nationwide for office buildings is 148; D.C.'s is 214, or 44% more than the national median.

Swearingen concluded: "It's time for D.C. to ditch LEED and move towards a certification system that promises real improvements in energy efficiency."
2021-12-05 19:57:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30551406741142273, "perplexity": 6535.746958978186}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363216.90/warc/CC-MAIN-20211205191620-20211205221620-00462.warc.gz"}
https://www2.cms.math.ca/Events/summer22/abs/dsa
2022 CMS Summer Meeting
St. John's, June 3 - 6, 2022

Dynamical systems and applications
Org: Isam Al-Darabsah (University of Manitoba) and Yuan Yuan (Memorial University)

SKYE DORE-HALL, University of Victoria
Ramp Function Approximations of Michaelis-Menten Functions in a Model of Plant Metabolism

Adams, Ehlting, and Edwards showed that in a model of plant phenylalanine metabolism following Michaelis-Menten kinetics, there are two mechanisms by which primary metabolism can be prioritized over secondary metabolism when input is low: the Precursor Shutoff Valve (PSV) and threshold separation. Analysis of the model was made difficult by the presence of the Michaelis-Menten terms; hence, it is worth considering whether linear approximations of these terms can be used to both simplify the model and keep its qualitative behaviour intact. In this talk, we will introduce piecewise approximations of Michaelis-Menten functions called ramp functions. We will show that when the Michaelis-Menten terms are replaced by ramp functions in the model, the PSV is completely effective when it comes to the prioritization of primary metabolism under low input conditions, while threshold separation is effective when the PSV is absent, but only if the threshold constant of the secondary metabolic pathway is sufficiently larger than that of the primary pathway.

JUDE KONG, York University
Phytoplankton competition for nutrients and light in a stratified lake: a mathematical model connecting epilimnion & hypolimnion

In this talk, I will present several mathematical models describing the vertical distribution of phytoplankton in the water column. In particular, I will introduce a new mathematical model connecting the epilimnion and the hypolimnion to describe the growth of phytoplankton limited by nutrients and light in a stratified lake. Stratification separates the lake, along a horizontal plane called the thermocline, into two zones: epilimnion and hypolimnion. The epilimnion is the upper zone, which is warm (lighter) and well mixed, and the hypolimnion is the bottom colder zone, which is usually dark and relatively undisturbed. The growth of phytoplankton in the water column depends on two essential resources: nutrients and light. Critical thresholds for the settling speed of phytoplankton cells in the thermocline and the loss rate of phytoplankton are established, which determine the survival or extirpation of phytoplankton in the epilimnion and hypolimnion. This is joint work with Jimin Zhang (Heilongjiang University), Junping Shi (William & Mary) and Hao Wang (University of Alberta).

MICHAEL Y LI, University of Alberta
Accurate Long-Term Projections of COVID-19 Epidemics by Incorporating Human Behaviours

Many lessons can be learned from the COVID-19 pandemic to improve epidemic modeling and produce accurate long-term model projections of epidemics. In the talk, I will show why the final-size formula can explain the over-projections made by many epidemic models at the beginning of the pandemic, how our understanding of real epidemics can be improved by examining all important drivers that collectively determine when an epidemic peaks and terminates, and how human behaviours can be incorporated into the standard epidemic models to produce accurate and reliable long-term model projections.

JUNLING MA, University of Victoria
An SIR Contact Tracing Model for Randomly Mixed Populations

Contact tracing is an important intervention measure to control infectious diseases. We present a new approach that borrows the edge dynamics idea from network models to track contacts included in a compartmental SIR model for an epidemic spreading in a randomly mixed population. Unlike network models, our approach does not require statistical information about the contact network, data that are usually not readily available. The model resulting from this new approach allows us to study the effect of contact tracing and isolation of diagnosed patients on the control reproduction number and the number of infected individuals. We estimate the effects of tracing coverage and capacity on the effectiveness of contact tracing. Our approach can be extended to more realistic models that incorporate latent and asymptomatic compartments.

FELICIA MAGPANTAY, Queen's University
A quantification of transient dynamics

The stability of equilibria and the asymptotic behavior of trajectories are often the primary focus of mathematical modeling. However, many interesting phenomena that we would like to model, such as the "honeymoon period" of a disease after the onset of mass vaccination programs, are transient dynamics. Honeymoon periods can last for decades and can be important public health considerations. In many fields of science, especially in ecology, there is growing interest in a systematic study of transient dynamics. In this work we attempt to provide a technical definition of "long transient dynamics" such as the honeymoon period and explain how these behaviors arise in systems of ordinary differential equations. We define a transient center, a point in state space that causes long transient behaviors, and derive some of its properties. In the end, we define reachable transient centers, which are transient centers that can be reached from initializations that do not need to be near the transient center.

YOUNGMIN PARK, University of Manitoba
Models of Vimentin Organization Under Actin Retrograde Flow

Intermediate filaments are elements of the cytoskeleton whose organization determines their functions in cells. In this study, we observe and model the movement of GFP-labeled vimentin fibers after preventing microtubule polymerization with nocodazole, thus inhibiting microtubule-related transport driven by molecular motors. Hence, in our data, vimentin is only subject to actin-driven transport. To model this phenomenon, we assume that vimentin may exist in two states, mobile and immobile, and may switch between the states at unknown rates. In addition, we assume that mobile vimentin may advect from the cell plasma membrane to the nuclear envelope because of actin retrograde flow. We introduce several biologically realistic models using these assumptions. For each model, we use dual annealing to find the best parameter sets, resulting in a solution that most closely matches the experimental data. The best candidate model is then identified using the Akaike Information Criterion. Using the best candidate model, we reconstruct the spatially dependent profile of the actin retrograde flow, and discuss the biological implications of our results. Work with S. Etienne-Manneville (Institut Pasteur, Paris), C. Leduc (IJM, Paris) and S. Portet (University of Manitoba).

STACEY SMITH?, The University of Ottawa
Is a COVID-19 vaccine likely to make things worse?

In order to limit the disease burden and economic costs associated with the COVID-19 pandemic, it is important to understand how effective and widely distributed a vaccine must be in order to have a beneficial impact on public health. To evaluate the potential effect of a vaccine, we developed risk equations for the daily risk of COVID-19 infection both currently and after a vaccine becomes available. Our risk equations account for the basic transmission probability of COVID-19 ($\beta$) and the lowered risk due to various protection options: physical distancing; face coverings such as masks, goggles, face shields or other medical equipment; handwashing; and vaccination. We found that the outcome depends significantly on the degree of vaccine uptake: if uptake is higher than 80%, then the daily risk can be cut by 50% or more. However, if less than 40% of people get vaccinated and other protection options are abandoned --- as may well happen in the wake of a COVID-19 vaccine --- then introducing even an excellent vaccine will produce a worse outcome than our current situation. It is thus critical that effective education strategies are employed in tandem with vaccine rollout.

JONATHAN TOT, Dalhousie University
On the Equilibria and Bifurcations of a Rotating Double Pendulum

The double pendulum, a simple system of classical mechanics, is widely studied as an example of, and testbed for, chaotic dynamics. In 2016, Maiti et al. [Phys.Lett.A 380 p.408-412] studied a generalization of the simple double pendulum with equal point-masses at equal lengths, to a rotating double pendulum, fixed to a coordinate system uniformly rotating about the vertical. In this work, we have studied a considerable generalization of the double pendulum, constructed from physical pendula, and ask what equilibrium configurations exist for the system across a comparatively large parameter space, as well as what bifurcations occur in those equilibria. Elimination algorithms are employed to reduce systems of polynomial equations, which allows equilibria to be visualized and also demonstrates which models within the parameter space exhibit bifurcations. We find the DixonEDF algorithm for the Dixon resultant, written in the computer algebra system Fermat, to be capable of completing the computation for the challenging system of equations that represents bifurcation, while attempts with other algorithms were terminated after several hours.

CUIPING WANG, Memorial University
Dynamic Analysis of Cancer-Immune System with Therapy and Delay

In this talk, we propose a two-dimensional differential system with delay for the human immunological system, describing the interaction of effector cells and cancer cells. We investigate the existence of equilibria in detail with respect to the system parameters, especially the action delay and the therapy rate, and discuss the stability of these equilibrium points, theoretically and numerically.

YAHUI WANG, Lanzhou University, Memorial University of Newfoundland
Propagation Direction of Traveling Waves to a Competitive Integrodifference System with Bistable Nonlinearity

Traveling wave propagation is a significant phenomenon observed in population biology. Due to the occurrence of nonlocal effects in integrodifference systems, a deep understanding of the wavefront in the propagation direction is challenging. In this paper, we study the sign of the wave speed for bistable traveling waves to a two-species competitive integrodifference system that biologically models the dynamics of two species in competition for a common resource. By a proper choice of the kernel functions, we transfer our model into a coupled functional differential system and shed new light on how to determine the sign of the wave speed. Sufficient conditions with symmetry are obtained on the propagation directions of the wavefronts. This symmetry is further verified in the final analysis, and numerical simulations are provided to illustrate our theoretical results. This talk is based on joint work with Drs. Guo Lin and Chunhua Ou.

GAIL S. K. WOLKOWICZ, McMaster University
The Augmented Phase-Plane for Analyzing Discrete Planar Models

After showing why phase plane analysis has not been as useful for analyzing discrete planar models as it is for planar ordinary differential equations, it will be shown how to augment the phase plane by not only considering the direction field and the nullclines, but by also including curves that we call the next iterate root-curves associated with the nullclines. These root-curves determine on which side of the associated nullcline the next iterate lies. We demonstrate this method on, e.g., a predator-prey model and a well-known Lotka-Volterra type discrete model. This provides an elementary method to obtain some global properties of the dynamics. This is joint work with Sabrina Streipert.

PEI YUAN, York University
Dynamical modelling and complex dynamics for the control of pest leafhopper with generalist predatory mite in tea plantations

The tea green leafhopper Empoasca onukii Matsuda (Hemiptera: Cicadellidae) is one of the most important insect pests threatening tea production. Both nymphs and adults of E. onukii suck the tea buds, leaves, and shoots and make wounds in tea plants, which finally leads to symptoms ranging from blade curling, bronzing, shriveling and necrosis to stand loss, and even severe hopperburn, affecting the quality and yield of the tea. Pesticides were commonly applied, which caused undesirable pesticide residues on brewed tea. A potential biological control agent, the mite Anystis baccarum (L.), is a significant predator of the leafhopper in various agricultural systems. Based on field experiments and data, we propose a predator-prey model with a generalist predator and aim to understand the dynamics of the leafhopper pest E. onukii and the predatory mite A. baccarum for the purpose of finding a plausible control mechanism. In this talk, I will present the bifurcations and complex dynamics of the model, which include saddle-node bifurcation, Hopf bifurcation, Bogdanov-Takens bifurcation, and even bifurcation of nilpotent singularities of codimension 3. In the end, I will present the bifurcation diagrams to explain and interpret the complex dynamics of the model. This is joint work with Lilin Chen, Mingsheng You and Huaiping Zhu.

KEXUE ZHANG, Queen's University
Hybrid Event-Triggered Stabilization of Time-Delay Systems

Event-triggered control strategies allow for updating the control inputs when an event, triggered by a certain event-triggering rule, occurs. The unpredictable sequence of event times is determined explicitly by the event-triggering rule. The event-triggering mechanism has the advantage of reducing the number of control input updates while still guaranteeing the underlying desired performance. Due to the advantages of event-triggered control in efficiency improvements and the significance of time-delay systems in modeling real-world phenomena, the study of event-triggered control strategies for time-delay systems is of great importance. There are two main challenges in this study. First, the control algorithms for delay-free systems cannot be applied to time-delay systems directly. Another challenge, which is also the main difficulty of this research, is to exclude Zeno behavior from the closed-loop control systems. In this talk, we will introduce an event-triggered control algorithm that is based on the Lyapunov-Razumikhin technique. However, Zeno behavior can easily be exhibited in a class of linear time-delay systems. Therefore, a hybrid event-triggered control and impulsive control mechanism will be proposed to rule out Zeno behavior. This is joint work with Bahman Gharesifard.
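Several of the epidemic talks above (Li, Ma, Smith?) build on the standard SIR compartmental model. As a reference point only, here is a minimal sketch of its dynamics with a simple Euler step; the parameter values are illustrative and not taken from any of the talks.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Standard SIR model: dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I,
# dR/dt = gamma*I, with S, I, R as population fractions.
my ($beta, $gamma) = (0.3, 0.1);    # illustrative rates (R0 = 3)
my ($s, $i, $r)    = (0.999, 0.001, 0);
my $dt = 0.1;                       # Euler step size, in days

for my $step (0 .. 1600) {
    printf "day %5.1f  S=%.3f I=%.3f R=%.3f\n", $step * $dt, $s, $i, $r
        if $step % 100 == 0;
    my $new = $beta * $s * $i;      # new infections per unit time
    my $rec = $gamma * $i;          # recoveries per unit time
    $s -= $dt * $new;
    $i += $dt * ($new - $rec);
    $r += $dt * $rec;
}
```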
2022-06-29 16:01:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43286290764808655, "perplexity": 1539.839192631105}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103640328.37/warc/CC-MAIN-20220629150145-20220629180145-00600.warc.gz"}
https://exampur.com/short-quiz/12613/
# BIHAR CGL MATHS QUIZ

Attempt now to get your rank among 37 students!

## Question 1:

Select the most appropriate option to solve the equation.

$105 \div 15 \times\{(38-8) \div 5\} \div 3=$

## Question 2:

$30 \div(20-15 \div 3 \times 8)=?$

## Question 3:

What will come in the place of the question mark ‘?’ in the following question?

60% of 90 + 12.5% × 64 – 39 + 16 = ?

## Question 4:

What will come in the place of the question mark ‘?’ in the following question?

$\sqrt{(89+32)}+5^{3}-(49 \times 2)=?$

## Question 5:

What will come in the place of the question mark ‘?’ in the following question?

$40 \%$ of $60+16.66 \% \times 54-20+13=?$

## Question 6:

Simplify the following expression.

$\frac{2}{5}-\left[1 \frac{1}{3}+\left(1 \frac{1}{4}-2 \frac{1}{3}\right)\right] \div 2 \frac{2}{3} \times \frac{3}{5}+\frac{2}{5}$

## Question 7:

The value of $(5 \div 8)$ of $(4 \div 5)$ of $25\left(15^2-13^2\right)$ is:

## Question 8:

The value of $\{5-5 \div(10-12) \times 8+9\} \times 3+5+5 \times 5 \div 5$ of 5 is:

## Question 9:

The value of $3 \times 7+5-6 \div 3-9+45 \div 5 \times 4-45$ is

## Question 10:

If $\left(\frac{2}{7}\right)^{-3} \times\left(\frac{4}{49}\right)^6=\left(\frac{2}{7}\right)^{2 m-1}$, then what is the value of $m$ ?
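For reference, Question 1 can be checked with the usual BODMAS order of operations (this worked line is an added example, not part of the original quiz): brackets first, then division and multiplication from left to right:

$105 \div 15 \times\{(38-8) \div 5\} \div 3 = 7 \times 6 \div 3 = 42 \div 3 = 14$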
2023-03-24 13:31:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5822113156318665, "perplexity": 1640.2355745263078}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945282.33/warc/CC-MAIN-20230324113500-20230324143500-00259.warc.gz"}
http://media.aau.dk/null_space_pursuits/2012/07/
July 2012 Archives

From 1976, "Daddy Cool" by Boney M. is a certifiable classic Disco tune. Below are Boney M. singing and dancing "Daddy Cool." And I think Bobby Farrell's dancing might be a perfect reflection of cocaine use. To me this is nearly perfect Disco: a square four on the floor with that typical open hi-hat between beats (this time with a flange!), simple yet memorable figures for strings and saxes, bass, female voices, and don't forget the sexual content! The only thing really wrong with it is that the track is no longer than 4 minutes. (And I would like a funkier bass-line.)

Now consider this 1993 remake by a pop group in Hungary. Although note for note they are just about the same, to me from the get-go the latter is a sequenced and sterile version missing the essential hi-hats of the original. But is it so far away that I would not classify it as Disco?

Lovely Moments in a Few Music Recordings | 1 Comment

Here is Faron Young in 1956 covering Don Gibson's "Sweet Dreams" for him and his eyebrows to get a ticket for the Checkerboard Showboat to continue "following those girls." Now, here are the Pioneers covering the song 12 years later in a recording released in 1968 (is that a ukulele I hear?). When I listen to this version, my own eyebrows rise as if to reach across space and time some 44 years to bring the vocals into key. It is precisely because of this, in our autotune-saturated world, that I really like this recording.

Here is Jim Reeves in 1959 singing "He'll Have To Go", this time without the pressure of enduring the Checkerboard Showboat. Now, here is David Isaacs covering the song 10 years later in a recording released in 1969. Those back-up singers, with their bizarre harmonies, intentional or not, are precisely why I can't stand to listen to this recording at a low volume. My wife rolls her eyes when I play it loud, just as I want to always hear it.

Now, from his 1974 masterful record "Rhapsody in White," here is "Love's Theme" by Barry White performed by The Love Unlimited Orchestra. Aside from the rich orchestration combining two rhythm guitars, roiling piano, lush strings, sweeping harp, and the drums and bass I could listen to for hours alone, I love this particular recording for a few reasons. First, popular music these days that combines classical elements like strings is essentially boring. I am looking at you, The Verve and Guns 'N Roses. Second, around 1m45, when the horns come in at the bridge, there is a wonderful maybe-flub by one of the players. Then from 3m07 to 3m11 the piano loses it, before nearly everything is taken away by a quite artificial but delicious rapid fade-out at 3m16, leaving naked the rhythm guitars shivering alone with the bass and drums.

Disco in Bulgaria

Disco --- the music, the dress, the life style --- was a phenomenon that has a clear beginning, peak, and denouement, at least in the USA and the UK. Contrary to the hundreds of "Now That Is What I Call Disco" compilations available, Disco made inroads into many other places in the world --- places other than Western Europe and Scandinavia (ABBA). I have been communicating with a colleague (NN) who is an expert in Bulgarian popular music, and he has graciously given me permission to quote our conversation. I indent his notes below.

Terran Lane provides an excellent discourse on why he is leaving academia in the USA for a position at Google.
I immigrated to Europe from the USA right after finishing my PhD in 2009, and will remain here for many reasons, some of which Lane states quite well. In particular, I continue to be repelled by the hyper-religious anti-science climate of public and political discourse in the USA, as well as the bold-faced contradiction of the USA being a "moral authority" while bowing to the "authority of money." I probably penned twice as many words on those subjects during my PhD as there are in my dissertation; but since coming to Europe, I have not felt the need to fight such things. Religion, science, and intellectualism are treated quite differently in Europe, which in no small way has to do with the blood it has shed over innumerable arguments about woo. To be sure, Europe has its problems too. The only time I have really missed the USA was when I experienced a particularly overt display of racism. Many parts of Europe have a ways to go to reach the multicultural experience of the USA --- which I find ironic since distances here are so small. Anyhow, I commend Lane on writing such a nice piece.

It is time to remix en masse

This is one of the best mash-ups I have seen. We need so much more of this. If you are wondering, that mad piano-playing dancing man is Neil Sedaka singing "Bad Blood". The second singer is Teddy Pendergrass singing "Close the Door". The man in the beginning is Bob McGrath from Sesame Street. Seeing him takes me back to my childhood, when I was an avid watcher. :)

Plagiarism

It started when I read the first sentence of the introduction of D. P. L. and K. Surresh, "An optimized feature set for music genre classification based on Support Vector Machine", in Proc. Recent Advances in Intelligent Computational Systems, Sep. 2011. They write:

Music is now so readily accessible in digital form that personal collections can easily exceed the practical limits on the time we have to listen to them: ten thousand music tracks on a personal music device have a total duration of approximately 30 days of continuous audio.

Then I googled "Music is now so readily accessible in digital form", and look at this! The top hit is from an article in press: Angelina Tzacheva, Dirk Schlingmann, Keith Bell, "Automatic Detection of Emotions with Music Files", Int. J. Social Network Mining, in press 2012. I can't read the entire article; but the first two sentences of the abstract are:

The amount of music files available on the Internet is constantly growing, as well as the access to recordings. Music is now so readily accessible in digital form that personal collections can easily exceed the practical limits of the time we have to listen to them.

The source of this text, however, is in the third search result: M. Casey, R. Veltkamp, M. Goto, M. Leman, C. Rhodes and M. Slaney, "Content-based Music Information Retrieval: Current Directions and Future Challenges", Proc. IEEE, vol. 96, no. 4, pp. 668-696, Apr. 2008. The first sentence of their introduction is an exact match to the text in L. and Suresh:

Music is now so readily accessible in digital form that personal collections can easily exceed the practical limits on the time we have to listen to them: ten thousand music tracks on a personal music device have a total duration of approximately 30 days of continuous audio.

I don't care to search for other examples of plagiarism in this publication, or that of Tzacheva et al. Even finding one lifted sentence in a work tells me how much time I should spend with it.
Better for me to just write a blog post about it, and then send a complaint to IEEE.

Maybe the fourth time will be the charm

If you are coming from Nuit Blanche, or even if you aren't, I want to make clear that my aims with this post are not to complain about peer review, or to claim I have been wronged, or to poke at what I really believe is good work from nine anonymous and competent reviewers. Peer review is an awesome privilege; and I try to take it as seriously as I can when I review. With the ICASSP deadline more than a few months away, I have time to put more thought into the next revision, and to address all the comments that have been raised. Just to be clear on my overall intentions, I use my blog as a hypertext record of my research and ideas, a public account to the Danish tax payer, and a personal experiment in presenting in near real-time how my research unfolds. (When I collaborate with others on work, though, I do not discuss it here unless we all agree it is fine to do so... which is why it has been quiet here for a while.) Research is rough, and yet so much fun. It is not as clean as the final journal paper appears --- which took me time to appreciate during my PhD. What follows is a synopsis of the submission and review process of a particular work of mine I have been trying to publish for over a year. Since I will submit it again, I am reviewing all of the helpful comments by the nine reviewers so that I can make sure the fourth time is the charm!

First, on March 31, 2011, I submitted it to IEEE Signal Processing Letters. My submission was a summary of the outcomes of several experiments I performed, in which I measure signal recovery performance by eight different algorithms from compressive measurements of sparse vectors distributed in seven different ways. I only considered noiseless measurements, and all my experiments were run with an ambient dimension of 400. I tested a range of problem sparsities and indeterminacies, and looked at transitions from high to low probability of recovery. The "big" result, I assumed, is that things change dramatically based on how the sparse vector is distributed. All of a sudden, my world was changed when I saw error-constrained $$\ell_1$$ minimization is sometimes not the best. It can be significantly beaten by something as simple as a greedy approach. (I also proposed an ensemble approach, where the best solution is selected from those produced by several algorithms. I wanted to see how much better all the results could be.) On June 4, 2011, my submission was rejected. In general, the reviews were good.

Contest to Classify Insect Sounds

This looks like a really rewarding thing to solve, but after listening to the sounds myself while viewing the labels, I am not sure it is so solvable with audio features alone. Still, I might try a little something to see what happens.

Bob L. Sturm, Associate Professor
Audio Analysis Lab
Aalborg University Copenhagen
A.C. Meyers Vænge 15
DK-2450 Copenhagen SV, Denmark
Email: bst_at_create.aau.dk
2017-05-28 14:49:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18310509622097015, "perplexity": 2065.641452082444}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463609837.56/warc/CC-MAIN-20170528141209-20170528161209-00369.warc.gz"}
https://moultano.wordpress.com/2014/08/04/a-skill-ranking-system-for-natural-selection-2/
# A Skill Ranking System for Natural Selection 2

Natural Selection 2 is my favorite game, without compare. I really can’t say enough good things about it; go watch the last big tournament if you want a taste. It combines everything that’s great about the FPS and RTS genres, and requires very high levels of FPS skill, RTS tactics, and team coordination to play well. It’s the only video game I’ve played where a person’s leadership skills can mean the difference between victory and defeat. When someone leads their team to victory in NS2, it doesn’t mean they were the one with the most kills. It’s a very rewarding social experience, with all the highs and lows of a team sport, but you get to shoot aliens.

It is, however, a difficult game to learn, and an even more difficult game to play well. Because of this, games on public servers often have unbalanced teams, leading to dissatisfying games for both sides. To help fix this problem, I designed a player ranking system that was implemented in the most recent patch, Build 267. It’s based on skill systems like Elo but tweaked for the unique properties of NS2.

# Overview

A useful skill system should be able to predict the outcome of a game using the skills of the players on each team. Rather than trying to figure out which other statistics about a player (kill/death etc.) indicate that a player is good, we instead try to infer the skill levels just from our original objective: predicting the outcome of games. Designing a skill system like this is different from many other statistics problems, because the world changes in response to the system. It’s not enough to predict the outcome of games; you have to be sure that you can still predict the outcome of games even when people know how your system works. On top of that, you have to ensure that your system incentivizes behaviors that are good for the game.

# Notation

• $p$ is the probability of a victory by team 1.
• $G$ is the outcome of the game, 1 if team 1 wins, 0 if team 1 loses.
• $s_i$ is the skill of the $i$th player.
• $t_i$ is the time spent in the round, an exponential weighting of each second so that being in the round at the beginning counts more than being in the round at the end. Integrals over exponents are easy, so you can implement this as $t_i = a^{-\text{entrance time}} - a^{-\text{exit time}}$. Set $a$ to something like $2^{2/r}$ where $r$ is the median round length. This means that playing until the middle of an average round counts more than playing from the middle to the end, no matter how long the round takes.
• $T_i$ is an indicator variable for the team of the $i$th player. Its value is 1 if the $i$th player is on team 1, -1 if the $i$th player is on team 2.

# The Basic Model

To compute the skill values, the first task is to predict the probability of team 1 winning the round as a function of those skill values.

$\displaystyle \log \left(\frac{p}{1-p}\right) = \frac{\sum_i T_i t_i s_i}{\sum_i t_i}$

$\displaystyle p = \frac{1}{1 + e^{-\sum_i T_i t_i s_i / \sum_i t_i}}$

The function $\log\left(\frac{p}{1-p}\right)$ maps the range $(0,1)$ to the range $(-\infty,\infty)$, which is how we can relate a probability to a sum of different factors. (It’s a convenient function for this purpose because it’s deeply related to how conditionally independent components combine into a probability. See Naive Bayes and Logistic Regression for more info.) We’d like to come up with the values for $s_i$ that cause the model to predict the outcome correctly for as many games as possible.
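To make the prediction step concrete, here is a minimal C++ sketch of the formula above. The Player record and predictWin name are illustrative only (this is not the code shipped in Build 267), and it assumes at least one player with positive weighted time:

#include <cmath>
#include <vector>

// Illustrative types for the model above; not the game's actual code.
struct Player {
    double skill;  // s_i
    double time;   // t_i, exponentially weighted time spent in the round
    int team;      // T_i: +1 if on team 1, -1 if on team 2
};

// Returns p, the predicted probability that team 1 wins the round.
double predictWin(const std::vector<Player>& players) {
    double num = 0.0, den = 0.0;
    for (const Player& pl : players) {
        num += pl.team * pl.time * pl.skill;  // sum_i T_i t_i s_i
        den += pl.time;                       // sum_i t_i
    }
    double z = num / den;                     // log-odds of a team-1 win
    return 1.0 / (1.0 + std::exp(-z));        // logistic link
}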
I’m skipping a bunch of steps where we maximize the log likelihood of the data. We’ll be optimizing the model using stochastic gradient descent (a pdf for further reading). Basically, whenever the model is wrong, we figure out the direction in which it was wrong, and we change the model to move it in the opposite direction.

$\displaystyle s_i\leftarrow s_i + T_i(G-p)\frac{t_i}{\sum t_i}$

We predict the outcome of the game based on the players’ skill and how long they each played in the game. After the game, we update each player’s skill by the product of the fraction of the game they played and the difference between our prediction and what actually happened.

# Properties of the Basic Model

1. If one team smashes another, but the teams were stacked to make that outcome almost certain, nobody’s skill level changes at the end of the round. Only unexpected victories change skill levels.
2. Nothing you do during the round other than winning the game has any effect on your skill level. This means that there’s no incentive to rack up kills at the end of the game rather than finishing it, or to play skulk instead of gorge, or to go offense rather than build.
3. The effect on your score is determined by the time you spent playing the round rather than your points, with the beginning of the game weighted much higher than the end. This means that it doesn’t harm you to join a game that is already lost, because you’ll have played for a very small fraction of the weighted time spent in the game.
4. If the two teams have a different number of players, this is compensated for automatically by normalizing by the total time across both teams rather than just within a team, and the model predicts that the team with more players will win.
5. Larger games contribute less to each player’s skill level, because the presumption is that each player has a smaller effect on the outcome of the game.
6. If you are a highly skilled player on a low-skill server, the best thing you can do for your skill ranking is to join the underdog team and lead them to victory.

# Accounting for imbalance in races, maps, game sizes.

The basic model assumes that the only factors that contribute to the outcome of the game are the skills of the players. Given the overall win rate of each race, this is clearly not true. To fix the model to account for this, all we have to change is our original formula for $p$.

$\displaystyle \log \left(\frac{p}{1-p}\right) = \frac{\sum_i T_i t_i s_i}{\sum_i t_i} + F(\text{race})$

We can determine the value of $F$ for team 1’s race using the historical records for the win rate of that race. We then set $F$ to $\log\left(\frac{\text{race win rate}}{1 - \text{race win rate}}\right)$. This ensures that when teams are even, our prediction matches the historical probability of that race winning. This needn’t be merely a function of the race, however. It could also be a function of the map, the game size, or any other relevant feature that is independent of the players. All that is necessary to set its value is to measure the historical win rate for the team in that set of circumstances (for instance, aliens on ns2_summit with game size > 16), and put it through the logit function as above.

$\displaystyle \log \left(\frac{p}{1-p}\right) = \frac{\sum_i T_i t_i s_i}{\sum_i t_i} +F(\text{race}, \text{map},\text{game size}, ...)$

# Special treatment for commanders.
The basic model assumes that commanders are just like any other player, and that they have the same contribution to the outcome of the game as any other player. This isn’t necessarily a bad assumption; I’ve seen many games where an unusually vocal and motivational player other than the commander was able to call plays and lead a team to victory. The much worse assumption, however, is that the same skill sets apply to being a commander and a player. Players that can assassinate fades and dodge skulks might be useless in the command chair, so it doesn’t make much sense to use the same skill values for both. To fix this, we give each player a separate skill level that indicates their skill at commanding. To distinguish it from the other skill level, we’ll call it $\chi$. To indicate the time spent commanding I’ll use $c$ the same way (and using the same formula) as for $t$ above.

$\displaystyle \log \left(\frac{p}{1-p}\right) = \frac{\sum_i T_i t_i s_i}{\sum_i t_i} +\frac{\sum_i T_i c_i \chi_i}{\sum_i c_i} + F(\text{race}, \text{map},\text{game size},...)$

This makes a few questionable assumptions for simplicity. The model will still do useful things even if these assumptions are false, but they do indicate areas where the model won’t perform well.

1. The magnitude of the impact of the commander does not depend on the size of the game.
2. Playing marine commander is equivalent to playing alien commander.
3. A good commander with a bad team vs. a bad commander with a good team is expected to be an even match. I suspect this isn’t true, because there are few ways that a commander can influence the outcome of the game independently of their team, such that a bad team probably critically handicaps a good commander, and as such it might make sense to use quadratic terms instead.

# Faster convergence

Rather than restricting ourselves to updating the skill levels once on each game, we can optimize them iteratively using the whole history of games, which will cause them to converge much faster as we acquire more data. To do this, however, it is necessary to put a prior on the skill levels of each player so that they are guaranteed to converge even when there is little data for a player. To do this, include a Gamma distribution in the model for each player’s skill. The gradient of the log likelihood of the Gamma distribution is $\frac{k-1}{s_i} - \frac{1}{\theta}$. This makes the update rule for the player as follows:

$\displaystyle s_i\leftarrow s_i + \frac{k-1}{s_i} - \frac{1}{\theta} + \sum_{j \in \text{games}}T_{ij}(G_j-p_j)\frac{t_{ij}}{\sum t_{ij}}$

There are two differences between this formula and the update rule above. The first is that rather than just updating the score one game at a time as the games come in, we store the whole history of games the player has played, and iteratively update the player’s skill. Secondly, on each iteration, we add $(k-1)/s$ to the player’s skill, and subtract $1/\theta$ from the player’s skill. This expresses our belief that until we know more about a player, their skill probably isn’t too high or too low. The $k$ and $\theta$ parameters control the distribution of skill levels. As $k$ increases, we become more skeptical of very low skill values. As $\theta$ decreases, we become more skeptical of very high skill values. The mean skill value will be $k\theta$. To run this algorithm, we alternate between updating our prediction for the outcome of each game and updating each player’s skill level based on the new predictions for all of the games they’ve played.
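A minimal C++ sketch of this batch update, assuming hypothetical Appearance and Game records; for simplicity it applies one step size alpha to both the prior and data terms (the post leaves the step size implicit here), and it requires skills to stay positive so the Gamma prior term is well defined. Again, this is a reconstruction of the formulas above, not the shipped implementation:

#include <cmath>
#include <vector>

// Illustrative records; not the game's actual data model.
struct Appearance {  // one player's participation in one game
    int player;      // index into the skills vector
    double t;        // t_ij, weighted time in that game
    int team;        // T_ij: +1 or -1
};

struct Game {
    std::vector<Appearance> apps;
    double totalTime;  // sum of t_ij over both teams
    int outcome;       // G_j: 1 if team 1 won, 0 otherwise
};

// One pass of the iterative update with a Gamma(k, theta) prior.
void updateSkills(std::vector<double>& skills, const std::vector<Game>& games,
                  double k, double theta, double alpha) {
    // Prior term: d/ds log Gamma(s; k, theta) = (k - 1)/s - 1/theta.
    // Skills must be positive here (the Gamma prior has support s > 0).
    for (double& s : skills)
        s += alpha * ((k - 1.0) / s - 1.0 / theta);

    // Data term: T_ij * (G_j - p_j) * t_ij / sum t_ij, for every game played.
    for (const Game& g : games) {
        double z = 0.0;
        for (const Appearance& a : g.apps)
            z += a.team * a.t * skills[a.player];
        double p = 1.0 / (1.0 + std::exp(-z / g.totalTime));  // predicted win prob
        for (const Appearance& a : g.apps)
            skills[a.player] += alpha * a.team * (g.outcome - p) * (a.t / g.totalTime);
    }
}

Alternating a few such passes over the stored game history is what the post means by updating the predictions and the skills iteratively.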
# Numerical Details

Gradient descent tells you which direction to go, but it doesn’t tell you how far. We have a lot of freedom in choosing how much to update each player’s score after each round to make it satisfying for the player. More precisely, we can add a coefficient $\alpha$ that describes the step size for the updates.

$\displaystyle s_i\leftarrow s_i + \alpha T_i(G-p)\frac{t_i}{\sum t_i}$

To pick this, it’s useful to get a sense of what skill values actually mean. If the skill values of two players differ by 2 points, then this causes our model to predict that the better player will beat the worse player about 90% of the time ($1 / (1 + e^{-2}) \approx 0.88$). Or rather, that a team composed of clones of the better player will beat a team composed of clones of the worse player about 90% of the time. So intuitively, how many fair games must a player win consecutively before we’re comfortable predicting that they will beat the people they are playing against 90% of the time? Suppose the player is playing on 20-player servers, and they are playing fair games. That means the absolute value of the update above will be 0.5/20, or 0.025 each round. With $\alpha$ set to 1, it would take the player 80 consecutive wins on fair games in order to be estimated to be able to beat their old self 90% of the time. This seems excessive! So we should set $\alpha$ to something larger, maybe 8?

This gives us a feel for the difference between the players’ scores, and how quickly we should update them, but we still don’t have a sense for the absolute magnitude of the player scores. How far should they all be from 0? If the teams have the same number of players, then all that matters is the difference between the scores. We can add any constant, and it cancels out. However, it does affect the model when the teams have different numbers of players. The team with more players will be expected to win. [The post includes a plot here showing the relationship between the average value of the player skill, x, the number of players in the game, n, and the prediction for the outcome of a game that is otherwise fair.] So if the skill values are close to 0, then we predict the team with more players to have no advantage; they win 50% of the time. If the skill values are all around 40, then we predict the team with more players to win over 90% of the time in a 13-player game. This makes it important that we don’t set the starting absolute value of the skill levels to be too high or too low. To initialize the scores, a reasonable choice is to set them to some multiple of the log of the player’s playtime.

# Aside: Why the logit and not erf?

This formulation of a skill system differs from many others in that it uses the logistic distribution’s cdf to model win probabilities rather than the Gaussian distribution’s cdf. There are two reasons I chose this.

1. NS2 allows for frequent upsets late in the game if one team is particularly cunning and the other is inattentive. This makes a distribution with fatter tails more appropriate.
2. Using the logit allows us to correct for various factors that cause imbalance in the game without any complicated optimization. Due to the independence assumption, we can just measure win rates empirically in different circumstances, and we don’t have to optimize these coefficients jointly with the player skill.

# How do we balance the teams in a real game?

NS2 had a feature where players could vote to “Force Even Teams”, but it was generally unpopular.
The skill values weren’t very accurate, and the strategy used to assign players had some undesirable properties. It used a simple heuristic to assign the players, since the general problem is NP-complete. Because it reassigned all players, they would find themselves playing aliens when they wanted to play marines, or vice versa, and would switch back right after the vote. To fix this, we designed an algorithm that minimizes the number of swaps that occur to reasonably balance the teams.

1. Optimize the team assignments of the players who have not yet joined a team. You can do this using whatever method you wish.
2. Consider each pair of players. Find the pair where swapping them minimizes the absolute value of the skill difference. Iterate until there is no pair that reduces the absolute value of the skill difference. (A code sketch of this swap step appears after the comment below.)

While this greedy heuristic of swapping only the best pairs one at a time may be a poor solution for coming up with the optimal balanced teams, it is great for keeping as many players as possible happy with the team assignments. Because the swap chosen is the one that minimizes the objective function the most, we can get quickly to a local optimum by swapping very few players.

### One comment

1. Gibberish

You have made a (good) assumption that there are at least two skills that a player may have (FPS vs. commander). The problem is I don’t think two skills are sufficient to model player ability. For new players there is a clear bias towards marines (most new players can shoot better than they can wall jump). Even with skilled players there can be a clear bias (although for some players it is for aliens). Additionally there are combinations; for example, I played on a server where 2 players were very vocal about going “lerk pack” (flying round together) as soon as they had res. By itself it may not have been too difficult to counter, but it acted as a rallying cry for the rest of the team to follow them in, hence with both those players on the same team there was a significant chance of a win.

Your point about only using winning/losing to determine the team balance was a good one. Perhaps there is a simple solution: each time a new round starts, evenly divide up the winning team between aliens and marines. I realize that that will cause contention (players that want to play a particular side), therefore it might be worth having a 30-second team-pick period at the start of each round where everyone can specify: Marines, Aliens or Random. Critically, the vote is not disclosed to other players until the 30-second period is up. If the winning team is “jumbled” okay, everyone can have their ideal pick. Otherwise the server will need to force-assign some players at random. I realize even this proposal can be “gamed”, but if you allow players to pick their side of choice it’s near impossible to prevent a small group (2-3) of very highly skilled players from stacking a server.
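Here is the sketch referenced in the list above: a minimal C++ illustration of the greedy pair-swap step, assuming plain per-team skill arrays (team1 and team2 are illustrative names). It is a reconstruction of the described heuristic, not the implementation that shipped in Build 267. Each accepted swap strictly decreases the absolute skill difference, so the loop terminates:

#include <cmath>
#include <utility>
#include <vector>

// Greedily swap the single best pair between teams until no swap
// reduces the absolute difference in total skill.
void balanceBySwaps(std::vector<double>& team1, std::vector<double>& team2) {
    double diff = 0.0;  // sum(team1) - sum(team2)
    for (double s : team1) diff += s;
    for (double s : team2) diff -= s;

    while (true) {
        double best = std::fabs(diff);
        int bi = -1, bj = -1;
        for (int i = 0; i < (int)team1.size(); i++)
            for (int j = 0; j < (int)team2.size(); j++) {
                // Swapping i and j changes diff by 2 * (team2[j] - team1[i]).
                double cand = std::fabs(diff + 2.0 * (team2[j] - team1[i]));
                if (cand < best) { best = cand; bi = i; bj = j; }
            }
        if (bi < 0) break;  // no single swap improves the balance
        diff += 2.0 * (team2[bj] - team1[bi]);
        std::swap(team1[bi], team2[bj]);
    }
}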
2017-05-28 00:52:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 43, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4549916684627533, "perplexity": 561.593506397883}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463609404.11/warc/CC-MAIN-20170528004908-20170528024908-00333.warc.gz"}
https://shikokuking.com/6aojd0r/a-defined-area-or-region-c2f8ad
Whether you need to find a solution to a crossword appearing in 7 Little Words or some other option, you're definitely in the right place now, and in the future. This is just one of the 7 puzzles found on this level.

Region definition is: an administrative area, division, or district; especially, the basic administrative unit for local government in Scotland. A region is distinguished from an area, which is usually a broader concept designating a portion of the surface of the earth; a region is defined by one or more distinctive characteristics and is regarded as a division of a larger whole. Regions may be defined by official boundaries or by economics, physical properties, climate, administration, socio-economic factors and urbanisation. A region can be geographic — like a part of a country. There are also bodily regions — like abdominal, thoracic, and posterior — that is, parts of the body having a special function or a special nervous or vascular supply. A functional (nodal) region is an area defined by the particular set of activities or interactions that occur within it, tied to a central point by transportation or communication systems. A field, similarly, is an administrative division of a city or territory.

Human geography is a branch of geography that focuses on the study of patterns and processes that shape human interaction with various discrete environments. It encompasses human, political, cultural, social, and economic aspects, among others, that are often clearly delineated. While the major focus of human geography is not the physical landscape of the Earth (see physical geography), it is hardly possible to discuss human geography without referring to the physical landscape on which human activities play out.

A runway is a defined rectangular area on a land airport prepared for the landing and takeoff run of aircraft along its length.

From a map-based population tool: hover over or tap a point on the map to start drawing the area you wish to search. Once the area is defined, click the [Find Population] button to find the population inside. After a delay, the estimated population is returned and displayed below the map. Once you have defined the area you wish to search, you can also click the [Search for Zip Codes] button. Other notes: click the [Full Screen] icon on the map to view the map in full screen, and click the [Zoom To Fit] button to zoom the map in/out on the area. An alert will appear if the area selected is too large. To save an area, bookmark the forecast page that is returned.

From a Minecraft plugin request ("Detect Mobs in Defined Region", discussion in 'Archived: Plugin Requests', started Jun 19, 2013): "Hi, I currently run a Feed the Beast server on MCPC+ which runs Bukkit plugins."
2021-05-16 21:14:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3983495235443115, "perplexity": 4064.631832752897}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989914.60/warc/CC-MAIN-20210516201947-20210516231947-00545.warc.gz"}
https://media.nips.cc/nipsbooks/nipspapers/paper_files/nips32/reviews/3792-metareview.html
NeurIPS 2019
Sun Dec 8th through Sat the 14th, 2019 at Vancouver Convention Center

Paper ID: 3792
Online Prediction of Switching Graph Labelings with Cluster Specialists

This is a clear accept: all reviewers liked the paper, and I agree with their recommendation, as the paper provides a nice combination of fixed share (with specialists) with graph predictions. The authors are encouraged to include the lower bound. Also, the strength of the paper could be emphasized very clearly by comparing to applying meta-algorithms, such as those of [12 (or rather its journal version), 13, 23] (these algorithms are specifically equipped to combine tracking with a large structured predictor class, at the price of a log T increase in complexity). Finally, I'd like to mention that two of the three reviewers were experts in proving regret bounds.
2020-05-30 00:18:33
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.808384120464325, "perplexity": 2268.435327857482}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347406785.66/warc/CC-MAIN-20200529214634-20200530004634-00593.warc.gz"}
https://codeforces.com/blog/TheScrasse
### TheScrasse's blog

By TheScrasse, history, 13 days ago

Hello everyone, this blog is similar to 90744, but it's specifically about implementation. Although I've been practicing for around 2 years, I'm still very slow at implementation. For example, during olympiads I usually spend ~ 70% of the time writing the code, so I don't have much time to think. In fact,

• during the CEOI 2021 Mirror (Day 2) I spent a lot of time writing ~ 220 lines of code for problem C (the logic of that solution was wrong, but that's another story)
• I've just solved CEOI 2016/1 (submission), but my solution is 239 lines long.
• I don't perform well on DMOJ (my contests: 1, 2, 3)
• I spent 1:30 hours implementing 101597A, although my final code is only 81 lines long.

How to improve? Should I learn new C++ features? Should I start implementing something significantly longer than competitive programming problems?

• +173

By TheScrasse, history, 3 months ago

Hello everyone, problems about swapping adjacent elements are quite frequent in CP, but they can be tedious. In this tutorial we will see some easy ideas and use them to solve some problems of increasing difficulty. I tried to put a lot of examples to make the understanding easier. The first part of the tutorial is quite basic, so feel free to skip it and jump to the problems if you already know the concepts.

Target: rating $[1400, 2100]$ on CF
Prerequisites: greedy, Fenwick tree (or segment tree)

## Counting inversions

Let's start from a simple problem. You are given a permutation $a$ of length $n$. In one move, you can swap two elements in adjacent positions. What's the minimum number of moves required to sort the array?

#### Claim

The result $k$ is equal to the number of inversions, i.e. the pairs $(i, j)$ ($1 \leq i < j \leq n$) such that $a_i > a_j$.

#### Proof 1

Let $f(x)$ be the number of inversions after $x$ moves. In one move, if you swap the values on positions $i, i + 1$, $f(x)$ either increases by $1$ or decreases by $1$. This is because the only pair $(a_i, a_j)$ whose relative order changed is $(a_i, a_{i+1})$. Since the sorted array has $0$ inversions, you need at least $k$ moves to sort the array. For example, if you have the permutation $[2, 3, 7, 8, 6, 9, 1, 4, 5]$ ($16$ inversions) and you swap two adjacent elements such that $a_i > a_{i+1}$ (getting, for example, $[2, 3, 7, 6, 8, 9, 1, 4, 5]$), the resulting array has $15$ inversions, and if you swap two adjacent elements such that $a_i < a_{i+1}$ (getting, for example, $[3, 2, 7, 8, 6, 9, 1, 4, 5]$), the resulting array has $17$ inversions. On the other hand, if the array is not sorted you can always find an $i$ such that $a_i > a_{i+1}$, so you can sort the array in $k$ moves.

#### Proof 2

For each $x$, let $f(x)$ be the number of inversions if you consider only the elements from $1$ to $x$ in the permutation. First, let's put $x$ at the end of the permutation: this requires $x - \text{pos}(x)$ moves. That's optimal (the actual proof is similar to Proof 1; in an intuitive way, if you put the last element to the end of the array, it doesn't interfere anymore with the other swaps). For example, if you have the permutation $[2, 3, 7, 8, 6, 9, 1, 4, 5]$ and you move the $9$ to the end, you get $[2, 3, 7, 8, 6, 1, 4, 5, 9]$ and now you need to sort $[2, 3, 7, 8, 6, 1, 4, 5]$. Hence, $f(x) = f(x-1) + x - \text{pos}(x)$.
For each $x$, $x - \text{pos}(x)$ is actually the number of pairs $(i, j)$ ($1 \leq i < j \leq x$) such that $x = a_i > a_j$. So $f(x)$ is equal to the number of inversions.

#### Counting inversions in $O(n \log n)$

You can use a Fenwick tree (or a segment tree). There are other solutions (for example, using divide & conquer + merge sort), but they are usually harder to generalize. For each $j$, calculate the number of $i < j$ such that $a_i > a_j$. The Fenwick tree should contain the frequency of each value in $[1, n]$ in the prefix $[1, j - 1]$ of the array. So, for each $j$, the queries look like

• $res := res + \text{range_sum}(a_j + 1, n)$
• add $1$ in the position $a_j$ of the Fenwick tree

(A self-contained C++ sketch of this routine appears further down the page, after the Round #682 hints.)

#### Observations / slight variations of the problem

By using a Fenwick tree, you are actually calculating the number of inversions for each prefix of the array. You can calculate the number of swaps required to sort an array (not necessarily a permutation, but for now let's assume that its elements are distinct) by compressing the values of the array. For example, the array $[13, 18, 34, 38, 28, 41, 5, 29, 30]$ becomes $[2, 3, 7, 8, 6, 9, 1, 4, 5]$.

You can also calculate the number of swaps required to get an array $b$ (for now let's assume that its elements are distinct) starting from $a$, by renaming the values. For example, $a = [2, 3, 7, 8, 6, 9, 1, 4, 5], b = [9, 8, 5, 2, 1, 4, 7, 3, 6]$ is equivalent to $a = [4, 8, 7, 2, 9, 1, 5, 6, 3], b = [1, 2, 3, 4, 5, 6, 7, 8, 9]$.

$a^{-1}$ (a permutation such that $(a^{-1})_{a_x} = x$, i.e. $(a^{-1})_x$ is equal to the position of $x$ in $a$) has the same number of inversions as $a$. For example, $[2, 3, 7, 8, 6, 9, 1, 4, 5]$ and $[7, 1, 2, 8, 9, 5, 3, 4, 6]$ both have $16$ inversions. Sketch of a proof: note that, when you swap two elements in adjacent positions in $a$, you are swapping two adjacent values in $a^{-1}$, and the number of inversions in $a^{-1}$ also increases by $1$ or decreases by $1$ (like in Proof 1).

Hint 1 Hint 2 Hint 3 Solution

Hint 1 Hint 2 Hint 3 Hint 4 Solution

## arc088_e (rating: 2231)

Hint 1 Hint 2 Hint 3 Hint 4 Solution
Implementation (C++)

## arc097_e (rating: 2247)

Hint 1 Hint 2 Hint 3 Hint 4 Solution
Implementation (C++)

## Conclusions

We've seen that a lot of problems where you have to swap adjacent elements can be tackled with greedy observations, such as looking at the optimal relative positions of the values in the final array; then, a lot of these problems can be reduced to "find the number of inversions" or similar. Of course, suggestions/corrections are welcome. In particular, please share in the comments other problems where you have to swap adjacent elements. I hope you enjoyed the blog!

• +223

By TheScrasse, history, 3 months ago

Hello everyone, here is a very simple idea that can be useful for (cp) number theory problems, especially those concerning multiples, divisors, $\text{GCD}$ and $\text{LCM}$.

Prerequisites: basic knowledge of number theory (divisibility, $\text{GCD}$ and $\text{LCM}$ properties, prime sieve).

## Idea

Let's start from a simple problem. You are given $n$ pairs of positive integers $(a_i, b_i)$. Let $m$ be the maximum $a_i$. For each $k$, let $f(k)$ be the sum of the $b_i$ such that $k | a_i$. Output all pairs $(k, f(k))$ such that $f(k) > 0$.

An obvious preprocessing is to calculate, for each $k$, the sum of the $b_i$ such that $a_i = k$ (let's denote it as $g(k)$). Then, there are at least $3$ solutions to the problem.
#### Solution 1: $O(m\log m)$

For each $k$, $f(k) = \sum_{i=1}^{\lfloor m/k \rfloor} g(ik)$. The complexity is $O\left(m\left(\frac{1}{1} + \frac{1}{2} + \dots + \frac{1}{m}\right)\right) = O(m\log m)$. (A short code sketch of this solution appears further down the page.)

#### Solution 2: $O(n\sqrt m)$

There are at most $n$ nonzero values of $g(k)$. For each one of them, find the divisors of $k$ in $O(\sqrt k)$ and, for each divisor $i$, let $f(i) := f(i) + g(k)$. If $m$ is large, you may need to use a map to store the values of $f(k)$ but, as there are $O(n\sqrt[3] m)$ nonzero values of $f(k)$, the updates have a complexity of $O(n\sqrt[3] m \log(nm)) < O(n\sqrt m)$.

#### Solution 3: $O(m + n\sqrt[3] m)$

Build a linear prime sieve in $[1, m]$. For each nonzero value of $g(k)$, find the prime factors of $k$ using the sieve, then generate the divisors using a recursive function that finds the Cartesian product of the prime factors. Then, calculate the values of $f(k)$ like in Solution 2.

Depending on the values of $n$ and $m$, one of these solutions can be more efficient than the others. Even though the provided problem seems very specific, the ideas required to solve that task can be generalized to solve a lot of other problems.

Hint 1 Hint 2 Hint 3 Solution

## agc038_c - LCMs

Hint 1 Hint 2 Hint 3 Solution
Implementation (C++)

## abc191_f - GCD or MIN

Hint 1 Hint 2 Hint 3 Hint 4 Solution
Implementation (C++)

## Conclusions

We've seen that this technique is very flexible. You can choose the complexity on the basis of the constraints, and $f(k)$ can be anything that can be updated fast. Of course, suggestions/corrections are welcome. In particular, please share in the comments other problems that can be solved with this technique. I hope you enjoyed the blog!

• +105

By TheScrasse, history, 8 months ago

Author: TheScrasse
Preparation: MyK_00L

Hint 1 Hint 2 Hint 3 Solution
Official solution: 107232596

1485B - Replace and Keep Sorted

Author: TheScrasse
Preparation: Keewrem

Hint 1 Hint 2 Hint 3 Hint 4 Solution
Official solution: 107232462

1485C - Floor and Mod

Authors: isaf27, TheScrasse
Preparation: Keewrem

Hint 1 Hint 2 Solution
Official solution: 107232416

1485D - Multiples and Power Differences

Author: TheScrasse
Preparation: MyK_00L

Hint 1 Hint 2 Hint 3 Solution
Official solution: 107232359

1485E - Move and Swap

Author: TheScrasse
Preparation: TheScrasse

Hint 1 Hint 2 Hint 3 Solution
Official solution: 107232216

1485F - Copy or Prefix Sum

Author: TheScrasse
Preparation: TheScrasse

Hint 1 Hint 2 Hint 3 Solution
Official solution: 107232144

• +263

By TheScrasse, history, 9 months ago

It's quite weird that $11$ submissions have been running for at least $20$ minutes, while hundreds of submissions (even with long execution times) are usually evaluated in a few seconds. It seems that the last tests run much more slowly than the other tests. Does anyone know why it happens?

[image]

• +68

By TheScrasse, history, 10 months ago

As promised, here are some (nested) hints for Codeforces Round #682 (Div. 2).

1438A - Specific Tastes of Andre
Hint 1

1438B - Valerii Against Everyone
Hint 1

1438C - Engineer Artem
Hint 1

1438D - Powerful Ksenia
Hint 1

I wasn't able to solve E and F. If you did, you may want to add your hints in the comments. Also, please send feedback if the hints are unclear or if they spoil the solution too much.
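Here is the sketch forward-referenced in the inversion-counting tutorial earlier on this page: a minimal, self-contained C++ implementation of the Fenwick-tree routine it describes. It assumes the input is a permutation of 1..n (compress the values first otherwise); it is an added illustration, not one of the linked official implementations:

#include <bits/stdc++.h>
using namespace std;

// Count pairs i < j with a[i] > a[j] in O(n log n) using a Fenwick tree
// over the values 1..n.
long long countInversions(const vector<int>& a) {
    int n = a.size();
    vector<int> bit(n + 1, 0);                 // Fenwick tree (1-indexed)
    auto add = [&](int i) { for (; i <= n; i += i & -i) bit[i]++; };
    auto query = [&](int i) {                  // prefix sum over values 1..i
        int s = 0;
        for (; i > 0; i -= i & -i) s += bit[i];
        return s;
    };
    long long res = 0;
    for (int j = 0; j < n; j++) {
        res += j - query(a[j]);                // previously seen values > a[j]
        add(a[j]);                             // mark a[j] as seen
    }
    return res;
}

For example, countInversions({2, 3, 7, 8, 6, 9, 1, 4, 5}) returns 16, matching the worked example in the tutorial.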
• +34

By TheScrasse, history, 11 months ago,

Hello everyone, inspired by Looking for a Challenge, I was thinking about publishing periodically (maybe every month) a selection of about 10 problems of various difficulties (from 1500 to 2400) from the Codeforces problemset, with hints and a detailed explanation of the solution.

I will try to:

• choose "good" problems: avoid putting too many ad-hoc problems, avoid putting problems that require lengthy data structures;
• write the tutorial as accurately as possible: of course I won't copy-paste the official editorial, instead I will try to follow the thinking process "problem -> solution".

Do you think this is a good idea? (or is it "yet another editorial"?) How should I publish the problems? (pdf? blog on codeforces?) Do you have any suggestions?

UPD1: I would like to put problems that you've not solved yet. If you're interested, please fill in this form with your Codeforces handle. By using the Codeforces API, I will minimize problems that you've already solved.

UPD2: after reading the comments, I thought for a long time about what to do and I asked myself if my idea makes sense. In particular:

• selection of good problems: most problems in recent contests are good, most good problems are in recent contests.
• editorials: sometimes the order of the ideas can appear confusing but, if the problem isn't much harder than your current level, you should come up with the solution by using the editorial + the comment section + solutions by other people. So I'm not sure about the usefulness of writing new editorials to past problems. Instead, I will write nested hints (of the problems that I have solved) after the end of each contest, and I will try to publish them immediately after the contest: I think they are much more useful, in fact they are often asked for in the comment section.

• +246

By TheScrasse, history, 14 months ago,

Hello, I'm just curious to know if you have ever failed a system test, and what was the problem in your code. I have failed a system test twice.

1) 1312C - Adding Powers, submission 72808639 (wrong answer on test 44)

Here is part of my code:

cin >> t;
while (t--) {
    s = 0;
    // [...]
    while (s != 0) {
        c = 0;
        // [...]
    }
    if (c == 2) {
        cout << "NO" << endl;
    } else {
        cout << "YES" << endl;
    }
}

What's the reason for my wrong answer? If at the end of a testcase c is equal to 2 and in the next testcase s == 0, I don't enter the second while loop and c remains equal to 2! Instead, I should have reset c before entering the second while. In fact, my solution got wa on the test

2
2 9
0 18
4 100
0 0 0 0

In the second testcase, the output should have been YES, but my code printed NO because the sum s of the given integers was equal to 0 and c remained equal to 2.

2) 1334D - Minimum Euler Cycle, submission 76160539 (wrong answer on test 19)

My code was really messy, and I still don't know the reason for the wrong answer.

And you? Have you ever failed a system test (or have you been hacked) because of some silly reason?

• -18

By TheScrasse, history, 15 months ago,

Hi everyone, many people are a bit disappointed because, for example, while the most difficult problems of Div. 3 contests are still interesting for Div. 2, Div. 3 contests are unrated for higher divisions. The same argument is valid for Div. 2 and Div. 4 contests. An idea could be to make contests rated for everyone, but that's not the best solution because, to reach a $\geq 1900$ rating, solving Div. 3 and Div. 4 problems very fast would be enough.
An improvement could be to make contests partially rated for higher divisions, that is, the rating variation is multiplied by a factor $k$ ($0 \leq k \leq 1$) that depends on the target division of the contest and on the initial rating of the contestant (i.e., the relevance of that contest for that contestant). An example: if there's a Div. 2 contest, then $k$ could be $1$ for a $1900$ rated contestant, $0.8$ for a $2100$ rated contestant, $0.5$ for a $2200$ rated contestant, etc.

• -33

By TheScrasse, history, 16 months ago,

Hello, I noticed that I often overcomplicate problems, hence my code is often very long and I lose precious time during contests. Some examples:

70077990 I wrote a sliding window minimum, but it wasn't necessary since $O(n^2)$ is fast enough with those constraints.
79841706 I found a solution that required too much memory. My "optimization" is 90 lines long. That problem can be solved in 30 lines.
79880987 I used a dfs and a 0-1 bfs (120 lines), a single bfs was enough.

Overall, I find it difficult to improve a solution that seems already feasible.

• +40

By TheScrasse, history, 16 months ago,

I can't submit solutions to any problem. This error appears:

HTTP Status 403 – Forbidden

UPD: nvm, now I can submit, but there is a long queue

• +11

By TheScrasse, history, 17 months ago,

Hello, I think 1343E - Weights Distributing wasn't a very difficult task, and now it's been solved by almost 2000 people. So, why is it worth 2400 points on the Problemset? How is the difficulty of a problem calculated?

• +48

By TheScrasse, history, 17 months ago,

Hello everyone, I have just tried to execute this code:

#include <bits/stdc++.h>
using namespace std;
#define endl "\n"

long long n;

int main() {
    ios::sync_with_stdio(0); cin.tie(0);
    ifstream cin("output.txt");
    ofstream cout("output.txt");

    cout << 0 << endl;
    while (cin >> n) {
        cout << n + 1 << endl;
    }

    return 0;
}

(note ifstream cin("output.txt");)

The output is

0

Shouldn't this code enter an infinite loop? cin >> n should always be true because the code has written a new line on output.txt.

• +24

By TheScrasse, history, 18 months ago,

Hi everyone, I have just tried to solve problem 161D. If I use a matrix dp[50010][510], I get a tle verdict, even though the time complexity of the solution is $O(nk)$, $nk < 10^8$ and the constant factors are quite small. But if I use a matrix dp[510][50010] and I swap the indices, I get ac with a time of 498 ms (much less than the time limit). Why does this happen? Thanks

Submission with tle verdict: 73781168
Submission with ac verdict: 73781989
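The usual explanation for the dp[50010][510] slowdown is memory locality: C++ stores 2D arrays in row-major order, so a loop whose innermost index is the second one walks memory sequentially and stays in cache, while the transposed layout makes every inner step jump far ahead in memory. A rough illustration of the same effect with NumPy (my own sketch, not from the post; exact timings depend on the machine):

```python
import numpy as np
from timeit import timeit

a = np.zeros((510, 50010))   # C-contiguous: each row sits in one contiguous block
b = a.T                      # transposed view of the same memory, rows now strided

# same arithmetic, different memory-access patterns
print(timeit(lambda: a.sum(axis=1), number=20))  # reduces along contiguous memory
print(timeit(lambda: b.sum(axis=1), number=20))  # reduces along strided memory: usually slower
```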
2021-09-22 07:31:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6248667240142822, "perplexity": 1091.282394342099}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057337.81/warc/CC-MAIN-20210922072047-20210922102047-00043.warc.gz"}
https://www.lmfdb.org/EllipticCurve/Q/141120/ni/
# Properties

Label: 141120.ni
Number of curves: $2$
Conductor: $141120$
CM: no
Rank: $0$

# Related objects

Show commands for: SageMath

sage: E = EllipticCurve("141120.ni1")
sage: E.isogeny_class()

## Elliptic curves in class 141120.ni

sage: E.isogeny_class().curves

| LMFDB label | Cremona label | Weierstrass coefficients | Torsion structure | Modular degree | Optimality |
|-------------|---------------|--------------------------|-------------------|----------------|------------|
| 141120.ni1 | 141120jo1 | [0, 0, 0, -8652, 118384] | [2] | 294912 | $$\Gamma_0(N)$$-optimal |
| 141120.ni2 | 141120jo2 | [0, 0, 0, 31668, 908656] | [2] | 589824 | |

## Rank

sage: E.rank()

The elliptic curves in class 141120.ni have rank $$0$$.

## Modular form 141120.2.a.ni

sage: E.q_eigenform(10)

$$q + q^{5} + 2q^{11} - 2q^{13} + 4q^{17} + O(q^{20})$$

## Isogeny matrix

sage: E.isogeny_class().matrix()

The $$i,j$$ entry is the smallest degree of a cyclic isogeny between the $$i$$-th and $$j$$-th curve in the isogeny class, in the LMFDB numbering.

$$\left(\begin{array}{rr} 1 & 2 \\ 2 & 1 \end{array}\right)$$

## Isogeny graph

sage: E.isogeny_graph().plot(edge_labels=True)

The vertices are labelled with LMFDB labels.
2020-12-03 05:26:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9863185286521912, "perplexity": 5723.021817938069}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141718314.68/warc/CC-MAIN-20201203031111-20201203061111-00227.warc.gz"}
https://nbviewer.jupyter.org/github/Yorko/mlcourse_open/blob/master/jupyter_english/topic04_linear_models/topic4_linear_models_part4_good_bad_logit_movie_reviews_XOR.ipynb?flush_cache=true
## mlcourse.ai – Open Machine Learning Course

Author: Yury Kashnitskiy. Translated and edited by Christina Butsko, Nerses Bagiyan, Yulia Klimushina, and Yuanyuan Pao. This material is subject to the terms and conditions of the Creative Commons CC BY-NC-SA 4.0 license. Free use is permitted for any non-commercial purpose.

# Topic 4. Linear Classification and Regression

## Part 4. Where Logistic Regression Is Good and Where It's Not

### Analysis of IMDB movie reviews

Now for a little practice! We want to solve the problem of binary classification of IMDB movie reviews. We have a training set with marked reviews, 12500 reviews marked as good, another 12500 bad. Here, it's not easy to get started with machine learning right away because we don't have the matrix $X$; we need to prepare it. We will use a simple approach: the bag of words model. Features of the review will be represented by indicators of the presence of each word from the whole corpus in this review. The corpus is the set of all user reviews. The idea is illustrated by a picture

In [1]:
import os
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_files
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

To get started, we automatically download the dataset from here and unarchive it along with the rest of the datasets in the data folder. The dataset is briefly described here. There are 12.5k good and bad reviews each in the test and training sets.

In [2]:
from io import BytesIO
import requests
import tarfile

url = "http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz"

def load_imdb_dataset(extract_path, overwrite=False):  # function header lost in extraction; name and signature assumed from the body
    # check if it exists already
    if os.path.isfile(os.path.join(extract_path, "aclImdb", "README")) and not overwrite:
        print("IMDB dataset is already in place.")
        return
    response = requests.get(url)
    tar = tarfile.open(mode="r:gz", fileobj=BytesIO(response.content))
    data = tar.extractall(extract_path)

Will download the dataset from http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz

In [3]:
#change if you have it in alternative location
PATH_TO_IMDB = "../../data/aclImdb"

reviews_train = load_files(os.path.join(PATH_TO_IMDB, "train"), categories=['pos', 'neg'])
text_train, y_train = reviews_train.data, reviews_train.target

reviews_test = load_files(os.path.join(PATH_TO_IMDB, "test"), categories=['pos', 'neg'])
text_test, y_test = reviews_test.data, reviews_test.target

In [4]:
# # Alternatively, load data from previously pickled objects.
# import pickle
# with open('../../data/imdb_text_train.pkl', 'rb') as f:
#     text_train = pickle.load(f)
# with open('../../data/imdb_text_test.pkl', 'rb') as f:
#     text_test = pickle.load(f)
# with open('../../data/imdb_target_train.pkl', 'rb') as f:
#     y_train = pickle.load(f)
# with open('../../data/imdb_target_test.pkl', 'rb') as f:
#     y_test = pickle.load(f)

In [5]:
print("Number of documents in training data: %d" % len(text_train))
print(np.bincount(y_train))
print("Number of documents in test data: %d" % len(text_test))
print(np.bincount(y_test))

Number of documents in training data: 25000
[12500 12500]
Number of documents in test data: 25000
[12500 12500]

Here are a few examples of the reviews.

In [6]:
print(text_train[1])

b'Words can\'t describe how bad this movie is. I can\'t explain it by writing only. You have too see it for yourself to get at grip of how horrible a movie really can be. Not that I recommend you to do that.
There are so many clich\xc3\xa9s, mistakes (and all other negative things you can imagine) here that will just make you cry. To start with the technical first, there are a LOT of mistakes regarding the airplane. I won\'t list them here, but just mention the coloring of the plane. They didn\'t even manage to show an airliner in the colors of a fictional airline, but instead used a 747 painted in the original Boeing livery. Very bad. The plot is stupid and has been done many times before, only much, much better. There are so many ridiculous moments here that i lost count of it really early. Also, I was on the bad guys\' side all the time in the movie, because the good guys were so stupid. "Executive Decision" should without a doubt be you\'re choice over this one, even the "Turbulence"-movies are better. In fact, every other movie in the world is better than this one.' In [7]: y_train[1] # bad review Out[7]: 0 In [8]: text_train[2] Out[8]: b'Everyone plays their part pretty well in this "little nice movie". Belushi gets the chance to live part of his life differently, but ends up realizing that what he had was going to be just as good or maybe even better. The movie shows us that we ought to take advantage of the opportunities we have, not the ones we do not or cannot have. If U can get this movie on video for around $10, it\xc2\xb4d be an investment!' In [9]: y_train[2] # good review Out[9]: 1 In [10]: # import pickle # with open('../../data/imdb_text_train.pkl', 'wb') as f: # pickle.dump(text_train, f) # with open('../../data/imdb_text_test.pkl', 'wb') as f: # pickle.dump(text_test, f) # with open('../../data/imdb_target_train.pkl', 'wb') as f: # pickle.dump(y_train, f) # with open('../../data/imdb_target_test.pkl', 'wb') as f: # pickle.dump(y_test, f) ## A Simple Count of Words¶ First, we will create a dictionary of all the words using CountVectorizer In [11]: cv = CountVectorizer() cv.fit(text_train) len(cv.vocabulary_) Out[11]: 74849 If you look at the examples of "words" (let's call them tokens), you can see that we have omitted many of the important steps in text processing (automatic text processing can itself be a completely separate series of articles). In [12]: print(cv.get_feature_names()[:50]) print(cv.get_feature_names()[50000:50050]) ['00', '000', '0000000000001', '00001', '00015', '000s', '001', '003830', '006', '007', '0079', '0080', '0083', '0093638', '00am', '00pm', '00s', '01', '01pm', '02', '020410', '029', '03', '04', '041', '05', '050', '06', '06th', '07', '08', '087', '089', '08th', '09', '0f', '0ne', '0r', '0s', '10', '100', '1000', '1000000', '10000000000000', '1000lb', '1000s', '1001', '100b', '100k', '100m'] ['pincher', 'pinchers', 'pinches', 'pinching', 'pinchot', 'pinciotti', 'pine', 'pineal', 'pineapple', 'pineapples', 'pines', 'pinet', 'pinetrees', 'pineyro', 'pinfall', 'pinfold', 'ping', 'pingo', 'pinhead', 'pinheads', 'pinho', 'pining', 'pinjar', 'pink', 'pinkerton', 'pinkett', 'pinkie', 'pinkins', 'pinkish', 'pinko', 'pinks', 'pinku', 'pinkus', 'pinky', 'pinnacle', 'pinnacles', 'pinned', 'pinning', 'pinnings', 'pinnochio', 'pinnocioesque', 'pino', 'pinocchio', 'pinochet', 'pinochets', 'pinoy', 'pinpoint', 'pinpoints', 'pins', 'pinsent'] Secondly, we are encoding the sentences from the training set texts with the indices of incoming words. We'll use the sparse format. 
In [13]:
X_train = cv.transform(text_train)
X_train

Out[13]: <25000x74849 sparse matrix of type '<class 'numpy.int64'>' with 3445861 stored elements in Compressed Sparse Row format>

Let's see how our transformation worked

In [14]:
print(text_train[19726])

b'This movie is terrible but it has some good effects.'

In [15]:
X_train[19726].nonzero()[1]

Out[15]: array([ 9881, 21020, 28068, 29999, 34585, 34683, 44147, 61617, 66150, 66562], dtype=int32)

In [16]:
X_train[19726].nonzero()

Out[16]: (array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=int32), array([ 9881, 21020, 28068, 29999, 34585, 34683, 44147, 61617, 66150, 66562], dtype=int32))

Third, we will apply the same operations to the test set

In [17]:
X_test = cv.transform(text_test)

The next step is to train Logistic Regression.

In [18]:
%%time
logit = LogisticRegression(solver='lbfgs', n_jobs=-1, random_state=7)
logit.fit(X_train, y_train)

CPU times: user 29.7 ms, sys: 69.7 ms, total: 99.4 ms
Wall time: 2.82 s

Let's look at accuracy on both the training and the test sets.

In [19]:
round(logit.score(X_train, y_train), 3), round(logit.score(X_test, y_test), 3)

Out[19]: (0.981, 0.864)

The coefficients of the model can be beautifully displayed.

In [20]:
def visualize_coefficients(classifier, feature_names, n_top_features=25):
    # get coefficients with large absolute values
    coef = classifier.coef_.ravel()
    positive_coefficients = np.argsort(coef)[-n_top_features:]
    negative_coefficients = np.argsort(coef)[:n_top_features]
    interesting_coefficients = np.hstack([negative_coefficients, positive_coefficients])
    # plot them
    plt.figure(figsize=(15, 5))
    colors = ["red" if c < 0 else "blue" for c in coef[interesting_coefficients]]
    plt.bar(np.arange(2 * n_top_features), coef[interesting_coefficients], color=colors)
    feature_names = np.array(feature_names)
    plt.xticks(np.arange(1, 1 + 2 * n_top_features), feature_names[interesting_coefficients], rotation=60, ha="right");

In [21]:
def plot_grid_scores(grid, param_name):
    plt.plot(grid.param_grid[param_name], grid.cv_results_['mean_train_score'], color='green', label='train')
    plt.plot(grid.param_grid[param_name], grid.cv_results_['mean_test_score'], color='red', label='test')
    plt.legend();

In [22]:
visualize_coefficients(logit, cv.get_feature_names())

To make our model better, we can optimize the regularization coefficient for the Logistic Regression. We'll use sklearn.pipeline because CountVectorizer should only be applied to the training data (so as to not "peek" into the test set and not count word frequencies there). In this case, pipeline determines the correct sequence of actions: apply CountVectorizer, then train Logistic Regression.

In [23]:
%%time
from sklearn.pipeline import make_pipeline
text_pipe_logit = make_pipeline(CountVectorizer(),
                                # for some reason n_jobs > 1 won't work
                                # with GridSearchCV's n_jobs > 1
                                LogisticRegression(solver='lbfgs', n_jobs=1, random_state=7))
text_pipe_logit.fit(text_train, y_train)
print(text_pipe_logit.score(text_test, y_test))

/opt/conda/lib/python3.6/site-packages/sklearn/linear_model/logistic.py:758: ConvergenceWarning: lbfgs failed to converge. Increase the number of iterations.
"of iterations.", ConvergenceWarning) 0.86396 CPU times: user 19.7 s, sys: 6.57 s, total: 26.3 s Wall time: 9.26 s In [24]: %%time from sklearn.model_selection import GridSearchCV param_grid_logit = {'logisticregression__C': np.logspace(-5, 0, 6)} grid_logit = GridSearchCV(text_pipe_logit, param_grid_logit, return_train_score=True, cv=3, n_jobs=-1) grid_logit.fit(text_train, y_train) CPU times: user 17.3 s, sys: 6.6 s, total: 23.9 s Wall time: 39.5 s /opt/conda/lib/python3.6/site-packages/sklearn/linear_model/logistic.py:758: ConvergenceWarning: lbfgs failed to converge. Increase the number of iterations. "of iterations.", ConvergenceWarning) Let's print best$C$and cv-score using this hyperparameter: In [25]: grid_logit.best_params_, grid_logit.best_score_ Out[25]: ({'logisticregression__C': 0.1}, 0.8848) In [26]: plot_grid_scores(grid_logit, 'logisticregression__C') For the validation set: In [27]: grid_logit.score(text_test, y_test) Out[27]: 0.87812 Now let's do the same with random forest. We see that, with logistic regression, we achieve better accuracy with less effort. In [28]: from sklearn.ensemble import RandomForestClassifier In [29]: forest = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=17) In [30]: %%time forest.fit(X_train, y_train) CPU times: user 1min 27s, sys: 77.3 ms, total: 1min 27s Wall time: 16.4 s Out[30]: RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini', max_depth=None, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=200, n_jobs=-1, oob_score=False, random_state=17, verbose=0, warm_start=False) In [31]: round(forest.score(X_test, y_test), 3) Out[31]: 0.855 ### XOR-Problem¶ Let's now consider an example where linear models are worse. Linear classification methods still define a very simple separating surface - a hyperplane. The most famous toy example of where classes cannot be divided by a hyperplane (or line) with no errors is "the XOR problem". XOR is the "exclusive OR", a Boolean function with the following truth table: XOR is the name given to a simple binary classification problem in which the classes are presented as diagonally extended intersecting point clouds. In [32]: # creating dataset rng = np.random.RandomState(0) X = rng.randn(200, 2) y = np.logical_xor(X[:, 0] > 0, X[:, 1] > 0) In [33]: plt.scatter(X[:, 0], X[:, 1], s=30, c=y, cmap=plt.cm.Paired); Obviously, one cannot draw a single straight line to separate one class from another without errors. Therefore, logistic regression performs poorly with this task. 
In [34]:
def plot_boundary(clf, X, y, plot_title):
    xx, yy = np.meshgrid(np.linspace(-3, 3, 50), np.linspace(-3, 3, 50))
    clf.fit(X, y)
    # plot the decision function for each datapoint on the grid
    Z = clf.predict_proba(np.vstack((xx.ravel(), yy.ravel())).T)[:, 1]
    Z = Z.reshape(xx.shape)

    image = plt.imshow(Z, interpolation='nearest',
                       extent=(xx.min(), xx.max(), yy.min(), yy.max()),
                       aspect='auto', origin='lower', cmap=plt.cm.PuOr_r)
    contours = plt.contour(xx, yy, Z, levels=[0], linewidths=2, linestyles='--')
    plt.scatter(X[:, 0], X[:, 1], s=30, c=y, cmap=plt.cm.Paired)
    plt.xticks(())
    plt.yticks(())
    plt.xlabel(r'$x_1$')
    plt.ylabel(r'$x_2$')
    plt.axis([-3, 3, -3, 3])
    plt.colorbar(image)
    plt.title(plot_title, fontsize=12);

In [35]:
plot_boundary(LogisticRegression(solver='lbfgs'), X, y, "Logistic Regression, XOR problem")

/opt/conda/lib/python3.6/site-packages/matplotlib/contour.py:1230: UserWarning: No contour levels were found within the data range.

But if one were to give polynomial features as an input (here, up to 2 degrees), then the problem is solved.

In [36]:
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import Pipeline

In [37]:
logit_pipe = Pipeline([('poly', PolynomialFeatures(degree=2)),
                       ('logit', LogisticRegression(solver='lbfgs'))])

In [38]:
plot_boundary(logit_pipe, X, y, "Logistic Regression + quadratic features. XOR problem")

/opt/conda/lib/python3.6/site-packages/matplotlib/contour.py:1230: UserWarning: No contour levels were found within the data range.

Here, logistic regression has still produced a hyperplane, but in the 6-dimensional feature space $1, x_1, x_2, x_1^2, x_1 x_2$ and $x_2^2$. When we project to the original feature space, $x_1, x_2$, the boundary is nonlinear.

In practice, polynomial features do help, but it is computationally inefficient to build them explicitly. SVM with the kernel trick works much faster. In this approach, only the distance between the objects (defined by the kernel function) in a high dimensional space is computed, and there is no need to produce a combinatorially large number of features. A short kernel-SVM sketch follows the resource list below.

### Useful resources

• Main course site, course repo, and YouTube channel
• Medium "story" based on this notebook
• Course materials as a Kaggle Dataset
• If you read Russian: an article on Habrahabr with ~ the same material. And a lecture on YouTube
• A nice and concise overview of linear models is given in the book "Deep Learning" (I. Goodfellow, Y. Bengio, and A. Courville).
• Linear models are covered practically in every ML book. We recommend "Pattern Recognition and Machine Learning" (C. Bishop) and "Machine Learning: A Probabilistic Perspective" (K. Murphy).
• If you prefer a thorough overview of linear models from a statistician's viewpoint, then look at "The elements of statistical learning" (T. Hastie, R. Tibshirani, and J. Friedman).
• The book "Machine Learning in Action" (P. Harrington) will walk you through implementations of classic ML algorithms in pure Python.
• Scikit-learn library. These guys work hard on writing really clear documentation.
• Scipy 2017 scikit-learn tutorial by Alex Gramfort and Andreas Mueller.
• One more ML course with very good materials. • Implementations of many ML algorithms. Search for linear regression and logistic regression.
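As promised above, here is a minimal sketch (my own addition, not a cell from the original notebook) of the kernel trick on the same XOR data, reusing the plot_boundary helper defined earlier:

```python
from sklearn.svm import SVC

# RBF-kernel SVM: no explicit polynomial features needed;
# probability=True gives SVC the predict_proba method that plot_boundary expects
plot_boundary(SVC(kernel='rbf', probability=True, random_state=17),
              X, y, "SVM with RBF kernel, XOR problem")
```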
2019-07-17 01:11:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2987063229084015, "perplexity": 2871.927419331629}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525004.24/warc/CC-MAIN-20190717001433-20190717023433-00310.warc.gz"}
http://math.stackexchange.com/questions/301233/sullivan-model-of-the-odd-sphere
# Sullivan model of the odd sphere

I want to determine the Sullivan model of an odd sphere $S^{2n+1}$. Let $(A,d)$ be a cdga such that $H^*(S^{2n+1};\mathbb Q)\cong H^*((A,d_A))$ as graded algebras. Hence $$H^*((A,d))\cong H^0(S^{2n+1})\oplus H^{2n+1}(S^{2n+1})$$ where $H^0(S^{2n+1})$ is isomorphic to the field $\mathbb Q$ with basis $\{1\}$ and $H^{2n+1}(S^{2n+1})$ is a one dimensional $\mathbb Q$-vector space with basis denoted by $\{e\}$.

On the other hand we compute $H^*((A,d_A))$ going back to the definition. Write $A=\bigoplus_{i\geq 0}A^i$; then we have $$0\longrightarrow A^0 \xrightarrow{d_0} A^1 \xrightarrow{d_1}A^2 \xrightarrow{d_2}\cdots A^{2n} \xrightarrow{d_{2n}} A^{2n+1} \xrightarrow{d_{2n+1}} A^{2n+2} \cdots$$ Here $H^0((A,d))$ is always supposed to be isomorphic to the field $\mathbb Q$ with basis $\{1\}$. And $H^{2n+1}((A,d))=Ker(d_{2n+1})/Im(d_{2n})$, which we said is isomorphic to $H^{2n+1}(S^{2n+1})$. Hence $Ker(d_{2n+1})/Im(d_{2n})$ is a one dimensional $\mathbb Q$-vector space with basis the class $[v]$ of a cocycle $v\in Ker(d_{2n+1})\subset A^{2n+1}$.

Let $V$ denote the vector space with basis the cocycle $v$. The free cdga on $V$ is an exterior algebra on $v$ denoted $\Lambda V$. This is a cdga with grading $\Lambda V=\Lambda^0V\oplus\Lambda^1V$ where $\Lambda^0V$ is isomorphic to the field $\mathbb Q$ with basis $\{1\}$ and $\Lambda^1V$ is the one dimensional $\mathbb Q$-vector space $V$ with basis the cocycle $v$.

We claim that the cdga $\Lambda V$ with zero differential is quasi-isomorphic to $(A,d)$. An extension result assures that the cdga inclusion $(V,0)\hookrightarrow (A,d)$ extends to a unique cdga morphism $$m:(\Lambda V,0)\hookrightarrow (A,d)$$ Now we show that the induced map in cohomology $H(m):H^*((\Lambda V,0))\longrightarrow H^*((A,d))$ is an isomorphism, where the cdga $(\Lambda V,0)$ is written $$0\longrightarrow \Lambda^0 V=\mathbb Q \xrightarrow{0} \Lambda^1 V=V \xrightarrow{0}0 \xrightarrow{0}\cdots 0$$ Thus $H^*((\Lambda V,0))=H^0((\Lambda V,0))\oplus H^1((\Lambda V,0))$ while $H^*((A,d))=H^0((A,d))\oplus H^{2n+1}((A,d))$.

The problem now is that $H(m)_1:H^1((\Lambda V,0))\longrightarrow H^1((A,d))=0$ is not an iso. Also $H(m)_{2n+1}:H^{2n+1}((\Lambda V,0))=0\longrightarrow H^{2n+1}((A,d))$ is not an iso.. thank you for correcting any detail in the above computations and for helping me prove that $m$ is a quasi-isomorphism.

- The grading on $\Lambda V$ is wrong -- you still want $\nu$ to be in degree $2n+1$. Fixing this solves both problems. All the other cohomology of $\Lambda V$ and $(A,d)$ is zero, so once you've established that $H(m)$ is an isomorphism in degrees $0$ and $2n+1$, it follows that $m$ is a quasi-isomorphism.

Here's a general algorithm for constructing Sullivan minimal models of simply connected dgas, half because I feel like you deserve a better answer and half because I just learned it. Inductively let $f_n:(M(n),d_{M(n)}) \to (A,d_A)$ be a morphism with $M(n)$ a minimal model and $f_n$ inducing an isomorphism on cohomology in degrees $<n$ and an injection on cohomology in degree $n$. Recall that the mapping cone of $f_n$ is defined as $C(n)^k = M(n)^k \oplus A^{k-1}$, with differential $d_{C(n)}(m,a) = (d_{M(n)}(m), f_n(m) - d_A(a))$. We define relative cohomology as $H^k(M(n), A) = H^k(C(n))$. If $V(n) = H^{n+2}(M(n), A)$, then $V(n)$ is a quotient of $Z^{n+2}(C(n))$; choosing a splitting of this map, and projecting to $M(n)^{n+2}$, gives a map $g:V(n) \to Z^{n+2}(M(n))$.
We now define $M(n+1)$ to be the tensor product of $M(n)$ and the free cdga generated by $V(n)$ in degree $n+1$, with $d_{M(n+1)}(v) = g(v)$ for $v \in V(n)$. The colimit of these $M(n)$ will evidently be a Sullivan model for $A$. It's an (only moderately difficult) exercise to show that each $M(n)$ is well-defined and that they have the described properties by induction. It's important that $A$ be simply connected, so that when adding the new generators in degree $n+1$ we don't get any undesired ones in degree $n+2$.

If you use this method, then so long as you get the degrees right, you'll know you're on the right track since you'll always have the right cohomology up to a certain degree; if the cohomology of $A$ is bounded above, as in the case of $S^{2n+1}$, the process will actually stop. You can and should think of the $M(n)$ as a Postnikov tower for $A$.

thank you Paul, i will read the details of your answer and try to understand it.. But first i want to know why the grading is wrong? a free cdga on a one dimensional vector space has only one grading $\Lambda V=\Lambda^0 V\oplus \Lambda^1V$ where $\Lambda^0 V=\mathbb Q$ and $\Lambda^1V\cong V$, do you suggest that $\Lambda V=\Lambda^0 V\oplus \Lambda^{2n+1}V$? is this possible, can we choose any grading on $\Lambda V$? – palio Feb 13 '13 at 8:57

I believe you can since you are constructing the model. You are really picking a graded vector space and then applying $\Lambda$, if I recall correctly... – Sean Tilson Feb 13 '13 at 14:43

@palio: Don't think of it as a free cdga on a vector space. Think of it as a free cdga on a graded vector space. Since the generator $v$ has odd degree, this must be an exterior algebra. If $v$ had even degree, the free cdga on it would be a polynomial algebra, and you'd have to have extra classes to kill off its powers, which explains the difference between the odd and even spheres. – Paul VanKoughnett Feb 14 '13 at 5:32

The grading on $\Lambda V$ is defined by $\Lambda V=\oplus_{i\geq 0}\Lambda^iV$ where $\Lambda^iV$ consists of words of length $i$. Here $v$ is of length $1$ so it must be in $\Lambda^1V$, hence $\Lambda V=\Lambda^0V\oplus \Lambda^1V$ is the only possible grading in this case.. am i wrong???? – palio Oct 15 '13 at 10:07

You're right, but $V = \Lambda^1 V$ has degree $2n+1$, by definition. You can pick the grading on the vector space to begin with, and then apply the exterior algebra construction. – Paul VanKoughnett Oct 17 '13 at 2:39
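For reference, the minimal models this discussion arrives at can be written out explicitly (standard facts about spheres in rational homotopy theory, added here as a summary rather than as part of the original thread):

$$S^{2n+1}:\quad (\Lambda(v),\, d = 0), \qquad |v| = 2n+1,$$

$$S^{2n}:\quad (\Lambda(v, w),\, dv = 0,\ dw = v^2), \qquad |v| = 2n,\ |w| = 4n-1,$$

where in the even case the extra generator $w$ kills the power $v^2$, exactly as described in the last comment above.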
2015-05-30 10:57:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9374971985816956, "perplexity": 109.17410691450804}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207930995.36/warc/CC-MAIN-20150521113210-00291-ip-10-180-206-219.ec2.internal.warc.gz"}
http://tex.stackexchange.com/questions/114360/parsing-bbl-file-to-an-xml-file-from-within-pdflatex-while-typesetting
# parsing bbl file to an xml file from within pdflatex while typesetting

As editor of a journal I need to translate the bibliography, in a bbl file, into part of an xml file (for potential submission to cross-ref). It seems best done on the fly by pdfLaTeX so I can also record in the xml the main metadata (authors, title, pages, volume, etc). Currently I use a scheme for bbl files where the \bibitem does not involve brackets, e.g. \bibitem{Smith99} ... The problem is that as soon as the bbl file uses brackets (as in natbib) then my scheme fails catastrophically on \bibitem[blah]{Smith99} ...

Question: how can I get pdfLaTeX to parse a bbl file, with such \bibitem[]{}, into an xml file while pdflatex is typesetting to pdf?

Currently I do the following (hacked from somewhere in latex and executed \AtBeginDocument):

\renewcommand{\bibitem}[1]{</unstructured_citation></citation>
^^J<citation key="#1"><unstructured_citation>}
...
\def\j@@Input{%
  \let\jrnltempb\jrnltempa
  \ifeof\jrnlin
    \immediate\closein\jrnlin
  \else
    \immediate\write\jrnlout{\jrnltempb}
    \expandafter\j@@Input
  \fi}
\typeout{**** Starting to write the bibliography to the xml.}
\immediate\openin\jrnlin\jobname.bbl\relax
\immediate\write\jrnlout{<citation_list> ^^J<citation key="nil"><unstructured_citation>}

Try using xparse, \usepackage{xparse}\DeclareExpandableDocumentCommand{\bibitem}{o{}m}{</unstructured_citation></citation>^^J<citation key="#2" opt-arg="#1"><unstructured_citation>}. – Bruno Le Floch May 15 '13 at 11:59

Does this have to be done with the .bbl file? I've seen .bst files that deliberately add an XML section to the .bbl from BibTeX, which is easier as the data contains no formatting. – Joseph Wright Aug 16 '13 at 10:23
2016-05-27 20:16:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9330886006355286, "perplexity": 5376.957861752498}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049277091.27/warc/CC-MAIN-20160524002117-00197-ip-10-185-217-139.ec2.internal.warc.gz"}
http://scenelink.org/no-bounding/cannot-determine-size-of-graphic-in-no-boundingbox-graphicx.php
# Cannot Determine Size Of Graphic In No Boundingbox Graphicx

## LaTeX Error: Cannot determine size of graphic (no BoundingBox)

A typical instance of the error is: LaTeX Error: Cannot determine size of graphic in xfig/ch1/doubsq.pdf (no BoundingBox). It means that LaTeX could not work out the natural size of an included image. The classic latex + dvips route can only import EPS graphics, which carry an explicit %%BoundingBox comment; JPEG, PNG and PDF files do not have one. This is not supposed to happen for valid EPS files, as they should contain a BoundingBox comment and the graphicx package should be able to read it.

## Common causes and fixes

• Use pdftex. Compiling with pdflatex instead of latex lets graphicx read the sizes of PDF, PNG and JPEG images natively, so no bounding box is needed. One reporter could compile the same file fine with MacTeX on their machine for exactly this reason.
• Do not pass a driver option. Loading graphicx (or hyperref, geometry) with the dvips option allows only EPS graphics to be imported and is nonsense when the engine is pdflatex. So remove it, e.g. from \documentclass[sts]{imsart}. In general dvips, pdftex or xetex should not be given explicitly to packages that can deduce the needed driver themselves.
• Supply the bounding box yourself. If you wish to supply a bounding box explicitly to an \includegraphics command whilst specifying the width, the syntax is \includegraphics[width=0.8\linewidth,bb=0 0 100 100]{figurefile} with the options in one set of square brackets. The four numbers are, in turn: lower-left-x (llx), lower-left-y (lly), upper-right-x (urx), and upper-right-y (ury). Alternatively give the natural size with natwidth=... and natheight=... (not width=, as that tries to scale to that size but still needs the natural size).
• Convert the figure. Converting an .eps file to a .pdf file, for instance with the epstopdf utility, and then compiling with pdflatex also solves the problem; conversely, you could change your PDF into an EPS if you really want to keep working with plain latex + dvips. It just depends whether you're using LaTeX or pdfLaTeX as to which you'd choose.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5001153349876404, "perplexity": 3701.4915802968353}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813832.23/warc/CC-MAIN-20180222002257-20180222022257-00084.warc.gz"}
https://www.icaml.org/canon/data/images-videos/Python_Convolution/Python_Convolution.html
Convolution in Python

This tutorial covers the implementation of a convolution in Python.

Definition

• I: Image to convolve.
• H: filter matrix to convolve the image with.
• J: Result of the convolution.

The following graphic shows, by way of example, the mathematical operations of the convolution. The filter matrix H is shifted over the input image I. The values 'under' the filter matrix are multiplied with the corresponding values in H, summed up, and written to the result J. The target position is usually the position under the center of H.

Example: Blurring with a square block filter

In order to implement the convolution with a block filter, we need two methods. The first one will create the block filter matrix H depending on the filter width/height n. A block filter holds the value $\dfrac{1}{n\cdot n}$ at each position:

import numpy as np

def block_filter(n):
    H = np.ones((n, n)) / (n * n)  # each element in H has the value 1/(n*n)
    return H

We will test the method by creating a filter with n = 5:

H = block_filter(5)
print(H)

[[0.04 0.04 0.04 0.04 0.04]
 [0.04 0.04 0.04 0.04 0.04]
 [0.04 0.04 0.04 0.04 0.04]
 [0.04 0.04 0.04 0.04 0.04]
 [0.04 0.04 0.04 0.04 0.04]]

Next, we define the actual convolution operation. To prevent invalid indices at the border of the image, we introduce the padding p.

def apply_filter(I, H):
    h, w = I.shape  # image dimensions (height, width)
    n = H.shape[0]  # filter size
    p = n // 2      # padding size
    J = np.zeros_like(I)  # output image, initialized with zeros
    for x in range(p, h-p):
        for y in range(p, w-p):
            J[x, y] = np.sum(I[x-p:x+n-p, y-p:y+n-p] * H)
    return J

To test our method we create an example image:

I = np.zeros((200, 200), dtype=float)
for x in range(200):
    for y in range(200):
        d = ((x-100)**2 + (y-100)**2)**0.5
        I[x, y] = d % 8 < 4

We will use Matplotlib to visualize the image:

import matplotlib.pyplot as plt

plt.imshow(I, cmap='gray', vmin=0.0, vmax=1.0)
plt.axis('off')
plt.show()

Next we test our implementation and apply a block filter with size 7:

n = 7
H = block_filter(n)
J = apply_filter(I, H)

plt.imshow(J, cmap='gray', vmin=0.0, vmax=1.0)
plt.axis('off')
plt.show()

We can observe the blurring effect of the filter as well as the black border around the image, where no values were computed. To remove the black border one can increase the size of I by the filter padding p. This is usually done by appending zeros around the image or repeating/mirroring the original borders.
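One possible way to implement that border handling, sketched here as an addition to the tutorial (the function name is my own choosing; mode='reflect' would mirror the borders instead of zero-padding):

```python
def apply_filter_padded(I, H):
    n = H.shape[0]
    p = n // 2
    Ip = np.pad(I, p, mode='constant')  # zero-pad p pixels on every side
    J = np.zeros_like(I)
    for x in range(I.shape[0]):
        for y in range(I.shape[1]):
            # window of Ip centered on the original pixel (x, y)
            J[x, y] = np.sum(Ip[x:x+n, y:y+n] * H)
    return J

J2 = apply_filter_padded(I, block_filter(7))  # no black border in the result
```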
2020-07-02 14:32:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3525715172290802, "perplexity": 3539.734201954763}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655879532.0/warc/CC-MAIN-20200702142549-20200702172549-00279.warc.gz"}
http://openstudy.com/updates/4d6c7f4270088b0be52aa90b
• anonymous Suppose that Bob is paid two times his normal hourly rate for each hour worked in excess of 40 hours in a week. Last week he earned $1200 for 50 hours of work. What is his hourly wage? Mathematics
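The thread itself contains no answer; for completeness, one way to set it up (my own addition): let $w$ be the normal hourly wage. The 50 hours split into 40 regular hours and 10 overtime hours paid at $2w$, so

$$40w + 10 \cdot 2w = 60w = 1200 \quad\Rightarrow\quad w = 20,$$

i.e. Bob's normal hourly wage is 20 dollars per hour.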
2017-03-24 10:42:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2675267457962036, "perplexity": 1394.5689476551916}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218187792.74/warc/CC-MAIN-20170322212947-00287-ip-10-233-31-227.ec2.internal.warc.gz"}
https://paulvanderlaken.com/tag/computerscience/
Tag: computerscience

# A free, self-taught education in Computer Science!

The Open Source Society University offers a complete education in computer science using online materials. They offer a proper introduction to the fundamental concepts for all computing disciplines. Everything from algorithms, logic, and machine learning, up to databases, full stack web development, and graphics is covered. Moreover, you will acquire skills in a variety of languages, including Python, Java, C, C++, Scala, JavaScript, and many more.

According to their GitHub page, the curriculum is suited for people with the discipline, will, and good habits to obtain this education largely on their own, but who'd still like support from a worldwide community of fellow learners.

## Curriculum

• Intro CS: for students to try out CS and see if it's right for them
• Core CS: corresponds roughly to the first three years of a computer science curriculum, taking classes that all majors would be required to take
• Advanced CS: corresponds roughly to the final year of a computer science curriculum, taking electives according to the student's interests
• Final Project: a project for students to validate, consolidate, and display their knowledge, to be evaluated by their peers worldwide
• Pro CS: graduate-level specializations students can elect to take after completing the above curriculum if they want to maximize their chances of getting a good job

It is possible to finish Core CS within about 2 years if you plan carefully and devote roughly 18-22 hours/week to your studies. Courses in Core CS should be taken linearly if possible, but since a perfectly linear progression is rarely possible, each class's prerequisites are specified so that you can design a logical but non-linear progression based on the class schedules and your own life plans.

# Turning the Traveling Salesman problem into Art

Robert Bosch is a professor of Natural Science at the department of Mathematics of Oberlin College and has found a creative way to elevate the travelling salesman problem to an art form.

For those who aren't familiar with the travelling salesman problem (wiki), it is a classic algorithmic problem in the field of computer science and operations research. Basically, we are looking for the solution that is cheapest, shortest, or fastest for a given problem. Most commonly, it is seen as a graph (network) describing the locations of a set of nodes (elements in that network). Wikipedia has a description I can't improve on:

The Travelling Salesman Problem describes a salesman who must travel between N cities. The order in which he does so is something he does not care about, as long as he visits each once during his trip, and finishes where he was at first. Each city is connected to other close by cities, or nodes, by airplanes, or by road or railway. Each of those links between the cities has one or more weights (or the cost) attached. The cost describes how "difficult" it is to traverse this edge on the graph, and may be given, for example, by the cost of an airplane ticket or train ticket, or perhaps by the length of the edge, or time required to complete the traversal. The salesman wants to keep both the travel costs, as well as the distance he travels as low as possible.
Wikipedia Here’s a visual representation of the problem and some algorithmic approaches to solving it: Now, Robert Bosch has applied the traveling salesman problem to well-know art pieces, trying to redraw them by connecting a series of points with one continuous line. Robert even turned it into a challenge so people can test out how well their travelling salesman algorithms perform on, for instance, the Mona Lisa, or Vincent van Gogh. Just look at the detail on these awesome Dutch classics: P.S. Why do Brits and Americans have this spelling feud?! As a non-native, I never know what to pick. Should I write modelling or modeling, travelling or traveling, tomato or tomato? I got taught the U.K. style, but the U.S. style pops up whenever I google stuff, so I am constantly confused! Now I subconciously intertwine both styles in a single text… # The wondrous state of Computer Vision, and what the algorithms actually “see” The field of computer vision tries to replicate our human visual capabilities, allowing computers to perceive their environment in a same way as you and I do. The recent breakthroughs in this field are super exciting and I couldn’t but share them with you. In the TED talk below by Joseph Redmon (PhD at the University of Washington) showcases the latest progressions in computer vision resulting, among others, from his open-source research on Darknet – neural network applications in C. Most impressive is the insane speed with which contemporary algorithms are able to classify objects. Joseph demonstrates this by detecting all kinds of random stuff practically in real-time on his phone! Moreover, you’ve got to love how well the system works: even the ties worn in the audience are classified correctly! PS. please have a look at Joseph’s amazing My Little Pony-themed resumé. The second talk, below, is more scientific and maybe even a bit dry at the start. Blaise Aguera y Arcas (engineer at Google) starts with a historic overview brain research but, fortunately, this serves a cause, as ~6 minutes in Blaise provides one of the best explanations I have yet heard of how a neural network processes images and learns to perceive and classify the underlying patterns. Blaise continues with a similarly great explanation of how this process can be reversed to generate weird, Asher-like images, one could consider creative art: Blaise’s colleagues at Google took this a step further and used t-SNE to visualize the continuous space of animal concepts as perceived by their neural network, here a zoomed in part on the Armadillo part of the map, apparently closely located to fish, salamanders, and monkeys? We’ve seen these latent spaces/continua before. This example Andrej Karpathy shared immediately comes to mind: Blaise’s presentaton you can find here: If you want to learn more about this process of image synthesis through deep learning, I can recommend the scientific papers discussed by one of my favorite Youtube-channels, Two-Minute Papers. Karoly’s videos, such as the ones below, discuss many of the latest developments: Let me know if you have any other video’s, papers, or materials you think are worthwhile! # Sorting Algorithms 101: Visualized Sorting is one of the central topic in most Computer Science degrees. In general, sorting refers to the process of rearranging data according to a defined pattern with the end goal of transforming the original unsorted sequence into a sorted sequence. 
It lies at the heart of successful business ventures — such as Google and Amazon — but is also present in many applications we use daily — such as Excel or Facebook.

Many different algorithms have been developed to sort data. Wikipedia lists as many as 45 and there are probably many more. Some work by exchanging data points in a sequence, others insert and/or merge parts of the sequence. More importantly, some algorithms are quite effective in terms of the time they take to sort data — taking only about $n \log n$ (or, for non-comparison sorts, even $n$) time to sort $n$ datapoints — whereas others are very slow — taking as much as $n^2$. Moreover, some algorithms perform consistently — in the sense that they always take roughly the same amount of time to process $n$ datapoints — whereas others may fluctuate in terms of processing time based on the original order of the data.

I really enjoyed this video by TED-Ed on how to best sort your book collection. It provides a very intuitive introduction into sorting strategies (i.e., algorithms). Moreover, Algorithms to Live By (Christian & Griffiths, 2016) provided the amazing suggestion to get friends and pizza in whenever you need to sort something, next to the great explanation of various algorithms and their computational demand.

The main reason for this blog is that I stumbled across some nice videos and GIFs of sorting algorithms in action. These visualizations are not only wonderfully intriguing to look at, but also help so much in understanding how the sorting algorithms process the data under the hood. You might want to start with the 4-minute YouTube video below, demonstrating how nine different sorting algorithms (Selection Sort, Shell Sort, Insertion Sort, Merge Sort, Quick Sort, Heap Sort, Bubble Sort, Comb Sort, & Cocktail Sort) process a variety of datasets.

The interactive website toptal.com allows you to play around with the most well-known sorting algorithms, putting them to work on different datasets.

For the grand finale, I found these GIFs and short videos of several sorting algorithms on imgur. In the visualizations below, each row of the image represents an independent list being sorted. You can see that Bubble Sort is quite slow:

Cocktail Shaker Sort already seems somewhat faster, but still takes quite a while.

For some algorithms, the visualization clearly shows that the settings you pick matter. For instance, Heap Sort is much quicker if you choose to shift down instead of up. In contrast, for Merge Sort it doesn’t matter whether you sort by breadth first or depth first.

The imgur overview includes many more visualized sorting algorithms but I don’t want to overload WordPress or your computer, so I’ll leave you with two types of Radix Sort, the rest you can look up yourself!

# Multi-Armed Bandits: The Smart Alternative for A/B Testing

Just like humans, computers learn from experience. The purpose of A/B testing is often to collect data to decide whether intervention A or B is better. As such, we provide one group with intervention A whereas another group receives intervention B. With the data of these two groups coming in, the computer can statistically estimate which intervention (A or B) is more effective. The more data the computer has, the more certain the estimate is. Here, a trade-off exists: we need to collect data on both interventions to be certain which is best. But we don’t want to keep conducting an inferior intervention, say B, if we are already quite sure that intervention A is better.
In his post, Corné de Ruijt of Endouble writes about multi-armed bandit algorithms, which try to optimize this trade-off:

“Multi-armed bandit algorithms try to overcome the high missed opportunity cost involved in learning, by exploiting and exploring at the same time. Therefore, these methods are in particular interesting when there is a high lost opportunity cost involved in the experiment, and when exploring and exploiting must be performed during a limited time interval.”

In the full article, you can read Corné’s comparison of this multi-armed bandit approach to the traditional A/B testing approach using a recruitment and selection example.

For those of you who are interested in reading how to apply this algorithm and others to optimize your own daily decisions, I highly recommend the book Algorithms to Live By: The Computer Science of Human Decisions, available on Amazon or the Dutch bol.com.
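To make the explore/exploit trade-off concrete, here is a minimal epsilon-greedy simulation in Python. This is my own sketch, not Corné's implementation: the 5% and 8% conversion rates, the 10,000 visitors, the epsilon of 0.1, and the seed are all made-up assumptions for illustration.

```python
import random

def simulate(policy, p=(0.05, 0.08), rounds=10_000, eps=0.1, seed=42):
    """Compare a 50/50 A/B split against an epsilon-greedy bandit.

    p holds the (unknown) true conversion rates of interventions A and B.
    """
    rng = random.Random(seed)
    successes, pulls, reward = [0, 0], [0, 0], 0
    for t in range(rounds):
        if policy == "ab":
            arm = t % 2  # alternate strictly between A and B
        elif rng.random() < eps or 0 in pulls:
            arm = rng.randrange(2)  # explore: pick an arm at random
        else:
            # exploit: pick the arm with the best observed conversion rate
            arm = max((0, 1), key=lambda a: successes[a] / pulls[a])
        win = rng.random() < p[arm]
        pulls[arm] += 1
        successes[arm] += win
        reward += win
    return reward

print("A/B split conversions     :", simulate("ab"))
print("epsilon-greedy conversions:", simulate("bandit"))
```

On most seeds the bandit ends up with more total conversions than the 50/50 split, because it shifts traffic toward the better arm while still occasionally sampling the worse one.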
2021-08-03 00:05:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2986348271369934, "perplexity": 1247.8394945474922}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154408.7/warc/CC-MAIN-20210802234539-20210803024539-00030.warc.gz"}
http://math.stackexchange.com/questions/1649/finding-the-extrema-non-differentiable-functions
# Finding the Extrema of Non-differentiable Functions

Are there any examples of solving for the global maximum of a non-differentiable function where you:

1. Construct a series of differentiable functions that approach the non-differentiable function in the limit
2. Show the maximum of each differentiable function converges to some value, which is thus your answer.

For all I know, the procedure above is fatally flawed in some way (or there are trivial examples; I would be most interested in non-trivial examples). If that is the case, let me know. I am specifically interested in examples involving absolute values.

- Could you please clarify the question? Are you looking for examples or counter-examples? – Tomer Vromen Aug 5 '10 at 19:45
- Examples, let me know what was confusing and I will edit the question. – Jonathan Fischoff Aug 5 '10 at 19:53

## 3 Answers

A simple example: Let $F_n(x) = \sqrt{x^2+2^{-n}}$. It is not hard to show that $F_n(x) \to \sqrt{x^2} = |x|$. Every $F_n$ is differentiable and has a local minimum at 0, and indeed so does $|x|$. Let me know if this is what you're looking for.

- That is not what I wanted because the right and left derivatives are different for $\sqrt{x^2}$ at zero just like $\left | x \right |$. – Jonathan Fischoff Aug 5 '10 at 20:04
- $\sqrt{x^2}$ is $|x|$. Every function in the sequence $F_n$ is differentiable at 0, and the sequence approaches the limit $|x|$. – Tomer Vromen Aug 5 '10 at 20:13
- That may be so, but is every $F_n$ a differentiable function, i.e. has a derivative for each point in its domain? – Jonathan Fischoff Aug 5 '10 at 20:20
- I honestly don't know, that's why I am asking. – Jonathan Fischoff Aug 5 '10 at 20:33
- Okay, my bad. I got out the pen and paper, and it looks like you won't get a singularity. – Jonathan Fischoff Aug 5 '10 at 21:10

The approach you outlined is commonly used in practice. If your original problem has some nice properties, such as convexity, the approach will work well. For example, the soft maximum is a common way to construct a series of smooth, convex approximations to the maximum function.

- Ha, I actually read part of your blog, but didn't realize the soft maximum approached the function in the limit. Was actually trying to help solve this problem: metaoptimize.com/qa/questions/1715/… which may or not be a good use of the soft maximum – Jonathan Fischoff Aug 5 '10 at 21:30

Here's a stab at why the problem is hard for highly non-differentiable but continuous functions. So say $f$ is nowhere differentiable on $[a,b]$. I claim that there are either infinitely many or no local extrema of $f$. (By local extrema, I mean extrema that occur in the interior of the interval $[a,b]$.) Indeed, suppose there were finitely many, say $c_1 < c_2<\dots f(c_2)$. Then we can take the global maximum of $f$ on $[c_1, c_2]$, which occurs at an interior point; it is thus a local extremum of $f$.

If $f$ is a monotone function, then it is a theorem of Lebesgue that it is differentiable a.e. In particular, the above reasoning shows that the existence of finitely many local extrema implies that $f$ is differentiable a.e.

- @Jonathan: I was talking about extrema not at the endpoints. – Akhil Mathew Aug 5 '10 at 21:23
- What if you know it has a global maximum, does that change anything? – Jonathan Fischoff Aug 5 '10 at 21:37
- are you assuming that $f$ is continuous? – Pete L. Clark Aug 6 '10 at 5:04
- Yes, I was (edited again for clarity!). – Akhil Mathew Aug 6 '10 at 11:44
- I know this is an old post, but the second paragraph needs work.
You probably did not mean $c_2<\dots f(c_2)$. And the part about maximum on $[c_1,c_2]$ being interior is somewhat unclear. (There was a question about it recently: math.stackexchange.com/q/539563) – user103402 Oct 29 '13 at 3:57
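A quick numerical illustration of Tomer's example (my own sketch; the grid resolution and the chosen values of $n$ are arbitrary): each smooth $F_n(x)=\sqrt{x^2+2^{-n}}$ is minimized on a grid, and both the minimizer and the minimum value converge to those of $|x|$.

```python
import math

# Smooth approximations F_n(x) = sqrt(x^2 + 2^-n) to |x| on [-1, 1].
xs = [i / 10_000 - 1 for i in range(20_001)]  # uniform grid on [-1, 1]

for n in (2, 6, 10, 14):
    F = lambda x, n=n: math.sqrt(x * x + 2.0 ** -n)
    x_star = min(xs, key=F)               # grid minimizer of the smooth F_n
    print(f"n={n:2d}  argmin ~ {x_star:+.4f}  min value ~ {F(x_star):.6f}")

# The argmin stays at 0 and the minimum value decreases toward |0| = 0,
# matching the claim that the smooth minima converge to the minimum of |x|.
```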
2016-06-26 06:39:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9051662683486938, "perplexity": 315.2833390017828}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394987.40/warc/CC-MAIN-20160624154954-00066-ip-10-164-35-72.ec2.internal.warc.gz"}
https://www.markowitzoptimizer.pro/wiki/CompoundRateVariance
# Wiki Wiki Web

## Compound Rate Variance

The following is a basic rederivation of the so-called 1/3 adjustment industry standard for compound rates.

### Notations and Model Setup

Using HJM notations, we define a log libor rate $L_t$, observed at $t$ for the period from $T_s$ to $T_e$, where $t<T_s$, from the instantaneous forward $f(t,u)$:

$$L_t = \frac{1}{T_e-T_s} \int_{T_s}^{T_e} f(t,u)du$$

A log compound rate $R_t$ has observation extending to $t<T_e$:

$$R_t = \frac{1}{T_e-T_s} \int_{T_s}^{T_e} f(t \wedge u ,u)du$$

where the risk-neutral diffusion of $f(t,u)$ is given by the HJM equation:

$$df(t,u) = \sigma(t,u)v(t,u) dt + \sigma(t,u) dW_t$$

The actual Libor $\tilde{L}$ and actual compound rate $\tilde{R}$ are linked to the log rates by the relations:

\begin{eqnarray} \ln (1+ (T_e-T_s) \tilde{R}_t) &=& R_t (T_e-T_s) \\ \ln (1+ (T_e-T_s) \tilde{L}_t) &=& L_t (T_e-T_s) \end{eqnarray}

A normal rate diffusion of $f(t,u)$ leads to a normal diffusion of $R_t$ and a shifted lognormal diffusion of $\tilde{R}_t$. We will focus on the distribution of $R_t$.

### Derivation of the Log Rate Variance

As we are interested in variance, we will only track the diffusion terms and collect the drift terms into a term $C$ that is deterministic when the forward volatility $\sigma(t,u)$ is deterministic. We have

\begin{eqnarray} R_t &=& C + \frac{1}{T_e-T_s} \int_{u=T_s}^{T_e} \int_{t=T_0}^u \sigma(t,u) dW_t du \end{eqnarray}

We can distinguish 2 cases for a given calculation date $T_0$:

• forward period: $T_0<T_s$
• current period: $T_s<T_0<T_e$

#### Forward Period

If the period is in the future, $T_0<T_s$:

\begin{eqnarray} R_t &=& C + \frac{1}{T_e-T_s} \int_{u=T_s}^{T_e} \int_{t=T_0}^u \sigma(t,u) dW_t du \\ &=& C + \frac{1}{T_e-T_s} \int_{u=T_s}^{T_e} \int_{t=T_0}^{T_s} \sigma(t,u) dW_t du + \frac{1}{T_e-T_s} \int_{u=T_s}^{T_e} \int_{t=T_s}^u \sigma(t,u) dW_t du \\ &=& C + \frac{1}{T_e-T_s} \int_{t=T_0}^{T_s} \int_{u=T_s}^{T_e} \sigma(t,u) du dW_t + \frac{1}{T_e-T_s} \int_{t=T_s}^{T_e} \int_{u=t}^{T_e} \sigma(t,u) du dW_t \end{eqnarray}

Assuming a single factor, zero mean reversion, and constant volatility, $\sigma(t,u) := \sigma$, we get:

\begin{eqnarray} R_t &=& C + \frac{1}{T_e-T_s} \int_{t=T_0}^{T_s} \sigma (T_e-T_s) dW_t + \frac{1}{T_e-T_s} \int_{t=T_s}^{T_e} \sigma (T_e-t) dW_t \end{eqnarray}

Using Ito's isometry:

$$Var(R_t) = \sigma^2 (T_s-T_0) + \sigma^2 \frac{1}{3} (T_e-T_s)$$

#### Current Period

If the period is current, $T_s<T_0<T_e$:

\begin{eqnarray} R_t &=& C + \frac{1}{T_e-T_s} \int_{u=T_0}^{T_e} \int_{t=T_0}^u \sigma(t,u) dW_t du \\ &=& C + \frac{1}{T_e-T_s} \int_{t=T_0}^{T_e} \int_{u=t}^{T_e} \sigma(t,u) du dW_t \\ &=& C + \int_{t=T_0}^{T_e} \sigma \frac{T_e-t}{T_e-T_s} dW_t \end{eqnarray}

Using Ito's isometry:

$$Var(R_t) = \sigma^2 \frac{1}{3} (T_e-T_0) \left(\frac{T_e-T_0}{T_e-T_s} \right)^2$$

### Expression as a Time Adjustment

The variance adjustment can be re-expressed as a time to expiry, where the parameter $T$ to be put in the Black formula, invoked with a calculation date $T_0$ for a period $[T_s, T_e]$, is given by:

\begin{eqnarray} T_{0s} &=& (T_s-T_0)^+ \\ T_{0e} &=& (T_e-T_0)^+ = T_e - T_0 \\ T &=& T_{0s} + \frac{1}{3} (T_{0e} - T_{0s}) \left( \frac{T_{0e} - T_{0s}}{T_e-T_s} \right)^2 \end{eqnarray}
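As a sanity check, the time adjustment is easy to code. The following is a small sketch (the function name and example dates are my own, not from this wiki); it reproduces both cases above:

```python
def black_time_adjustment(T0, Ts, Te):
    """Effective time-to-expiry for a compound rate over [Ts, Te],
    observed from calculation date T0 (all in year fractions).
    Implements T = T0s + (1/3)(T0e - T0s) * ((T0e - T0s)/(Te - Ts))**2."""
    T0s = max(Ts - T0, 0.0)   # (Ts - T0)^+
    T0e = Te - T0             # (Te - T0)^+ since T0 < Te
    return T0s + (T0e - T0s) * ((T0e - T0s) / (Te - Ts)) ** 2 / 3.0

# Forward period: full 1/3 adjustment over the accrual period
print(black_time_adjustment(T0=0.0, Ts=1.0, Te=1.25))  # 1.0 + 0.25/3 ~ 1.0833
# Current period: variance decays as the period is consumed
print(black_time_adjustment(T0=1.1, Ts=1.0, Te=1.25))  # (0.15/3)*(0.15/0.25)^2 = 0.018
```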
2021-05-08 12:29:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999619722366333, "perplexity": 9031.353129652052}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988882.7/warc/CC-MAIN-20210508121446-20210508151446-00493.warc.gz"}
https://galileoandeinstein.phys.virginia.edu/Elec_Mag/2022_Lectures/EM_27_Dielectrics_I.html
# 27.  Dielectrics I:  Polarization

## Introduction

We’ve reached Jackson section 4.3, where he talks of “ponderable” media, which apparently means (from the dictionary) media other than the aether. In other words, pretty much anything. We’ll assume solids, possibly occasionally liquids, but no gases.

We've already discussed metals, which are solids containing highly mobile electrons in sufficient numbers that when a piece of metal is placed in a static electric field, these "free" electrons (meaning not bound locally in molecules, but they can’t get out of the solid) move to the surfaces until the interior of the metal is completely shielded from the field.

Solids containing no (or very few) free electrons are often referred to as dielectrics. The electric field penetrates the solid (because not enough electrons can move to the surfaces to shield it out), and molecules inside the solid are distorted in response to the field: their bound electrons can move within the atom or molecule, so an external field can induce a dipole moment.

Microscopically, of course, the electric field varies wildly on an atomic scale. When we talk about the electric field inside the dielectric, we mean the average electric field, meaning the field averaged over many atoms, but still over a region small by macroscopic standards, and small compared with the distances over which that field changes significantly. We could take a spherical volume containing, say, 10,000 atoms, but give it a fuzzy edge, so there are no sudden jumps in enclosed charge or enclosed electric field on moving it a little.

What can we say about this averaged, or macroscopic, field inside a dielectric placed in an external field? This is electrostatics, so we assume any distortions of molecules, etc., that took place as the external field was switched on have settled down.

Obviously, the averaged field must satisfy $\vec{\nabla}\times\vec{E}=0$, because the work done on a charge as it moves along a path inside the solid is

$$\int_{\vec{r}_1}^{\vec{r}_2} q\,\vec{E}\cdot d\vec{l}$$

and if this is nonzero around a closed path, we have a perpetual motion machine. It follows that $\vec{E}=-\vec{\nabla}\phi$ for both the microscopic (atomic scale) and the macroscopic (averaged) electric fields.

To make further progress, we need a plausible approximation to the distortion of the molecular charge distribution caused by an imposed electric field, and the new field generated by this distortion.

## Electric Polarization

We define the polarization $\vec{P}(\vec{r})$ as the local induced dipole density. Following Jackson, we assume the material is locally homogeneous, in general containing different molecules, each molecule having zero net charge. In a nonzero external field, molecules of type $i$, with local density $n_i(\vec{r})$, have mean polarization $\langle\vec{p}_i\rangle$ (which we're taking to be linear in the imposed field), so the macroscopic polarization is

$$\vec{P}(\vec{r})=\sum_i \langle\vec{p}_i\rangle\, n_i(\vec{r}).$$

Zangwill, by the way, points out that this is really oversimplified: with modern density functional quantum techniques, we can construct reasonably accurate maps of the charge density variations in a solid, and of how they respond to an increasing imposed electric field.
A moment's thought will make clear that we can't really think of a polarized solid as a lattice of elementary dipoles: it's a solid, so there are valence bonds holding everything together, these bonds are electrons, and they too will respond to the electric field. Zangwill gives as an example a computed picture of an ionic crystal where some of the inner ionic orbitals shift the opposite way in an external field, a result of exchange interactions with outer shell electrons.

Nevertheless, a dielectric in zero field has, on a macroscopic scale, the negative charge density exactly balancing the positive charge density. On cranking up an applied field, the positive nuclei will move very slightly, and the electron orbits will distort to cause an overall shift in the negative charge distribution in the direction opposite to the field. Even though different electron orbits will move by different amounts, some possibly even in the opposite direction, these are all small shifts and can be represented in total (on a scale of thousands of atoms or more) by an induced local dipole moment density. There is no need for higher moments or further refinement at the macroscopic scale. (Of course, if we want to predict theoretically a numerical value of the polarization induced by a given field, we'll need to do those difficult density functional theory calculations. In this course, we'll just accept the experimentally determined value of the response.)

It should perhaps be mentioned that in, say, a single crystal solid, the bound electrons might respond more easily in some particular direction of the applied field, in which case the polarization $\vec{P}$ will in general not be parallel to the applied field $\vec{E}$. We’ll see examples next semester in optical phenomena; for now we’ll assume the response is isotropic.

Incidentally, Zangwill points out (following Purcell) that just looking at a local charge pattern is not enough to figure out the local polarization, or dipole density. Consider for example a two-dimensional sodium chloride crystal; think of it as a checker board. Take one sodium ion (positive) and one of its four neighbor chlorine ions: they form a dipole. You can now cover the board with parallel dipoles, so it looks polarized. But we could have chosen any of the neighboring ions, so we could equally see it polarized the opposite way! The point is, to find the direction of polarization, we need to take a finite piece of material, and see what's going on at the surfaces. To remove ambiguity, we need to crank up the external field from zero. Even if, looking at only the interior, the polarization is not uniquely defined, the change in polarization is, so starting from zero provides a definite answer. At the same time, the electrons will pile up to make some surfaces negatively charged; other sides will be depleted of electrons, and therefore positive.

## A Polarized Dielectric Sphere with Zero External Field

To get some feeling for dielectric polarization, we'll begin with a toy example: a uniformly polarized sphere, with no external applied field. (This isn't entirely fantasy: there are materials called ferroelectrics, which do have inbuilt polarization. However, exposed to the atmosphere they tend to attract loose ions to neutralize the unbalanced surface charge.)
We'll represent this uniformly polarized sphere by taking a sphere of positive charge, radius $a$, charge density $+\rho$, centered at the origin, superposed on an exactly similar sphere of negative charge, density $-\rho$, centered at a small displacement $-\vec{\delta}$ (so the dipole moment vector is in direction $+\vec{\delta}$).

From Gauss' theorem, the (spherically symmetric) electric field strength from the positively charged sphere is given by $4\pi r^2 E = (4/3)\pi r^3 \rho/\epsilon_0$, that is, inside the sphere the (radial) electric field is

$$\vec{E}(\vec{r}) = \frac{\rho\,\vec{r}}{3\epsilon_0}, \quad r<a,$$

and outside it is

$$\vec{E}(\vec{r})=\frac{1}{4\pi\epsilon_0}\cdot\frac{4}{3}\pi a^3\rho\cdot\frac{\hat{r}}{r^2}=\frac{a^3\rho}{3\epsilon_0}\cdot\frac{\hat{r}}{r^2},$$

corresponding to an (outside) potential

$$\phi(\vec{r})=\frac{a^3\rho}{3\epsilon_0}\cdot\frac{1}{r}.$$

Creating the uniformly polarized sphere by putting together the fields from the positive and the negative spheres, we find the inside field of the polarized dielectric sphere is uniform,

$$\vec{E}(\vec{r})=\frac{\rho\left(\vec{r}-(\vec{r}+\vec{\delta})\right)}{3\epsilon_0}=\frac{-\rho\,\vec{\delta}}{3\epsilon_0}, \quad r<a,$$

and outside the potential is

$$\phi(\vec{r})=-\frac{a^3\rho}{3\epsilon_0}\,\vec{\delta}\cdot\vec{\nabla}\frac{1}{r}.$$

Remember the polarization is defined as the dipole moment density, so it is $\vec{P}=\rho\,\vec{\delta}$. Hence the electric field inside the sphere is

$$\vec{E}(\vec{r})=\frac{-\vec{P}}{3\epsilon_0}, \quad r<a,$$

and outside the potential is that of a dipole of moment $\vec{\mu}$,

$$\phi(\vec{r})=\frac{1}{4\pi\epsilon_0}\cdot\frac{\vec{\mu}\cdot\hat{r}}{r^2}, \quad \vec{\mu}=\frac{4}{3}\pi a^3\vec{P}, \quad r>a.$$

Exercise: from the two superposed spheres shown above, locate the unbalanced charge, and sketch how that generates these fields and potentials.

## Dielectric in a Field: Potential from Free Charges Plus Polarization

Turning now to the general case of a dielectric in an external field, and using the expression we just found for the electrostatic potential from a dipole, the total electric potential, including possible free charge density (meaning ordinary mobile charges, as in a conductor, not charges bound in molecules), is (summing over the induced elementary dipoles $\vec{P}(\vec{r}\,')d^3r'$)

$$\phi(\vec{r})=\frac{1}{4\pi\epsilon_0}\int d^3r'\left[\frac{\rho_{\text{free}}(\vec{r}\,')}{|\vec{r}-\vec{r}\,'|}+\vec{P}(\vec{r}\,')\cdot\vec{\nabla}'\frac{1}{|\vec{r}-\vec{r}\,'|}\right].$$

Notice this integral is over all of space: the charge distribution $\rho_{\text{free}}(\vec{r}\,')$ can be outside or inside the dielectric, and it is generating the initial electric field. (For example, we could have a chunk of dielectric between two charged plates.) The polarization contribution $\vec{P}(\vec{r}\,')$ is of course entirely from the dielectric.

Now, integrating by parts (assuming zero contribution from infinity) gives

$$\phi(\vec{r})=\frac{1}{4\pi\epsilon_0}\int d^3r'\,\frac{\rho_{\text{free}}(\vec{r}\,')-\vec{\nabla}'\cdot\vec{P}(\vec{r}\,')}{|\vec{r}-\vec{r}\,'|}.$$

It is now evident that $-\vec{\nabla}\cdot\vec{P}(\vec{r})$ is equivalent to a charge density.
(Notice the integral is over all of space: this means it includes contributions from the rapid variation in polarization at the boundary of the dielectric.)

But how can we picture this internal charge density apparently generated by a spatially varying polarization? It's easy to see with a one-dimensional model: imagine a lattice of equally spaced positive charges, and a displaced lattice of negative charges. Now suppose that on moving to the right, the negative charges we see are more and more displaced, meaning increasing negative polarization. Then the average spacing between them is greater than that between the fixed positive charges, and is given by how rapidly their displacement from the local positive charge is increasing, that is, by $\partial P_x/\partial x$. Over this interval, the positive charge density is clearly greater than the negative charge density.

Exercise: Check this by drawing in little vectors for the local dipoles; they point backwards (so $\vec{P}$ is negative) for the above example, but increase in strength on moving to the right. Hence $-\vec{\nabla}\cdot\vec{P}(\vec{r})$ is positive.

Exercise: Now consider a single point charge in an infinite uniform dielectric medium. How is the medium polarized? What is the polarization charge density $-\vec{\nabla}\cdot\vec{P}(\vec{r})$?

Answer: Yes, the medium is polarized, but since $\vec{P}\propto\vec{E}$, the polarization charge density, $-\vec{\nabla}\cdot\vec{P}(\vec{r})$, is proportional to the divergence of the local field, so in fact zero except exactly on top of the single point charge. (Well, really fuzzed out over atomic distances.) Of course, this is no longer true if the medium has a boundary.

It follows that the first Maxwell equation with a dielectric medium present is

$$\vec{\nabla}\cdot\vec{E}=\frac{1}{\epsilon_0}\left(\rho_{\text{free}}-\vec{\nabla}\cdot\vec{P}\right).$$

But there's more: imagine a sphere of dielectric placed in a constant electric field. We'll go through a formal solution later; the result is that it is uniformly polarized, so there is no $\vec{\nabla}\cdot\vec{P}$ term in the interior. However, there is such a term at the surface: think of Gauss' theorem for a small pillbox, one surface outside the dielectric (where there's no material, so no polarization), one inside. In the mathematical limit of a thin pillbox, there is a delta function contribution to $\vec{\nabla}\cdot\vec{P}$ equal to the normal component of the polarization. This is also easy to see from our discussion of a uniformly polarized sphere: there is no free charge in that problem, and the field is generated purely by the polarization, but not by the uniform internal polarization, only by the abruptly changing polarization at the surface.
## Electric Displacement

Maxwell introduced the electric displacement $\vec{D}$ by

$$\vec{D}=\epsilon_0\vec{E}+\vec{P},$$

from which

$$\vec{\nabla}\cdot\vec{D}=\rho_{\text{free}}.$$

This field $\vec{D}$ is called the electric displacement because the $\vec{P}$ obviously arises from displacing charges, and Maxwell felt that the vacuum, thought of in those days as a medium itself and called the aether, had similar structure to a dielectric, and somehow charges were being displaced there too. Of course, this turned out not to be the case, but the name stuck. The $\epsilon_0$ makes the point that $\vec{D}$ is dimensionally different from $\vec{E}$ (well, in these SI units), even in a vacuum.

Why are we still bothering with this field $\vec{D}$? It turns out to be useful: notice its divergence only comes from the free charges. This sounds like a great idea: this is the "real" electric field from the real charges, right? But not so fast: it's not conservative. Unlike $\vec{E}$, its curl isn't identically zero:

$$\vec{\nabla}\cdot\vec{D}=\rho_{\text{free}}, \quad \vec{\nabla}\times\vec{D}=\vec{\nabla}\times\vec{P}.$$

Therefore, from Helmholtz' theorem,

$$\vec{D}(\vec{r})=-\vec{\nabla}\int d^3r'\,\frac{\rho_{\text{free}}(\vec{r}\,')}{4\pi\left|\vec{r}-\vec{r}\,'\right|}+\vec{\nabla}\times\int d^3r'\,\frac{\vec{\nabla}'\times\vec{P}(\vec{r}\,')}{4\pi\left|\vec{r}-\vec{r}\,'\right|}.$$

Notice that if we have a uniformly polarized object, so $\vec{\nabla}\times\vec{P}=0$ in its interior, there will still be a contribution to $\vec{D}$ from that second term at the boundaries.

Exercise: Sketch this contribution for a uniformly polarized sphere. (Quite difficult, but instructive.)
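A numerical cross-check of the uniformly polarized sphere result above may help build intuition. This is a sketch of my own (the density, radius, and displacement values are arbitrary toy choices): superposing the fields of the two displaced charged spheres should give a uniform interior field equal to $-\vec{P}/3\epsilon_0$.

```python
import numpy as np

eps0 = 8.854e-12          # vacuum permittivity (SI)
rho, a = 1.0, 1.0         # toy charge density and sphere radius
delta = np.array([0.0, 0.0, 1e-4])   # small displacement of the negative sphere
P = rho * delta           # polarization = dipole moment density

def E_sphere(r, center, density):
    """Field at r of a uniform ball of charge (radius a) centered at 'center'."""
    d = r - center
    dist = np.linalg.norm(d)
    if dist < a:
        return density * d / (3 * eps0)               # interior: grows linearly
    return density * a**3 * d / (3 * eps0 * dist**3)  # exterior: point-charge-like

expected = -P / (3 * eps0)
for r in (np.array([0.2, 0.1, -0.3]), np.array([0.0, 0.5, 0.0])):
    E = E_sphere(r, np.zeros(3), +rho) + E_sphere(r, -delta, -rho)
    print("at", r, "field:", E, " expected:", expected)
```

Both interior test points return the same field vector, confirming that the superposition is uniform inside the sphere.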
2023-03-26 11:21:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 55, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8605599403381348, "perplexity": 809.1648048245711}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945472.93/warc/CC-MAIN-20230326111045-20230326141045-00355.warc.gz"}
https://www.impan.pl/pl/wydawnictwa/czasopisma-i-serie-wydawnicze/studia-mathematica/all/199/2/90183/the-norms-and-singular-numbers-of-polynomials-of-the-classical-volterra-operator-in-l-2-0-1
The norms and singular numbers of polynomials of the classical Volterra operator in $L_2(0,1)$

Volume 199 / 2010

Studia Mathematica 199 (2010), 171-184
MSC: 47A10, 47A35, 47G10.
DOI: 10.4064/sm199-2-3

Abstract

The spectral problem $(s^2I-\phi(V)^{*}\phi(V))f=0$ for an arbitrary complex polynomial $\phi$ of the classical Volterra operator $V$ in $L_2(0,1)$ is considered. An equivalent boundary value problem for a differential equation of order $2n$, $n=\deg(\phi)$, is constructed. In the case $\phi(z)=1+az$ the singular numbers are explicitly described in terms of roots of a transcendental equation, their localization and asymptotic behavior is investigated, and an explicit formula for the norm $\|{I+aV}\|_2$ is given. For all $a\neq 0$ this norm turns out to be greater than 1.

Authors

• Yuri Lyubich, Technion, Haifa 32000, Israel
• Dashdondog Tsedenbayar, Department of Mathematics, Mongolian University of Science and Technology, P.O. Box 46/520, Ulaanbaatar, Mongolia
2021-05-09 05:12:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31797167658805847, "perplexity": 2233.8590161351167}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988955.89/warc/CC-MAIN-20210509032519-20210509062519-00028.warc.gz"}
https://www.physicsforums.com/threads/electric-field-over-a-charged-cylinder-using-only-coulombs-law.717742/
# Electric Field over a charged cylinder using only Coulomb's Law

## Homework Statement

What is the electric field at a point on the central axis of a solid, uniformly charged cylinder of radius R and length h?

## Homework Equations

Well, I've set up the triple integral and have gotten to this point:

$\bar{E}_{z}=\frac{2\pi\rho}{4\pi\epsilon_{0}}\int^{h/2}_{-h/2}[1-\frac{z'-z}{\sqrt{(z'-z)^{2}+R^{2}}}]dz$, where z' is the location of the test point.

When I integrate this I get a constant term and a mess of logarithms. I know that the field should be 0 when z' = 0, but it doesn't check out. What's wrong?

TSny
Homework Helper
Gold Member

Re-evaluate how you got the 1 inside the brackets in the expression above. I suspect that you got this from assuming that ##\frac{z'-z}{\sqrt{(z'-z)^2}} = 1##. But, recall that ##\sqrt{x^2} = |x|##.

Alright. Now that I've changed that term, it vanishes when integrated, but the second term in the integrand still turns into

$\frac{1}{2}ln| (z'-z) + \sqrt{(z'-z)^{2}+R^{2}} |^{h/2}_{-h/2}$

which, as far as I can tell, can't be reduced to 0 when z' = 0.

Never mind, integrating this absolute value function is a bit more work than I thought it was.

TSny
Homework Helper
Gold Member

Also, I don't see how you are getting a logarithm expression in the result of the integration.
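For what it's worth, a quick numerical check of the corrected integrand confirms both points above. This is a sketch of my own (R = 1 and h = 2 are arbitrary, and the constant prefactor $2\pi\rho/4\pi\epsilon_0$ is dropped): the integral vanishes at z' = 0, with no logarithm involved.

```python
import numpy as np
from scipy.integrate import quad

R, h = 1.0, 2.0   # arbitrary cylinder radius and length

def integrand(z, zp):
    u = zp - z
    # sign(u) replaces the incorrect constant 1 (since sqrt(u^2) = |u|)
    return np.sign(u) - u / np.sqrt(u**2 + R**2)

for zp in (0.0, 0.5, 2.0):
    # tell quad about the kink at z = zp when it lies inside the interval
    pts = [zp] if -h / 2 < zp < h / 2 else None
    val, _ = quad(integrand, -h / 2, h / 2, args=(zp,), points=pts)
    print(f"z' = {zp:4.1f}  ->  integral = {val:+.6f}")

# z' = 0 gives 0, as required by symmetry at the cylinder's center.
```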
2021-09-18 08:33:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9072709679603577, "perplexity": 452.0197185088273}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056348.59/warc/CC-MAIN-20210918062845-20210918092845-00432.warc.gz"}
https://bonohu.github.io/removing-version-information-in-ids.html
## Removing version information in IDs

Identifiers (IDs) in public databases often contain version information. For example, .16 in ENSG00000100644.16 from Ensembl and .1 in NM_001243084.1 from RefSeq. Such version information can be an obstacle when joining entries from different databases, so it should be trimmed before joining. A file id.txt that contains such IDs (one per line) can be converted by a tiny Perl one-liner like

% perl -i~ -pe 's/\.\d+$//' id.txt

The original file will be renamed to id.txt~, and the converted file is now named id.txt.
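The same trimming is straightforward in other languages too. Here is a rough Python equivalent (my own sketch, not from the original post), applying the same "trailing dot plus digits" rule to a list of example IDs:

```python
import re

ids = ["ENSG00000100644.16", "NM_001243084.1", "P12345"]  # example IDs
# Trim a trailing ".<digits>" version suffix; IDs without one are untouched.
trimmed = [re.sub(r"\.\d+$", "", s) for s in ids]
print(trimmed)  # ['ENSG00000100644', 'NM_001243084', 'P12345']
```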
2021-09-21 02:47:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3708350956439972, "perplexity": 14435.102724315533}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057131.88/warc/CC-MAIN-20210921011047-20210921041047-00584.warc.gz"}
https://robotics.stackexchange.com/questions/1435/laser-scanner-distance/1436
# laser scanner distance

I'm looking at laser scanners and I see a huge range of detection distances. The furthest I've seen are 30m, if you exclude the very expensive ones that claim up to 150m. My question is: what is the reason for the huge difference in range/price? I would think that with a laser it wouldn't be that difficult to detect something at distances greater than 30m. What's the physical limitation that makes it so much more expensive to go above 30m for a laser scanner, or is there one?

It may be worthwhile to consider how laser scanners work. We know that it is possible to send a beam of light at an object, and detect how long it takes to be reflected back to the sensor to measure its distance. First of all, we use lasers because reflection of the light from the object is so important. Lasers keep the light concentrated in a narrow beam, with minimal refraction. We'll come back to that later.

There are several ways to measure distance. The first is triangulation. This typically depends on good placement of the optics, and of the CCD sensor the return beam shines onto. You can easily see the problem with a large distance - the angle detected gets increasingly close to $0^\circ$, so that we need sensor components which are very accurate at small scales. We would also like a beam width which is narrow for accurate detection of where the beam is, but there is greater diffraction at narrow beam widths, meaning the return beam of light is not that narrow. Over long distances, the beam gets increasingly wider, because of the small amount of diffraction.

The second way is to measure the round-trip time. Light travels very fast at $3\times 10^8 \textrm{m/s}$. This means that even detecting something at $150 \textrm{m}$ will take only $1 \textrm{μs}$. Embedded computers typically have a clock rate of only several megahertz, so even at this large distance, a system would struggle to provide an accuracy of 10%, and that is assuming the data acquisition is as fast - often the peripheral features have a slower clock rate. The data acquisition is typically much slower, allowing cheaper systems to be built.

As to how they can get away with this - an interesting phenomenon is used. If a sinusoidal signal is embedded into the light (via the intensity), the return signal will also be a sinusoidal signal. However, depending on the distance, the signal will have a phase shift. That is, at $0 \textrm m$, there is no phase shift, and if the wavelength is $20 \textrm m$, then an object at $5 \textrm m$ will mean that the light travels $10 \textrm m$, creating a phase shift of $180^\circ$.

If the signals are normalized to the same amplitude, taking the difference between the outbound signal and the return signal, we get another analog sinusoidal signal (note: this is only one method). The amplitude of this depends on the phase shift. At $180^\circ$, we get the maximum amplitude - double that of the original outbound signal. We can easily detect the amplitude of this signal even with slow digital circuitry.

The main problem is that the choice of wavelength limits the maximum range. For example, a wavelength of $20 \textrm m$ limits the detection range to only $5\textrm m$. If we simply chose a much longer wavelength, we would get the same percentage accuracy. In many cases, if we want greater absolute accuracy, the system will have a greater cost, since we must more accurately measure amplitude in the face of environmental noise.
This may involve a number of changes - for example, a larger laser beam, greater light intensity, more accurate electronic components, more shielding from noise, and more power to run the system. If any one of these is not good enough, accuracy is affected.

The other problem, which affects all methods, is the intensity of the return beam. As the distance increases, the intensity decreases. There is not much scattering of light in air, so scattering causes little attenuation. However, even though a laser light is used, there is a very small amount of diffraction. It is not possible to remove this diffraction completely, although it can be decreased with a wider beam width. Diffraction reduces the amount of the returning light beam incident on the sensor. Because the beam width gradually increases, the sensor only receives a small proportion of the returning light. This affects the accuracy of measurements. Using a larger sensor can also increase noise. Therefore, here we also see another limit on the distance. We can increase the distance by increasing light intensity, or increasing beam width. However, increasing light intensity increases power consumption and heat, while increasing beam width requires larger optical lenses, and larger distances between optical components.

Essentially, there is a balance between cost, accuracy, power consumption, and size of the system, and longer distances often require better components to maintain accuracy.

• thanks for such a detailed response! That answered all my questions. The sinusoidal signal was new to me. That's a really neat way to solve that problem. – JDD Jun 14 '13 at 23:59
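A toy calculation of the phase-shift ranging idea from the answer above (my own sketch; the 15 MHz modulation frequency is an assumed value chosen so the modulation wavelength is the 20 m used in the example):

```python
import math

c = 3.0e8            # speed of light, m/s
f_mod = 15e6         # 15 MHz intensity modulation -> 20 m modulation wavelength
wavelength = c / f_mod

def distance_from_phase(phase_deg):
    """Distance implied by the measured phase shift of the return signal.
    The factor of 2 accounts for the round trip to the object and back."""
    return (phase_deg / 360.0) * wavelength / 2.0

for phi in (45, 90, 180, 270):
    print(f"phase {phi:3d} deg  ->  {distance_from_phase(phi):5.2f} m")

# 180 deg -> 5.00 m, matching the example above; past a full cycle the
# measurement wraps around, which is what limits the maximum range.
```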
2021-07-25 19:42:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7526825070381165, "perplexity": 386.647065416259}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151760.94/warc/CC-MAIN-20210725174608-20210725204608-00263.warc.gz"}
http://hal.in2p3.fr/view_by_stamp.php?label=APC&langue=fr&action_todo=view&id=in2p3-00319078&version=1
HAL: in2p3-00319078, version 1
arXiv: 0804.2141

GRB080319B reached 5th optical magnitude during the burst. Thanks to the VLT/UVES rapid response mode, we observed its afterglow just 8m:30s after the GRB onset, when the magnitude was R ~ 12. This allowed us to obtain the best signal-to-noise, high resolution spectrum of a GRB afterglow ever (S/N per resolution element ~ 50). The spectrum is rich in absorption features belonging to the main system at z=0.937, divided into at least six components spanning a total velocity range of 100 km/s. The VLT/UVES observations caught the absorbing gas in a highly excited state, producing the strongest Fe II fine structure lines ever observed in a GRB. A few hours later the optical depth of these lines was reduced by a factor of 4-20, and the optical/UV flux by a factor of ~ 60. This proves that the excitation of the observed fine structure lines is due to "pumping" by the GRB UV photons. A comparison of the observed ratio between the number of photons absorbed by the excited state and those in the Fe II ground state suggests that the six absorbers are $\gtrsim 18-34$ kpc from the GRB site, with component I ~ 2 times closer to the GRB site than components III to VI. Component I is also characterized by the lack of Mg I absorption, unlike all other components. This may be due to a higher gas temperature, suggesting a structured ISM in this galaxy complex.
2014-07-24 15:39:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4700491428375244, "perplexity": 1591.6403620496078}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997889379.27/warc/CC-MAIN-20140722025809-00127-ip-10-33-131-23.ec2.internal.warc.gz"}
http://integrate.mutz.science/exercises/E101/E101.html
# P2-E001

You are encouraged to familiarise yourself with a few topics prior to the exercise. The background section will provide you with some basic information and pointers. The exercise itself is designed to give you some practice in working with these concepts to give you a more intuitive grasp of them. The teacher will be able to supervise you during the exercise and explain the underlying theory if uncertainties remained during your preparation for the exercise.

## Background

The following are topics you should try to familiarise yourself with prior to the exercise. Note that they serve as pointers and checks for your studies rather than as full instructional materials.

Topic 1: First, (re-)familiarise yourself with the normal distribution. Are you able to make sense of the figure, equation and labels below?

Topic 2: Next, you need to know about z-scores. They are a standardised distance from a specific point to the mean, i.e. the distance expressed in standard deviations. In other words, the signal (distance) is put in relation to dispersion. $$\sigma$$ is the standard deviation, $$\mu$$ is the mean.

Topic 3: Familiarise yourself with the central limit theorem and its implications. Consider the following scenario. Variable x is distributed as shown below. Its mean ($$\mu$$) is 3. You take n (5) random samples (S) from a population x. For each of the samples, you can calculate the sample means. Note that we have a mean of 2.5 twice: once for sample 1 and once for sample 2. If you plotted the frequency of the sample means, it would look like this:

If you increased the number of samples you take from the original set, you would observe that the frequency is highest where $$\mu$$ is located. The mean of all sample means ($$\mu_{\bar{x}}$$) would shift closer and closer towards the population mean ($$\mu$$). Note also that the distribution of sample means begins to resemble a normal distribution in disregard of the original distribution of x. There is another very important effect of increasing the sample size: the dispersion of the sampling distribution decreases. We can thus summarise:

• As sample size increases, the sample means of a random variable approach a normal distribution even when the original variable is not normally distributed.
• $$\mu \approx \mu_{\bar{x}}$$
• The dispersion of the sampling distribution ($$\sigma_{\bar{x}}$$) decreases as sample size increases. It can be computed as $$\sigma_{\bar{x}} = \frac {\sigma}{\sqrt {n}}$$. Since $$\sigma$$ is usually not known, the sample standard deviation s is often used as an approximation for $$\sigma$$, giving us an approximation for the dispersion of the sampling distribution ($$\hat{\sigma}_{\bar{x}}$$).

Topic 4: Finally, familiarise yourself with the concepts of standard error, confidence intervals, the t-distribution and how it is related to the normal distribution.

## Exercise

### Information

Learning goals / Skills:

• calculating common statistics
• getting a more intuitive grasp on the concepts: central limit, standard error, confidence intervals

What to Submit

Submit your script(s) and 1-2 sentence answers to the questions for the tasks of this exercise. You may include the answers as comments in your script(s). Make sure to also comment each of your calculations (using the terminology you were introduced to). The script should be named [your surname]_e101.[ext], where [ext] is the file extension. In case of Python, this would be .py, for Matlab it is .m, etc.
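As a warm-up for Topic 3, here is a minimal simulation sketch in Python (the exercise itself may be solved in any language; the skewed population, the sample sizes, the number of resamples, and the seed are arbitrary choices of mine):

```python
import random
import statistics

rng = random.Random(0)
# A strongly skewed (exponential) population with mean ~ 3, like variable x above.
population = [rng.expovariate(1 / 3) for _ in range(100_000)]

for n in (5, 30, 200):   # sample size
    # Draw many samples of size n and record each sample mean.
    means = [statistics.mean(rng.sample(population, n)) for _ in range(2_000)]
    print(f"n={n:3d}  mean of sample means={statistics.mean(means):.3f}  "
          f"sd of sample means={statistics.stdev(means):.3f}  "
          f"sigma/sqrt(n)={statistics.stdev(population) / n**0.5:.3f}")
```

The printed mean of the sample means stays near the population mean, and the observed dispersion of the sampling distribution shrinks with sample size and closely tracks $$\frac{\sigma}{\sqrt{n}}$$, exactly as summarised above.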
### Temperatures in the Atacama

Weather station in Pan de Azucar National Park. [photo: cc-by Kirstin Übernickel]

The Atacama desert is the driest place on Earth, and it can also get a little warm. Through careful experimentation, namely GCM (General Circulation Model) simulations, you were able to construct two models in the form of PDFs (probability density functions) for summer temperatures in this desert. The simulations represent two very possible regional climate scenarios. Model 1 has a mean of 24.2°C and standard deviation of 6.1°C. Model 2 has a mean of 25.9°C and standard deviation of 5.5°C.

A study suggested that mean summer temperatures above 23°C are related to increased health risks in the local population. The budget of the local government is large enough to keep the population healthy if the mean summer temperatures for the 30 year planning period do not exceed 24.5°C. What is the probability of the local government running out of money within the given period, for each of the two regional climate scenarios?

### Sampling Strategy - Reversing the Problem

You're planning a field campaign to the Alps to collect samples of carbonates to reconstruct mean annual temperatures (MAT) for the Middle Miocene, and through some groundbreaking research, those carbonates are now able to adequately represent palaeo-MAT. You have limited time and resources, but still want to end up with a large enough sample size to be able to conduct decent research. More specifically, you want a sample number that allows you to be 95% confident that the true Middle Miocene MAT lies within 0.5°C of your sample mean. Temperatures constructed from similar samples in a pilot study have a standard deviation of 1.1°C. How many samples are you planning to take?

Warning: Late submissions won't be accepted!
2021-09-26 21:22:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6875230669975281, "perplexity": 749.7134177301829}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057973.90/warc/CC-MAIN-20210926205414-20210926235414-00331.warc.gz"}
http://mathhelpforum.com/algebra/98997-should-simple.html
1. ## Should be simple...

...but I'm afraid I can't recall how to answer such questions: 'After taking 3 math quizzes Elise has an average of 89. What must she score on the fourth quiz to raise her average to 91?' Help is very much appreciated.

2. Let her scores in the first three quizzes be $x_1, x_2, x_3$ respectively. Given that

$(x_1+x_2+x_3)/3 = 89$

so

$x_1+x_2+x_3 = 267$

Now her average over four quizzes would be $(x_1+x_2+x_3+x_4)/4$, where $x_4$ is her score in the 4th quiz. Since

$(x_1+x_2+x_3+x_4)/4=91$

$x_1+x_2+x_3+x_4=91(4)=364$

so

$x_4=364-(x_1+x_2+x_3)$

Plug in the value of $x_1+x_2+x_3$ from above and find $x_4$:

$x_4=364-267=97$

3. Thanks very much!
2013-05-24 08:16:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7097529768943787, "perplexity": 2465.3647873039135}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704368465/warc/CC-MAIN-20130516113928-00066-ip-10-60-113-184.ec2.internal.warc.gz"}
https://www.deepfriedbrainproject.com/2010/07/pert-formula.html
# The Magical PERT Formula

4 minute read    Updated:    Harwinder Singh

PERT Formula (Part 4): We saw the formulas for the PERT estimate, variance and standard deviation of an activity duration in study guides, and accepted them like Newton’s three laws of motion. Did we try to figure out how those formulas were derived? Do we know the underlying assumptions? Is there magic in these formulas that allows them to predict the outcome of something as unique as a project? Can these formulas predict project outcomes accurately? Are these formulas actually being applied in the real world? We’ll find some answers in this article and the rest in upcoming ones. But please be forewarned - if you are math-averse, this stuff may divide your head by zero!

#### Assumptions in PERT

In Say Hello to PERT, we learned that the PERT estimate is a weighted average (mean) of the 3-point estimates. In order to get the mean of the 3-point estimates, PERT makes several assumptions and approximations. The first assumption is that an activity’s completion time is a random variable, with clearly defined end points, i.e. the completion time lies within a finite range. The minimum value of the range is the Optimistic (O) estimate and the maximum value is the Pessimistic (P) estimate. In other words, there’s no probability of the activity duration being less than O or more than P.

#### Beta Distribution

This assumption fits the definition of a Beta distribution. A beta distribution is a continuous probability distribution curve within a finite range. The peak of the curve is the mode (M, Most Likely value) of the distribution. A beta distribution is determined by 4 parameters - a minimum value, a maximum value and two shape parameters.

• a - Min value
• b - Max value
• α - Shape parameter
• β - Shape parameter

The two shape parameters α and β determine the shape of the beta distribution curve on the left and right of the mode (peak of the curve) respectively. We get a symmetric curve when the two shape parameters are equal. Refer to the following image (source: Wikipedia) that shows 5 different samples of beta distribution between the range of 0 and 1 (a = 0 and b = 1).

Note: The case where a = 0 and b = 1 is known as the standard beta distribution.

#### Derivation of formulas for Mean, Variance and Standard Deviation

The mode, mean and variance of the beta distribution can be determined by the following 3 equations, where σ is the standard deviation:

$m = a + \frac{(\alpha - 1)(b - a)}{\alpha + \beta - 2}$   (1)

$\mu = a + \frac{\alpha (b - a)}{\alpha + \beta}$   (2)

$\sigma^2 = \frac{\alpha \beta (b - a)^2}{(\alpha + \beta)^2 (\alpha + \beta + 1)}$   (3)

Let me clarify that we have switched the variables above. O has been replaced with a, the minimum value, P with b, the maximum value, and M with m, the mode. As it turns out, the magical PERT formula is only true when the following assumptions are applied:

$\alpha = 3 - \sqrt{2}, \quad \beta = 3 + \sqrt{2}$   (4)

$\alpha = 3 + \sqrt{2}, \quad \beta = 3 - \sqrt{2}$   (5)

Equation (4) means that the curve is skewed toward the right, and (5) means it’s skewed toward the left. Substituting the values of α and β from either (4) or (5), we can solve equations (1), (2) and (3) to get the values of the mean, variance and standard deviation of the distribution:

$\mu = \frac{a + 4m + b}{6}, \qquad \sigma = \frac{b - a}{6}$

So, there we have it - the magical formulas of PERT and standard deviation. Two interesting observations from these formulas are:

1. PERT gives 4 times more weightage to the mode or the Most Likely estimate.
2. The difference between the maximum (Pessimistic) and the minimum (Optimistic) values is the uncertainty in the estimate, and is equal to six standard deviations, i.e.

Uncertainty = P - O = 6σ => σ = (P - O) / 6

As noted above, these formulas are accurate only for specific values of α and β. These values are rarely true in practice.
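A quick way to convince yourself of the above (my own sketch; the O and P endpoints are arbitrary): sample a beta distribution with the shape pair from equation (5), compute the implied mode from equation (1), and compare the sampled mean and standard deviation against the PERT formulas.

```python
import random
import statistics

O, P = 2.0, 14.0                       # optimistic and pessimistic bounds
alpha, beta = 3 + 2**0.5, 3 - 2**0.5   # shape pair (5), for which PERT is exact
m = O + (alpha - 1) / (alpha + beta - 2) * (P - O)   # implied mode, eq. (1)

rng = random.Random(1)
# Rescale standard beta samples on [0, 1] to the [O, P] range.
xs = [O + (P - O) * rng.betavariate(alpha, beta) for _ in range(200_000)]

print("implied mode m      :", m)
print("sampled mean        :", statistics.mean(xs))
print("PERT (O + 4m + P)/6 :", (O + 4 * m + P) / 6)
print("sampled sigma       :", statistics.stdev(xs))
print("PERT (P - O)/6      :", (P - O) / 6)
```

The sampled mean and standard deviation match the PERT values to sampling error, but only because the mode m was derived from this particular shape pair; pick any other α and β and the match breaks, which is exactly the article's point.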
In follow-up articles, I'll discuss the problems with the assumptions and approximations, and the limitations of PERT in detail. Stay with me. We are just warming up.

8-part series on Project Estimation and PERT

Image credit: Flickr / acidwashphotography | Wikimedia Commons
2022-05-18 06:21:00
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8223500847816467, "perplexity": 779.1564884355021}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662521152.22/warc/CC-MAIN-20220518052503-20220518082503-00515.warc.gz"}
https://cran.dcc.uchile.cl/web/packages/vmr/vignettes/O4-vmrManageBoxes.html
# The vmr boxes

## Presentation

A box is a Vagrant environment (bundle) containing a virtual environment (such as a virtual machine) for a specific provider (such as VirtualBox). Boxes are intended for development and testing; do not use them in production. Be careful: boxes can be large, so be sure to have enough hard-drive space.

## List boxes

The official vmr box list is available here: https://app.vagrantup.com/VMR/

vmr boxes are identified by:

• a name
• a version
• a provider: the provider name (default virtualbox)
• a description: for information

To get this list in the R console:

boxes_list <- vmrList()
boxes_list

To get information about a specific box:

vmrListBox(boxes_list$Name[1])

To download a box:

vmrBoxDownload(vmr_env)

## Manage boxes

# Update downloaded boxes
vmrLocalBoxUpdate()
2021-09-24 21:32:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46606746315956116, "perplexity": 13688.5248021349}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057580.39/warc/CC-MAIN-20210924201616-20210924231616-00637.warc.gz"}
http://www.physicsforums.com/showthread.php?p=3932632
## Difference between classical wave function and quantum wave function

We have the wave equation in classical mechanics in one dimension in the following way: $\frac{\partial^2 \psi}{\partial x^2}=\frac{1}{c^2}\frac{\partial^2 \psi}{\partial t^2}$. On the other hand we have the Schrodinger equation in quantum mechanics in one dimension in the following way: $i\hbar\frac{\partial\psi}{\partial t}=\mathbf{H}\psi$. Both are called wave functions. What is the difference between them?

The classical wave equation describes the behaviour of waves, whereas the Schrodinger equation describes the behaviour of a quantum particle. A good description of it can be found here: http://www.youtube.com/watch?v=G3NgO...6&feature=plcp

The difference between classical and quantum mechanics is not the wave function itself but the interpretation, e.g. the probabilistic interpretation "to find a particle somewhere" plus the "collapse of the wave function", which is absent in classical theories and which is not subject to the Schrödinger equation, i.e. which cannot be described by the linear dynamics of the quantum world (there are other interpretations as well, but I think none of them solves the "measurement problem").

In addition to the probabilistic and particle-like elements of the quantum wave function, we also have the mathematical difference that the QM wave function uses a $-i\,\partial/\partial t$, not a $\partial^2/\partial t^2$. For wave functions that correspond to particles of definite energy and wave modes of definite frequency, the two work the same (if we deal with complex amplitudes appropriately, according to whichever interpretation we are using), but in the general case (particles with no definite energy, wave modes that superimpose frequencies), these are mathematically different. For one thing, a second-order in time equation requires two initial conditions, often the wave function and its first time derivative at t=0, whereas a first-order in time equation requires only one (often the wave function at t=0). That's a pretty important difference: classically, we need to know the current state and how it is changing (analogous to position and velocity of the particles), but quantum mechanically, we need only know the current state, and how it is changing in time is prescribed automatically. Now, since we view quantum mechanics as the more fundamental theory, we would tend to regard it as "closer to the truth" in some sense, so we would tend to think that the universe does not need to be told how it is changing, only what its state is. So then we can ask, not why is QM a d/dt, but rather why is CM a d2/dt2? Why did we think we had to be told the particle velocities and positions, when in fact it is inconsistent to know both at the same time? Apparently it is because our measurements were so imprecise, we could better "pin down the state" by including information about both position and momentum. More precise measurements actually mess with the system in ways that we tend to want to avoid, so we found it is better to combine imprecise measurements and we can still make good predictions in the classical limit without strongly affecting the system.

Quote by Ken G: So then we can ask, not why is QM a d/dt, but rather why is CM a d2/dt2?
This reminds me of a crazy thought that has always been in my mind. Why are almost all, or at least a large number of, differential equations in physics of second order? It is very weird to find a differential equation of third order in physics. I believe that nature wants to tell us something with that. For me this is even mysterious. What do you think about that?

Quote by tom.stoer: The difference between classical and quantum mechanics is not the wave function itself but the interpretation [...]

Anyone can give a different interpretation. I find this paper very illustrative: http://www.imamu.edu.sa/Scientific_s...rpretation.pdf

Going back to the main subject. Is the name the only thing that those functions have in common? Or do they share some mathematical or physical properties?

Quote by Casco: It is very weird to find a differential equation of third order in physics.

Yes, it is weird to find a third-order differential equation in physics, and when you do see such a beast, be on the lookout for something really weird. See the Abraham-Lorentz equation.

@ Ken G: You wrote: "For one thing, a second-order in time equation requires two initial conditions, often the wave function and its first time derivative at t=0, whereas a first-order in time equation requires only one (often the wave function at t=0). [...] quantum mechanically, we need only know the current state, and how it is changing in time is prescribed automatically." This is amazing! Suppose I describe a simple symmetric pulse, perhaps a sine-Gaussian wave packet. But I describe it three times at exactly the same location. Only the first time it is moving left, the second time it is moving right, and the third time it is a half-and-half superposition of the first two. If I have no velocity information, how can I tell the difference? TIA. Jim Graber

Quote by Casco: This reminds me of a crazy thought [...] Why are almost all, or at least a large number of, differential equations in physics of second order? [...] What do you think about that?

I think it's a curious point, for sure, but I'm not sure it comes from nature; I think it comes from us. We decide what kinds of questions we want answers to, and we are always seeking simplicity. So we start with equations that have no time dependence, but we don't get dynamics from them. So we bring in a first time derivative, and a wide array of phenomena open up to us, but still much remains hidden. So we bring in a second derivative, and a host of new phenomena become understandable to us, and those that don't we can just label "too complicated" and move on.
I think we make a mistake if we think nature herself is determined by simple laws; instead I think that we lay simple laws next to nature, like templates, and amaze ourselves by how well she fits them, but they are still coming from us. Then, sometimes we figure out a way to reduce a time derivative, as in going from Newton's laws to the Schroedinger equation. But it comes at the cost of a lot of other complexities, like probability amplitudes and so on. These are necessary to get agreement with experiment, and we do amazingly well; it's an odd case of the equations getting more accurate when they in some sense got simpler. But we had to change the kinds of questions we want answers to (probabilities rather than definite outcomes). So I think that's what is really going on: as long as we are flexible in what we want to know, we can keep the time derivatives surprisingly low in our equations. I don't know why it works at all.

Quote by Casco: Going back to the main subject. Is the name the only thing that those functions have in common? Or do they share some mathematical or physical properties?

The key physical property they have in common is the importance of interference. A "wave equation" describes superimposable waves, so each wave can be solved independently, but the ultimate behavior involves superimposing them, which allows them to interfere with each other. So a wave that can get somewhere in two different ways can arrive with zero amplitude, which is less than either wave would have by itself. I think that is the fundamental similarity of these "waves," along with being unlocalized. The "wave" concept is used even more generally, like with solitons in nonlinear systems, but in most cases one is talking about a linear superimposable signal when one talks about "waves."

Quote by jimgraber: Suppose I describe a simple symmetric pulse, perhaps a sine-Gaussian wave packet. [...] If I have no velocity information, how can I tell the difference?

You can tell because the pulses you are describing are not real-valued, they are complex-valued. That means it is not just their magnitude that changes over the pulse, but also the phase. Phase information is how you encode the direction of propagation. Granted, this is something of a cheat: we still need two pieces of information (magnitude and phase), rather than position and velocity. But all the same, there is no need to know how anything is changing with time; the initial condition can be just a snapshot (if you could "see complex"!).

So the second pulse would be the complex conjugate of the first pulse? I think the most confusing thing about QM, and the biggest difference from CM, is not probabilistic versus deterministic, and not continuous versus discrete, but rather all those extra dimensions, complex and in the Hilbert space. TIA, Jim Graber

Quote by jimgraber: So the second pulse would be the complex conjugate of the first pulse?

Exactly.

Quote by jimgraber: I think the most confusing thing about QM, and the biggest difference from CM, is not probabilistic versus deterministic, and not continuous versus discrete, but rather all those extra dimensions, complex and in the Hilbert space.
I agree with you, though I think I would have said that the connection between those things is the most confusing element of quantum mechanics.
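A compact way to see the first-order/second-order contrast discussed in this thread (my addition, a standard textbook computation using free plane waves, not part of the original posts): substituting $\psi = e^{i(kx-\omega t)}$ into each equation gives its dispersion relation.

$$\frac{\partial^2 \psi}{\partial x^2} = \frac{1}{c^2}\frac{\partial^2 \psi}{\partial t^2} \;\Rightarrow\; \omega^2 = c^2 k^2, \qquad i\hbar\frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m}\frac{\partial^2 \psi}{\partial x^2} \;\Rightarrow\; \hbar\omega = \frac{\hbar^2 k^2}{2m}.$$

The classical relation is quadratic in $\omega$, so each $k$ admits two frequencies $\pm ck$ and a snapshot alone cannot choose between them (hence the need for $\partial\psi/\partial t$ at $t=0$); the Schrödinger relation fixes $\omega$ uniquely from $k$, which is exactly why a single complex-valued snapshot determines the evolution, as Ken G describes.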
2013-05-19 15:56:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6187466382980347, "perplexity": 404.84711647771474}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697772439/warc/CC-MAIN-20130516094932-00076-ip-10-60-113-184.ec2.internal.warc.gz"}
https://scirate.com/arxiv/math.PR
# Probability (math.PR)

• We consider a problem introduced by Mossel and Ross [Shotgun assembly of labeled graphs, arXiv:1504.07682]. Suppose a random $n\times n$ jigsaw puzzle is constructed by independently and uniformly choosing the shape of each "jig" from $q$ possibilities. We are given the shuffled pieces. Then, depending on $q$, what is the probability that we can reassemble the puzzle uniquely? We say that two solutions of a puzzle are similar if they only differ by permutation of duplicate pieces, and rotation of rotationally symmetric pieces. In this paper, we show that, with high probability, such a puzzle has at least two non-similar solutions when $2\leq q \leq \frac{2}{\sqrt{e}}n$, all solutions are similar when $q\geq (2+\varepsilon)n$, and the solution is unique when $q=\omega(n)$.

• We prove a result on non-clustering of particles in a two-dimensional Coulomb plasma, which holds provided that the inverse temperature $\beta$ satisfies $\beta>1$. As a consequence we obtain a result on crystallization as $\beta\to\infty$: the particles will, on a microscopic scale, appear at a certain distance from each other. The estimation of this distance is connected to Abrikosov's conjecture that the particles should freeze up according to a honeycomb lattice when $\beta\to\infty$.

• In this paper we consider an interacting particle system in $\mathbb{R}^d$ modelled as a system of $N$ stochastic differential equations driven by Lévy processes. The limiting behaviour as the size $N$ grows to infinity is achieved as a law of large numbers for the empirical density process associated with the interacting particle system. We prove that the empirical process converges, uniformly in the space variable, to the solution of the $d$-dimensional fractal conservation law.

• Consider a reaction-diffusion equation of the form $\dot{u}(t,x)=\tfrac12 u''(t,x) + b(u(t,x)) + \sigma(u(t,x))\,\xi(t,x)$ on $\mathbb{R}_+\times[0,1]$, with the Dirichlet boundary condition and a nice initial condition, where $\xi(t,x)$ is a space-time white noise, in the case that there exists $\varepsilon>0$ such that $|b(z)| \ge|z|(\log|z|)^{1+\varepsilon}$ for all sufficiently large values of $|z|$. When $\sigma\equiv 0$, it is well known that such PDEs frequently have non-trivial stationary solutions. By contrast, Bonder and Groisman have recently shown that when $\sigma$ is constant and $\sigma \neq 0$, there is often finite-time blowup. In this paper, we prove that the Bonder-Groisman condition is unimprovable by showing that the reaction-diffusion equation with noise is "typically" well posed when $|b(z)| = O(|z|\log_+|z|)$ as $|z|\to\infty$. We interpret the word "typically" in two essentially different ways without altering the conclusions of our assertions.

• We introduce the class of continuous-time autoregressive moving-average (CARMA) processes in Hilbert spaces. As driving noises of these processes we consider Lévy processes in Hilbert space. We provide the basic definitions, show relevant properties of these processes and establish the equivalents of CARMA processes on the real line. Finally, CARMA processes in Hilbert space are linked to the stochastic wave equation and functional autoregressive processes.

• We consider noise perturbations of delay differential equations (DDE) experiencing Hopf bifurcation. The noise is assumed to be exponentially ergodic, i.e.
the transition density converges to the stationary density exponentially fast, uniformly in the initial condition. We show that, under an appropriate change of time scale, as the strength of the perturbations decreases to zero, the law of the critical eigenmodes converges to the law of a diffusion process (without delay). We prove the result only for scalar DDE. For vector-valued DDE without proofs see Phys. Rev. E v93, 062104.

• Motivated by the truncated EM method introduced by Mao (2015), a new explicit numerical method named the modified truncated Euler-Maruyama method is developed in this paper. Strong convergence rates of the given numerical scheme to the exact solutions of stochastic differential equations are investigated under the given conditions. Compared with the truncated EM method, the given numerical simulation strongly converges to the exact solution at fixed time $T$ and over a time interval $[0,T]$ under weaker sufficient conditions. Meanwhile, the convergence rates are also obtained for both cases. Two examples are provided to support our conclusions.

• In the work [Bull. Austr. Math. Soc. 85 (2012), 315-234], S.R. Moghadasi has shown how the decomposition of the $N$-fold product of Lebesgue measure on $\mathbb R^n$ implied by matrix polar decomposition can be used to derive the Blaschke-Petkantschin decomposition of measure formula from integral geometry. We use known formulas from random matrix theory to give a simplified derivation of the decomposition of Lebesgue product measure implied by matrix polar decomposition, applying too to the cases of complex and real quaternion entries, and we give corresponding generalisations of the Blaschke-Petkantschin formula. A number of applications to random matrix theory and integral geometry are given, including to the calculation of the moments of the volume content of the convex hull of $k \le N+1$ points in $\mathbb R^N$, $\mathbb C^N$ or $\mathbb H^N$ with a Gaussian or uniform distribution.

• We consider randomly distributed mixtures of bonds of ferromagnetic and antiferromagnetic type in a two-dimensional square lattice with probability $1-p$ and $p$, respectively, according to an i.i.d. random variable. We study minimizers of the corresponding nearest-neighbour spin energy on large domains in ${\mathbb Z}^2$. We prove that there exists $p_0$ such that for $p\le p_0$ such minimizers are characterized by a majority phase; i.e., they take identically the value $1$ or $-1$ except for small disconnected sets. A deterministic analogue is also proved.

• A recent paper [KMMO] introduced the stochastic $U_q(A_n^{(1)})$ vertex model. The stochastic S-matrix is related to the R-matrix of the quantum group $U_q(A_n^{(1)})$ by a gauge transformation. We will show that a certain function $D^+_\mu$ intertwines with the transfer matrix and its space reversal. When interpreting the transfer matrix as the transition matrix of a discrete-time totally asymmetric particle system on the one-dimensional lattice $\mathbb{Z}$, the function $D^+_\mu$ becomes a Markov duality function $D_\mu$ which only depends on $q$ and the vertical spin parameters $\mu_x$. By considering degenerations in the spectral parameter, the duality results also hold on a finite lattice with closed boundary conditions, and for a continuous-time degeneration. This duality function had previously appeared in a multi-species ASEP$(q,j)$ process. The proof here uses that the R-matrix intertwines with the co-product, but does not explicitly use the Yang-Baxter equation.
It will also be shown that the stochastic $U_q(A_n^{(1)})$ vertex model is a multi-species version of a stochastic vertex model studied in [BP, CP]. This will be done by generalizing the fusion process of [CP] and showing that it matches the fusion of [KRL] up to the gauge transformation. We also show, by direct computation, that the multi-species q-Hahn Boson process (which arises at a special value of the spectral parameter) also satisfies duality with respect to $D_0$, generalizing the single-species result of [C].

• We consider a sparse linear regression model $Y=X\beta^*+W$ where $X$ has Gaussian entries, $W$ is the noise vector with mean-zero Gaussian entries, and $\beta^*$ is a binary vector with support size (sparsity) $k$. Using a novel conditional second moment method we obtain a tight, up to a multiplicative constant, approximation of the optimal squared error $\min_\beta\|Y-X\beta\|_2$, where the minimization is over all $k$-sparse binary vectors $\beta$. The approximation reveals interesting structural properties of the underlying regression problem. In particular, a) We establish that $n^*=2k\log p/\log (2k/\sigma^2+1)$ is a phase transition point with the following "all-or-nothing" property. When $n$ exceeds $n^*$, $(2k)^{-1}\|\beta_2-\beta^*\|_0\approx 0$, and when $n$ is below $n^*$, $(2k)^{-1}\|\beta_2-\beta^*\|_0\approx 1$, where $\beta_2$ is the optimal solution achieving the smallest squared error. With this we prove that $n^*$ is the asymptotic threshold for recovering $\beta^*$ information theoretically. b) We compute the squared error for an intermediate problem $\min_\beta\|Y-X\beta\|_2$ where the minimization is restricted to vectors $\beta$ with $\|\beta-\beta^*\|_0=2k\zeta$, for $\zeta\in[0,1]$. We show that a lower bound part $\Gamma(\zeta)$ of the estimate, which corresponds to the estimate based on the first moment method, undergoes a phase transition at three different thresholds, namely $n_{\text{inf},1}=\sigma^2\log p$, which is the information-theoretic bound for recovering $\beta^*$ when $k=1$ and $\sigma$ is large, then at $n^*$, and finally at $n_{\text{LASSO/CS}}$. c) We establish a certain Overlap Gap Property (OGP) on the space of all binary vectors $\beta$ when $n\le ck\log p$ for a sufficiently small constant $c$. We conjecture that OGP is the source of algorithmic hardness of solving the minimization problem $\min_\beta\|Y-X\beta\|_2$ in the regime $n<n_{\text{LASSO/CS}}$.

• Based on numerical simulation and local stability analysis we describe the structure of the phase space of the edge/triangle model of random graphs. We support simulation evidence with mathematical proof of continuity and discontinuity for many of the phase transitions. All but one of the many phase transitions in this model break some form of symmetry, and we use this model to explore how changes in symmetry are related to discontinuities at these transitions.
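As a side note (my addition, not from any of the papers above): the jigsaw-puzzle abstract at the top of this listing is easy to play with numerically. The toy Monte Carlo below assigns each interior edge of an $n\times n$ grid a jig type uniformly from $q$ possibilities and checks how often all pieces are distinct; duplicate pieces are one source of the non-unique reassembly the abstract discusses. This is a rough illustration only (it ignores piece rotations, for instance).

```python
import random
from collections import Counter

def all_pieces_distinct(n, q):
    # horiz[i][j]: edge between rows i-1 and i; vert[i][j]: edge between
    # columns j-1 and j. Boundary edges are flat, encoded as -1.
    horiz = [[random.randrange(q) if 0 < i < n else -1 for _ in range(n)]
             for i in range(n + 1)]
    vert = [[random.randrange(q) if 0 < j < n else -1 for j in range(n + 1)]
            for _ in range(n)]
    pieces = Counter(
        (horiz[i][j], vert[i][j + 1], horiz[i + 1][j], vert[i][j])  # top, right, bottom, left
        for i in range(n) for j in range(n)
    )
    return max(pieces.values()) == 1

n, trials = 10, 200
for q in (2, n, 3 * n):
    hits = sum(all_pieces_distinct(n, q) for _ in range(trials))
    print(f"q = {q:3d}: P(all pieces distinct) ~ {hits / trials:.2f}")
```

For small $q$ duplicate pieces are essentially unavoidable, while for $q$ growing faster than $n$ they become rare, consistent with the regimes in the abstract.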
2017-01-18 10:05:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8395357131958008, "perplexity": 435.9420957461901}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280266.9/warc/CC-MAIN-20170116095120-00558-ip-10-171-10-70.ec2.internal.warc.gz"}
https://www.financial.co.ke/2022/08/26/how-to-save-by-going-green-investopedia/
How To Save by Going Green – Investopedia

Rising energy prices—fueled at least in part by Russia's invasion of Ukraine—have made life uncomfortable for many Americans. Though prices for crude oil, gasoline, diesel fuel, and heating oil are forecast to drop a bit in 2023, the costs of natural gas and electricity are expected to rise slightly next year. And all these costs are significantly higher than what Americans were paying in 2020 and 2021. One bright spot on the horizon is the steady fall in gasoline prices from June's high of $5 a gallon to $3.90 as of Aug. 22, 2022. However, that is still 74 cents more than a year ago. As Russia shows no signs of reconsidering its war, and Ukraine holds firm and refuses to yield, the situation is unlikely to improve in the immediate future. This means that Americans will be paying significantly more to use their cars, heat or cool their houses, cook their meals, talk on the phone, watch TV or play video games, and use their power lawnmower or snowblower. This provides one more compelling reason to reduce our reliance on traditional energy resources and "go green." The good news is that making this jump is now more accessible than ever, thanks to the Inflation Reduction Act, which at $370 billion is the largest investment in climate action in U.S. history.

A core goal of the Inflation Reduction Act, landmark legislation signed into law in August 2022, is to incentivize people to "go green" by making the transition more affordable. Look beyond the headline provisions, such as the 15% minimum corporate tax, and you'll find various tax credits and rebates designed to improve the environment and American bank balances, at least in the long run. The government wants to help the population with the up-front costs of making their homes and vehicles more energy efficient and believes that such measures can reduce carbon emissions by roughly 40% by 2030 while cutting energy bills by $500 to $1,000 per year. Let's take a look at the various incentives that the Biden administration introduced via the Inflation Reduction Act.

The Inflation Reduction Act brought some big changes to the electric vehicle (EV) tax credit, a federal incentive to encourage people to purchase EVs. Residents who meet the income requirements and buy an electric, plug-in hybrid, or hydrogen fuel cell vehicle are eligible to receive up to $7,500 from the government to help fund the expenditure—provided it costs under a certain amount and is assembled in North America with a battery that is built with minerals mined or recycled on the continent. Another interesting development is the possibility of discounting the total credit amount from the auto's purchase price at the point of sale. The credit now extends to "clean" pre-owned autos, giving buyers the opportunity to save 30% of the sale price up to a maximum of $4,000 on used vehicles made anywhere in the world that are two or more years old, cost $25,000 or less, weigh under 14,000 pounds, and are purchased from a dealer.

Hefty up-front costs are the main reason that many people don't drive around in an electric car or truck. However, once you overcome them, the savings start pouring in, and the investment should pay itself off in no time.
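The arithmetic of the used-EV credit described above is simple enough to spell out. A hypothetical Python sketch (my illustration, not from the article; it ignores the other eligibility rules, such as vehicle age, dealer purchase, and buyer income limits):

```python
def used_ev_credit(sale_price):
    """Credit for a qualifying used 'clean' vehicle: 30% of the sale
    price, capped at $4,000; vehicles over $25,000 don't qualify."""
    if sale_price > 25_000:
        return 0.0
    return min(0.30 * sale_price, 4_000.0)

for price in (12_000, 20_000, 26_000):
    print(price, used_ev_credit(price))   # 3600.0, 4000.0 (cap), 0.0
```

Note that the 30% rate only binds below roughly $13,333; above that, the $4,000 cap is what you actually get.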
A 2020 study from Consumer Reports claimed that EVs can save the owner anywhere from $6,000 to $10,000 over the life of the car when compared to gasoline-powered cars, and that was before prices at gas pumps rose to record highs.

Up until the end of 2021, there was a green-oriented household tax credit known as the "Nonbusiness Energy Property Credit." The good news is that a new credit, called the "Energy Efficient Home Improvement Credit," will soon take its place, and it is much, much better. Starting in 2023, homeowners can tap into a 30% tax credit to cover some of the costs of eligible home improvements, such as installing efficient exterior windows, skylights, exterior doors, boilers, and so on. The maximum payout depends on the item, tops out at $2,000, and resets every year until the bill expires in 2032.

If you're familiar with the Nonbusiness Energy Property Credit, you'll recognize immediately that its coming successor is significantly more generous. Under current rules you can get a 10% credit up to a maximum of $500, which is a lifetime rather than annual limit. The new Energy Efficient Home Improvement Credit arrives in 2023 with an annual cap of $1,200, which applies to almost every type of qualifying improvement. However, there are alternative yearly dollar limits that apply to certain items.

If you don't have a tax liability, you may not be able to take advantage of—and get refunded—the quoted tax-credit percentage of your purchase.

Another credit that the Inflation Reduction Act extended to 2034 and revamped is the Residential Energy Efficient Property Credit. Now called the "Residential Clean Energy Credit," it continues to offer help toward the installation cost of solar, wind, geothermal, and biomass renewable energy, but on more generous terms. Previously, the credit was worth 26% of the outlay and scheduled to drop to 22% in 2023 before expiring in 2024. Now, under the Inflation Reduction Act, it will jump to 30% and stay there until 2032 before dropping to 26% in 2033, then 22% in its final year. The other good news is that, as of 2023, the new incentive also applies to battery storage technology with a capacity of at least three kilowatt-hours.

The Inflation Reduction Act also established two rebate programs that are predominantly designed to help low- and middle-income families. One of them, the High-Efficiency Electric Home Rebate Program, provides rebates to households earning less than 150% of their local area's median income who purchase energy-efficient electric appliances. Rebates are available both for qualifying appliances and for other upgrades that don't involve appliances.

Households with income below 80% of the median where they reside can claim a rebate for the full cost of their upgrades, up to a maximum of $14,000. Those that fall between 80% and 150% of their area median income qualify for rebates covering half the costs, again up to $14,000. The rebates are meant to be delivered to consumers at the point of sale.

The other rebate program, called "HOMES," rewards homeowners who reduce their energy consumption by retrofitting their homes. Payout amounts depend on household income and the degree to which energy consumption is cut. A 20% energy reduction throughout the home would trigger a maximum rebate of $2,000 or half the cost of the retrofit project, whichever is less. That threshold then rises to $4,000 for those able to cut energy consumption by more than 35%.
For lower-income families—households earning less than 80% of their local area's median income—the rebate limits double.

Going green generally means behaving in a way that protects the environment. That can include recycling, using your car less, and better insulating your home so that it consumes less energy.

Going green can be important for both your personal finances and the well-being of the planet. For example, some people are able to heat and power their homes with the sun or wind. This saves them a fortune on bills while reducing air pollution.

The U.S. government estimates that its Inflation Reduction Act can reduce carbon emissions by roughly 40% by 2030. That's quite an ambitious target that would drastically help, among other things, boost the quality of the air we breathe, the water we drink, and the food we eat.

The Inflation Reduction Act could help to save many households a great deal of money. It is the biggest climate spending package in U.S. history, and it would be foolish not to take advantage in some way if you have the financial means to do so. Those who qualify could get thousands of dollars handed to them to reap the benefits of cheaper electricity, heating, and car fuel. They'd also help to improve air quality, reduce global warming, and leave the planet in better shape for future generations.

Sources:

U.S. Energy Information Administration. "Short-Term Energy Outlook."
AAA. "Drivers Find Relief at the Pump for Now."
Council on Foreign Relations. "What the Historic U.S. Climate Bill Gets Right and Gets Wrong."
Congress.gov. "H.R.5376 – Inflation Reduction Act of 2022."
Senate.Democrats.gov. "Summary: The Inflation Reduction Act of 2022."
U.S. Department of the Treasury. "Frequently Asked Questions on the Inflation Reduction Act's Initial Changes to the Electric Vehicle Tax Credit."
Consumer Reports. "EVs Offer Big Savings Over Traditional Gas-Powered Cars."
Internal Revenue Service. "Energy Incentives for Individuals: Residential Property Updated Questions and Answers."
2023-03-29 16:22:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18325495719909668, "perplexity": 3395.616896861001}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949009.11/warc/CC-MAIN-20230329151629-20230329181629-00064.warc.gz"}
https://robotics.stackexchange.com/tags/slam/hot
# Tag Info

## Hot answers tagged slam

Accepted

### What is inverse depth (in odometry) and why would I use it?

Features like the sun and clouds and other things that are very far off would have a distance estimate of inf. This can cause a lot of problems. To get around it, the inverse of the distance is ...

Accepted

### Difference between Rao-Blackwellized particle filters and regular ones

The Rao-Blackwellized Particle Filter (RBPF), as you say in your question, performs a marginalization of the probability distribution of your state space. The particle filter uses sampling to ...

### What is inverse depth (in odometry) and why would I use it?

The inverse depth parameterisation represents a landmark's distance, d, from the camera exactly as it says: as proportional to 1/d within the estimation algorithm. The rationale behind the approach is ...

Accepted

### What does Simultaneous Localization And Mapping (SLAM) software do?

Localization is the process of estimating the pose of the robot in the environment. Number 5 in your list. Mapping is estimating the position of features in the environment. Number 1. 2, 3, and 4 are ...

### What is the difference between SAM and SLAM?

SLAM is the process of locating oneself in a totally unknown environment, where you are simultaneously mapping your environment and plotting your position in that environment. SAM is a technique used ...

Accepted

### Are there any advantages to using a LIDAR for SLAM vs a standard RGB camera?

My question: are there cases where you'd still need a LIDAR, or can this expensive sensor be replaced with a standard camera? ... Each one of them has its advantages/disadvantages. Thus in some ...

### Difference between SLAM and Localization

Localization is always done with respect to a map. SLAM (Simultaneous Localization and Mapping), as it is in the name, also does localization with respect to a map. The only difference is that the ...

### Is there any advantage to velocity motion models over odometry motion models for SLAM?

I can't think of a reason why a velocity model (based on control commands) would be superior to an odometry model (which uses the actual wheel speeds). The lecture notes from Freiburg on motion ...

### Absolute positioning without GPS

I know this is an old question but I will just add a bit to the currently existing answers. First, this is a very complex problem that everyone is trying to tackle, including Google with their Tango ...

### What is inverse depth (in odometry) and why would I use it?

Davison's paper introducing the method is easy enough to understand: Inverse Depth Parametrization for Monocular SLAM by Javier Civera, Andrew J. Davison, and J. M. Martínez Montiel, DOI: 10.1109/TRO....

### Hector SLAM, Matching algorithm

Imagine someone put you in a wheelchair and blindfolded you, then let you reach your arm out and touch a wall. You could tell how far away the wall was, but as long as you were pushed parallel to the ...

### How to use SLAM with simple sensors

The actual implementation of SLAM won't care about whether you are using high-fidelity laser range finders or cheaper ultrasonic sensors. Both are providing range measurements, with the biggest ...

### FastSlam 2.0 Implementation?

I suggest you take a look at Sebastian Thrun's work here. In fastSLAM (both 1.0 and 2.0) each particle maintains an array which contains the states of the landmarks as well as the robot's states. ...
Accepted

### SLAM: Why is marginalization the same as Schur's complement?

See the walk-through. The Schur complement helps with the closed-form derivation but isn't necessary. It's just a nice convenient property of Gaussians and the covariance matrices. In these papers, ...

### Understanding Drift in Simultaneous Localization and Mapping (SLAM)

Your intuition is mostly correct. Returning to where you started and re-observing landmarks you mapped earlier is called closing the loop in the SLAM literature. As you mentioned, your uncertainty ...

### Pose-Graph-SLAM: How to create edges if only IMU-odometry is given?

First off, it doesn't sound like you're actually doing SLAM. You didn't mention an exteroceptive sensor (e.g., laser, camera) that actually maps the environment. With just an IMU, you are doing ...

### What's the difference between the term "pose estimation" and "visual odometry"?

It is also often the case that the author lacks knowledge, makes mistakes, or is adding unnecessary statements to their work. Just because it is published does not make it true. In this case though, ...

### Difference between SLAM and "3D reconstruction"?

You are right about the sameness of SLAM and 3D reconstruction. At the same time I don't think the author is misclassifying. The English is a little non-standard. The author could have better said it ...
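As a side note (my addition, not from the original answers): a tiny numeric sketch of why the inverse-depth parameterization discussed in this tag behaves better for very distant features.

```python
import numpy as np

# Depths for a nearby landmark and an effectively-infinite one (sun, clouds).
depths = np.array([2.0, 1e9])

# Plain depth parameterization: the far feature dominates the state vector,
# and a point truly at infinity cannot be represented at all.
print(depths)          # [2.e+00 1.e+09]

# Inverse depth: far features compress smoothly toward 0, and a point at
# infinity is simply rho = 0, which a Gaussian filter can handle.
rho = 1.0 / depths
print(rho)             # [5.e-01 1.e-09]

# A Gaussian uncertainty on rho maps to a heavily skewed depth distribution,
# the other reason the 1/d form suits low-parallax points.
rho_samples = np.random.normal(0.5, 0.05, 5)
print(1.0 / rho_samples)   # depths spread asymmetrically around 2 m
```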
2022-06-30 07:26:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3202303647994995, "perplexity": 1771.905385376681}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103669266.42/warc/CC-MAIN-20220630062154-20220630092154-00281.warc.gz"}
https://www.iacr.org/cryptodb/data/paper.php?pubkey=11593
## CryptoDB

### Paper: A Distributed and Computationally Secure Key Distribution Scheme

Authors: Vanesa Daza, Javier Herranz, Carles Padró, Germán Sáez

URL: http://eprint.iacr.org/2002/069

In 1999, Naor, Pinkas and Reingold introduced schemes in which some groups of servers distribute keys among a set of users in a distributed way. They gave some specific proposals, both in the unconditional and in the computational security framework. Their computationally secure scheme is based on the Decisional Diffie-Hellman Assumption. This model assumes secure communication between users and servers. Furthermore, it requires users to do some expensive computations in order to obtain a key. In this paper we modify the model introduced by Naor et al., requiring authenticated channels instead of assuming the existence of secure channels. Our model makes the users' computations easier, because most computations of the protocol are carried out by servers, keeping to a more realistic situation. We propose a basic scheme, which makes use of the ElGamal cryptosystem and fits this model in the case of a passive adversary. We then add zero-knowledge proofs and verifiable secret sharing to protect against the actions of an active adversary. We consider general structures (not only threshold ones) for those subsets of servers that can provide a key to a user and for those tolerated subsets of servers that can be corrupted by the adversary. We find necessary combinatorial conditions on these structures in order to provide security to our scheme.

##### BibTeX

@misc{eprint-2002-11593,
  title={A Distributed and Computationally Secure Key Distribution Scheme},
  booktitle={IACR Eprint archive},
  keywords={cryptographic protocols / Key distribution, secret sharing schemes.},
  url={http://eprint.iacr.org/2002/069},
  note={Proceedings of Information Security Conference, ISC'02. LNCS 2433, pp. 342--356. jherranz@mat.upc.es 12153 received 1 Jun 2002, last revised 11 Apr 2003},
  author={Vanesa Daza and Javier Herranz and Carles Padr\'o and Germ\'an S\'aez},
  year=2002
}
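For background on the building block the abstract mentions, here is a generic, textbook ElGamal sketch in Python (my illustration, not the paper's distributed protocol; the tiny prime is for demonstration only, and real deployments need a large safe prime and proper message encoding):

```python
import random

p = 467          # small prime, illustration only
g = 2            # group element used as the generator

x = random.randrange(2, p - 1)   # private key
h = pow(g, x, p)                 # public key h = g^x mod p

def encrypt(m, h):
    k = random.randrange(2, p - 1)          # per-message ephemeral key
    return pow(g, k, p), (m * pow(h, k, p)) % p

def decrypt(c1, c2, x):
    s = pow(c1, x, p)                       # shared secret g^{kx}
    return (c2 * pow(s, p - 2, p)) % p      # multiply by s^{-1} (Fermat)

c1, c2 = encrypt(123, h)
assert decrypt(c1, c2, x) == 123
```

In the distributed setting the paper studies, no single server holds $x$; instead, qualified subsets of servers jointly perform the exponentiations, which is what verifiable secret sharing and the zero-knowledge proofs are protecting.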
2022-01-23 06:12:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4695097804069519, "perplexity": 1976.3036116400447}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304134.13/warc/CC-MAIN-20220123045449-20220123075449-00093.warc.gz"}
http://pdglive.lbl.gov/DataBlock.action?node=M200M&home=MXXX030
# ${{\boldsymbol \eta}_{{b}}{(2S)}}$ MASS

VALUE (MeV) | EVTS | DOCUMENT ID | TECN | COMMENT
$9999.0$ $\pm3.5$ ${}^{+2.8}_{-1.9}$ | 26k | MIZUK 1 2012 | BELL | ${{\mathit e}^{+}}$ ${{\mathit e}^{-}}$ $\rightarrow$ ${{\mathit \gamma}}{{\mathit \pi}^{+}}{{\mathit \pi}^{-}}$ + hadrons

• • • We do not use the following data for averages, fits, limits, etc. • • •

$9974.6$ $\pm2.3$ $\pm2.1$ | $11$ $\pm4$ | DOBBS 2, 3 2012 | | ${{\mathit \Upsilon}{(2S)}}$ $\rightarrow$ ${{\mathit \gamma}}$ hadrons

1  Assuming ${\Gamma}_{{\mathit \eta}_{{b}}{(2S)}}$ = 4.9 MeV. Not independent of the corresponding mass difference measurement.
2  Obtained by analyzing CLEO III data but not authored by the CLEO Collaboration.
3  Assuming ${\Gamma}_{{\mathit \eta}_{{b}}{(2S)}}$ = 5 MeV. Not independent of the corresponding mass difference measurement.

References:

DOBBS 2012, PRL 109 082001: Observation of the ${{\mathit \eta}_{{b}}{(2S)}}$ Meson in ${{\mathit \Upsilon}{(2S)}}$ $\rightarrow$ ${{\mathit \gamma}}{{\mathit \eta}_{{b}}{(2S)}}$, ${{\mathit \eta}_{{b}}{(2S)}}$ $\rightarrow$ Hadrons and Confirmation of the ${{\mathit \eta}_{{b}}{(1S)}}$ Meson

MIZUK 2012, PRL 109 232002: Evidence for the ${{\mathit \eta}_{{b}}{(2S)}}$ and Observation of ${{\mathit h}_{{b}}{(1P)}}$ $\rightarrow$ ${{\mathit \eta}_{{b}}{(1S)}}{{\mathit \gamma}}$ and ${{\mathit h}_{{b}}{(2P)}}$ $\rightarrow$ ${{\mathit \eta}_{{b}}{(1S)}}{{\mathit \gamma}}$
2019-12-16 04:51:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8823788166046143, "perplexity": 2476.524299697366}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541317967.94/warc/CC-MAIN-20191216041840-20191216065840-00408.warc.gz"}
https://revbayes.github.io/tutorials/sse/classe.html
# State-dependent diversification with the ClaSSE model

## Introduction

In the previous examples we have modeled all character state transitions as anagenetic changes. Anagenetic changes occur along the branches of a phylogeny, within a lineage. Cladogenetic changes, on the other hand, occur at speciation events. They represent changes in a character state that may be associated with speciation events due to increased reproductive isolation, for example colonizing a new geographic area or a shift in chromosome number. Note that it can be quite tricky to determine if a character state shift is a cause or a consequence of speciation, but we can at least test if state changes tend to occur in the same time window as speciation events.

A major challenge for all phylogenetic models of cladogenetic character change is accounting for unobserved speciation events due to lineages going extinct and not leaving any extant descendants (Bokma 2002), or due to incomplete sampling of lineages in the present. Teasing apart the phylogenetic signal for cladogenetic and anagenetic processes given unobserved speciation events is a major difficulty. Commonly used biogeographic models like the dispersal-extinction-cladogenesis model (DEC; Ree and Smith (2008)) simply ignore unobserved speciation events and so result in biased estimates of cladogenetic versus anagenetic change. This bias can be avoided by using the Cladogenetic State change Speciation and Extinction (ClaSSE) model (Goldberg and Igić 2012), which accounts for unobserved speciation events by jointly modeling both character evolution and the phylogenetic birth-death process. ClaSSE models extend other SSE models by incorporating both cladogenetic and anagenetic character evolution. This approach has been used to model biogeographic range evolution (Goldberg et al. 2011) and chromosome number evolution (missing reference).

Here we will use RevBayes to examine biogeographic range evolution in the primates. We will model biogeographic range evolution similarly to a DEC model; however, we will use ClaSSE to account for speciation events unobserved due to extinction or incomplete sampling.

## Setting up the analysis

### Reading in the data

Begin by reading in the observed tree.

observed_phylogeny <- readTrees("data/primates_biogeo.tre")[1]

Get the taxa in the tree. We'll need this later on.

taxa = observed_phylogeny.taxa()

Now let's read in the biogeographic range data. The areas are represented as the following character states:

• 0 = 00 = the null state with no range
• 1 = 01 = New World only
• 2 = 10 = Old World only
• 3 = 11 = both New and Old World

For consistency, we have chosen to use the same representation of biogeographic ranges used in the RevBayes biogeography/DEC tutorial. Each range is represented as both a natural number (0, 1, 2, 3) and a corresponding bitset (00, 01, 10, 11). The null state (state 0) is used in DEC models to represent a lineage that has no biogeographic range and is therefore extinct. Our model will include this null state as well; however, we will explicitly model extinction as part of the birth-death process, so our character will never enter state 0.

data_biogeo = readCharacterDataDelimited("data/primates_biogeo.tsv", stateLabels="0123", type="NaturalNumbers", delimiter="\t", headers=TRUE)

We also need to create the move and monitor vectors.
moves = VectorMoves()
monitors = VectorMonitors()

### Set up the extinction rates

We are going to draw both the anagenetic transition rates and the diversification rates from a lognormal distribution. The prior will be centered on $\ln(\#\text{Taxa}/2) / \text{tree-age}$, the expected net diversification rate, and the log-scale SD will be 1.0, so the 95% prior interval ranges well over 2 orders of magnitude.

num_species <- 424 # approximate total number of primate species
rate_mean <- ln( ln(num_species/2.0) / observed_phylogeny.rootAge() )
rate_sd <- 1.0

The extinction rates will be stored in a vector where each element represents the extinction rate for the corresponding character state. We have chosen to allow a lineage to go extinct in both the New and Old World at the same time (like a global extinction event). As an alternative, you could restrict the model so that a lineage can only go extinct if its range is limited to one area.

extinction_rates[1] <- 0.0 # the null state (state 0)
extinction_rates[2] ~ dnLognormal(rate_mean, rate_sd) # extinction when the lineage is in New World (state 1)
extinction_rates[3] ~ dnLognormal(rate_mean, rate_sd) # extinction when the lineage is in Old World (state 2)
extinction_rates[4] ~ dnLognormal(rate_mean, rate_sd) # extinction when in both (state 3)

Note that Rev vectors are indexed starting with 1, yet our character states start at 0. So extinction_rates[1] will represent the extinction rate for character state 0.

Add MCMC moves for each extinction rate.

moves.append( mvSlide( extinction_rates[2], weight=4 ) )
moves.append( mvSlide( extinction_rates[3], weight=4 ) )
moves.append( mvSlide( extinction_rates[4], weight=4 ) )

Let's also create a deterministic variable to monitor the overall extinction rate.

total_extinction := sum(extinction_rates)

### Set up the anagenetic transition rate matrix

First, let's create the rates of anagenetic dispersal:

anagenetic_dispersal_13 ~ dnLognormal(rate_mean, rate_sd) # disperse from New to Old World 01 -> 11
anagenetic_dispersal_23 ~ dnLognormal(rate_mean, rate_sd) # disperse from Old to New World 10 -> 11

Now add MCMC moves for each anagenetic dispersal rate.

moves.append( mvSlide( anagenetic_dispersal_13, weight=4 ) )
moves.append( mvSlide( anagenetic_dispersal_23, weight=4 ) )

The anagenetic transitions will be stored in a 4 by 4 instantaneous rate matrix. We will construct this by first creating a vector of vectors. Let's begin by initializing all rates to 0.0:

for (i in 1:4) {
    for (j in 1:4) {
        r[i][j] <- 0.0
    }
}

Now we can populate the non-zero rates in the anagenetic transition rate matrix:

r[2][4] := anagenetic_dispersal_13
r[3][4] := anagenetic_dispersal_23
r[4][2] := extinction_rates[3]
r[4][3] := extinction_rates[2]

Note that we have modeled the rate of 11 $\rightarrow$ 01 (3 $\rightarrow$ 1) as being the rate of going extinct in area 2, and the rate of 11 $\rightarrow$ 10 (3 $\rightarrow$ 2) as being the rate of going extinct in area 1.

Now we pass our vector of vectors into the fnFreeK function to create the instantaneous rate matrix.

ana_rate_matrix := fnFreeK(r, rescaled=false)

### Set up the cladogenetic speciation rate matrix

Here we need to define each cladogenetic event type in the form [ancestor_state, daughter1_state, daughter2_state] and assign each cladogenetic event type a corresponding speciation rate. The first type of cladogenetic event we'll specify is widespread sympatry.
Widespread sympatric cladogenesis is where the biogeographic range does not change; that is, the daughter lineages inherit the same range as the ancestor. In this example we are not going to allow speciation events like 11 $\rightarrow$ 11, 11, as it seems biologically implausible. However, if you wanted, you could add this to your model.

Define the speciation rate for widespread sympatric cladogenesis events:

speciation_wide_sympatry ~ dnLognormal(rate_mean, rate_sd)
moves.append( mvSlide( speciation_wide_sympatry, weight=4 ) )

Define the widespread sympatric events:

clado_events[1] = [1, 1, 1] # 01 -> 01, 01
clado_events[2] = [2, 2, 2] # 10 -> 10, 10

and assign each the same speciation rate:

speciation_rates[1] := speciation_wide_sympatry/2
speciation_rates[2] := speciation_wide_sympatry/2

Subset sympatry is where one daughter lineage inherits the full ancestral range but the other lineage inherits only a single region.

speciation_sub_sympatry ~ dnLognormal(rate_mean, rate_sd)
moves.append( mvSlide( speciation_sub_sympatry, weight=4 ) )

Define the subset sympatry events and assign each a speciation rate:

clado_events[3] = [3, 3, 1] # 11 -> 11, 01
clado_events[4] = [3, 1, 3] # 11 -> 01, 11
clado_events[5] = [3, 3, 2] # 11 -> 11, 10
clado_events[6] = [3, 2, 3] # 11 -> 10, 11
speciation_rates[3] := speciation_sub_sympatry/4
speciation_rates[4] := speciation_sub_sympatry/4
speciation_rates[5] := speciation_sub_sympatry/4
speciation_rates[6] := speciation_sub_sympatry/4

Allopatric cladogenesis is when the two daughter lineages split the ancestral range:

speciation_allopatry ~ dnLognormal(rate_mean, rate_sd)
moves.append( mvSlide( speciation_allopatry, weight=4 ) )

Define the allopatric events:

clado_events[7] = [3, 1, 2] # 11 -> 01, 10
clado_events[8] = [3, 2, 1] # 11 -> 10, 01
speciation_rates[7] := speciation_allopatry/2
speciation_rates[8] := speciation_allopatry/2

Now let's create a deterministic variable to monitor the overall speciation rate:

total_speciation := sum(speciation_rates)

Finally, we construct the cladogenetic speciation rate matrix from the cladogenetic event types and the speciation rates.

clado_matrix := fnCladogeneticSpeciationRateMatrix(clado_events, speciation_rates, 4)

Let's view the cladogenetic matrix to see if we have set it up correctly:

clado_matrix

### Set up the cladogenetic character state-dependent birth-death process

For simplicity we will fix the root frequencies to be equal, except for the null state, which has probability 0.

root_frequencies <- simplex([0, 1, 1, 1])

rho is the probability of sampling species at the present:

rho <- observed_phylogeny.ntips()/num_species

Now we construct a stochastic variable drawn from the cladogenetic character state-dependent birth-death process.

classe ~ dnCDCladoBDP( rootAge = observed_phylogeny.rootAge(),
                       cladoEventMap = clado_matrix,  # cladogenetic event rates (argument name assumed; it was dropped from the extracted text)
                       extinctionRates = extinction_rates,
                       Q = ana_rate_matrix,
                       delta = 1.0,
                       pi = root_frequencies,
                       rho = rho,
                       condition = "time",
                       taxa = taxa )

Clamp the model with the observed data.

classe.clamp( observed_phylogeny )
classe.clampCharData( data_biogeo )

### Finalize the model

Just like before, we must create a workspace model object.

mymodel = model(classe)

## Set up and run the MCMC

First, set up the monitors that will output parameter values to file and screen.
### Set up the cladogenetic character state-dependent birth-death process

For simplicity we will fix the root frequencies to be equal, except for the null state, which is given probability 0:

```
root_frequencies <- simplex([0, 1, 1, 1])
```

`rho` is the probability of sampling species at the present:

```
rho <- observed_phylogeny.ntips()/num_species
```

Now we construct a stochastic variable drawn from the cladogenetic character state-dependent birth-death process. The cladogenetic rate matrix we just built is passed in alongside the anagenetic rate matrix; here it is given as the cladoEventMap argument (the name used by current RevBayes releases; adjust if your version differs):

```
classe ~ dnCDCladoBDP( rootAge         = observed_phylogeny.rootAge(),
                       cladoEventMap   = clado_matrix,
                       extinctionRates = extinction_rates,
                       Q               = ana_rate_matrix,
                       delta           = 1.0,
                       pi              = root_frequencies,
                       rho             = rho,
                       condition       = "time",
                       taxa            = taxa )
```

Clamp the model with the observed data:

```
classe.clamp( observed_phylogeny )
classe.clampCharData( data_biogeo )
```

### Finalize the model

Just like before, we must create a workspace model object:

```
mymodel = model(classe)
```

## Set up and run the MCMC

First, set up the monitors that will output parameter values to file and screen:

```
monitors.append( mnModel(filename="output/primates_ClaSSE.log", printgen=1) )
monitors.append( mnJointConditionalAncestralState(tree=observed_phylogeny,
                                                  cdbdp=classe,
                                                  type="NaturalNumbers",
                                                  printgen=1,
                                                  withTips=true,
                                                  withStartStates=true,
                                                  filename="output/anc_states_primates_ClaSSE.log") )
monitors.append( mnScreen(printgen=1, speciation_wide_sympatry, speciation_sub_sympatry,
                          speciation_allopatry, extinction_rates) )
```

Now define our workspace MCMC object:

```
mymcmc = mcmc(mymodel, monitors, moves)
```

We will perform a pre-burnin to tune the proposals and then run the MCMC. Note that for a real analysis you would want to run the MCMC for many more iterations:

```
mymcmc.burnin(generations=200, tuningInterval=5)
mymcmc.run(generations=1000)
```

## Summarize ancestral states

When the analysis has completed, you can summarize the ancestral states. The ancestral states are estimated both for the "beginning" and "end" state of each branch, so that the cladogenetic changes that occurred at speciation events are distinguished from the changes that occurred anagenetically along branches. Make sure the include_start_states argument is set to true:

```
anc_states = readAncestralStateTrace("output/anc_states_primates_ClaSSE.log")
anc_tree = ancestralStateTree(tree=observed_phylogeny,
                              ancestral_state_trace_vector=anc_states,
                              include_start_states=true,
                              file="output/anc_states_primates_ClaSSE_results.tree",
                              burnin=0,
                              summary_statistic="MAP",
                              site=0)
```

### Plotting ancestral states

Like before, we'll plot the ancestral states using the RevGadgets R package. Execute the script plot_anc_states_ClaSSE.R in R. The resulting figure shows the maximum a posteriori (MAP) estimate for each node, with the posterior probability of the states represented by the size of the dots.

```
library(RevGadgets)

tree_file = "output/anc_states_primates_ClaSSE_results.tree"

plot_ancestral_states(tree_file,
                      summary_statistic="MAPRange",
                      tip_label_size=3,
                      tip_label_offset=1,
                      xlim_visible=c(0,100),
                      node_label_size=0,
                      shoulder_label_size=0,
                      include_start_states=TRUE,
                      show_posterior_legend=TRUE,
                      node_size_range=c(4, 7),
                      alpha=0.75)

output_file = "RevBayes_Anc_States_ClaSSE.pdf"
ggsave(output_file, width = 11, height = 9)
```

## Exercise

- Using either R or Tracer, visualize the posterior estimates for the different types of cladogenetic events. What kind of speciation events are most common?
- As we have specified the model, we did not allow cladogenetic long-distance (jump) dispersal, for example 01 $\rightarrow$ 01, 10. Modify this script to include cladogenetic long-distance dispersal and calculate Bayes factors to see which model fits the data better. How does this affect the ancestral state estimates? (A starting sketch for the extra event types is given below.)
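For the jump-dispersal exercise, here is a minimal sketch of the additions, written in the same style as the script above. The event indices (9 through 12), the reuse of the same lognormal prior, and the even split of the rate across the four events are assumptions you may want to revisit:

```
# hypothetical jump-dispersal extension (not part of the original script)
speciation_jump ~ dnLognormal(rate_mean, rate_sd)
moves.append( mvSlide( speciation_jump, weight=4 ) )

clado_events[9]  = [1, 1, 2] # 01 -> 01, 10 (jump out of the New World)
clado_events[10] = [1, 2, 1] # 01 -> 10, 01
clado_events[11] = [2, 2, 1] # 10 -> 10, 01 (jump out of the Old World)
clado_events[12] = [2, 1, 2] # 10 -> 01, 10

speciation_rates[9]  := speciation_jump/4
speciation_rates[10] := speciation_jump/4
speciation_rates[11] := speciation_jump/4
speciation_rates[12] := speciation_jump/4
```

After adding these, re-run the fnCladogeneticSpeciationRateMatrix call so that the new events and rates enter clado_matrix.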
2022-05-18 13:04:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4798080325126648, "perplexity": 5513.959301135202}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522270.37/warc/CC-MAIN-20220518115411-20220518145411-00561.warc.gz"}
https://math.stackexchange.com/questions/3231009/is-this-valid-when-deriving-quadratic-equation
# Is this valid when deriving quadratic equation?

When deriving the quadratic formula, isn't the square root of $(x+\frac{b}{2a})^2$ the absolute value of $x+\frac{b}{2a}$? It's usually just represented as $x+\frac{b}{2a}$ without absolute value, and then $\frac{b}{2a}$ is subtracted from the left side and boom, there's the quadratic formula. I just don't understand why it's not the absolute value of $x+\frac{b}{2a}$. For example, the square root of $x^2$ is the absolute value of $x$, which is equal to $\pm x$. Sorry if this is confusing, it's more of a conceptual thing. Thank you in advance. – philalethesnew

- There is the $\pm$ sign... – Botond May 18 at 20:08
- You would be equating it to a "plus or minus" square root though, right? If so, this is fine. For example, if $x^2=2$, then $x=\pm\sqrt{2}$. – Minus One-Twelfth May 18 at 20:08
- Why isn't it represented as $|x|=\pm\sqrt{2}$? Instead it's written as $x=\pm\sqrt{2}$. Maybe I am just being picky or missing something. – philalethesnew May 18 at 20:13
- Would it be incorrect to write it as $|x|=\pm\sqrt{2}$, or does it need to be represented as $x=\pm\sqrt{2}$? – philalethesnew May 18 at 20:16
- It would be wrong to write $|x|=\pm\sqrt{2}$. This is because $|x|$ is never negative. What is correct is that $|x|=\sqrt{2}$. From this, you can conclude that $x=\pm\sqrt{2}$. – Minus One-Twelfth May 18 at 20:21

## 4 Answers

You seem to be asking why, after the step $\left(x+\cfrac b{2a}\right)^2=\cfrac{b^2-4ac}{4a^2}$, they don't go to

$$\left|x+\cfrac b{2a}\right|=\sqrt{\frac{b^2-4ac}{4a^2}}$$

But we actually do! To write the above is exactly the same as to write

$$x+\cfrac b{2a}=\pm\sqrt{\frac{b^2-4ac}{4a^2}}$$

just as $x^2=a^2\implies |x|=a\;(a\ge 0)$ is exactly the same as $x^2=a^2\implies x=\pm a\;(a\ge0)$, meaning, of course, that once we take the + sign and the other time the − sign.

- :) very helpful – philalethesnew May 18 at 21:09
- Hey Don. Thanks for the help, but I have to clarify. Out of your three main expressions, should the right side of the 1st expression be over 4a^2, not 2a? Same with your second expression: it should be over 4a^2. Only in your 3rd expression should the denominator become 2a, and the numerator should be the square root of b^2−4ac. Am I correct? – philalethesnew May 19 at 1:04
- The third expression is missing the square root of b^2−4ac, not just b^2−4ac. – philalethesnew May 19 at 1:20
- @philalethesnew Good catch, thanks. Edited. – DonAntonio May 19 at 8:18

Well actually it is done.

$$ax^2 + bx + c = 0 \implies x^2+\frac{b}{a}x +\frac{c}{a} = 0$$

Now completing the square:

$$\left(x+\frac{b}{2a}\right)^2= \frac{b^2-4ac}{4a^2}$$

Now, since we know that $x^2 = a$, $a > 0$, has the two solutions $x=\pm\sqrt{a}$: if $b^2 - 4ac > 0$, there are two solutions indeed! $x= \frac{-b \pm \sqrt{b^2-4ac}}{2a}$. Does that help?

- After you write $(x+\frac{b}{2a})^2=\frac{b^2-4ac}{4a^2}$ – philalethesnew May 18 at 20:21
- shouldn't the square root of the left side be written as $|x+\frac{b}{2a}|$, not just $x+\frac{b}{2a}$? – philalethesnew May 18 at 20:22
- Yeah, what about it? – Vizag May 18 at 20:22
- Read Minus One-Twelfth's comment below the question.
– Vizag May 18 at 20:23

You can look at it differently:

$$ax^2+bx+c=0 \iff x^2+\frac bax+\frac ca=0\iff \\ \left(x+\frac b{2a}\right)^2-\frac{b^2}{4a^2}+\frac{4ac}{4a^2}=0 \iff \\ \left(x+\frac b{2a}\right)^2-\left(\frac{\sqrt{b^2-4ac}}{2a}\right)^2=0 \iff \\ \left[\left(x+\frac b{2a}\right)-\frac{\sqrt{b^2-4ac}}{2a}\right]\cdot \left[\left(x+\frac b{2a}\right)+\frac{\sqrt{b^2-4ac}}{2a}\right]=0 \iff \\ \left(x+\frac b{2a}\right)-\frac{\sqrt{b^2-4ac}}{2a}=0 \ \ \text{or} \ \ \left(x+\frac b{2a}\right)+\frac{\sqrt{b^2-4ac}}{2a}=0 \iff \\ x=\frac{-b+\sqrt{b^2-4ac}}{2a} \ \ \text{or} \ \ x=\frac{-b-\sqrt{b^2-4ac}}{2a} \iff \\ x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}.$$

When they get to the point of having

$$\left(x+\frac{b}{2a}\right)^2 =-\frac{c}{a}+\left(\frac{b}{2a}\right)^2$$

the square root of both sides has no absolute value sign on the left because the right side can be $\pm$:

$$\left|x+\frac{b}{2a}\right|\ne\pm\sqrt{-\frac{c}{a}+\left(\frac{b}{2a}\right)^2}$$

One complete derivation explained simply is shown here.

- You should clarify the usage of $\pm$: the square root does not(!) include $\pm$ just because it is a square root. – Hirshy May 19 at 20:24
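As a concrete instance of the point made in these answers (a worked example added for illustration), take $x^2+2x-3=0$. Completing the square gives

$$(x+1)^2 = 4 \implies |x+1| = \sqrt{4} = 2 \implies x+1 = \pm 2 \implies x = 1 \ \text{or}\ x = -3,$$

exactly the two values $\frac{-2\pm\sqrt{16}}{2}$ produced by the quadratic formula: the absolute value appears for one step and immediately unpacks into the $\pm$.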
2019-08-18 19:46:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 24, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8309073448181152, "perplexity": 701.1815381965915}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313996.39/warc/CC-MAIN-20190818185421-20190818211421-00512.warc.gz"}
http://mathoverflow.net/feeds/question/105899
# Power series with double zeros

Asked by Jörg Neunhäuserer, 2012-08-30

How many power series of the form $1+\sum_{k=1}^{\infty} a_{k}x^{k}$ with $a_{k}\in \{-1,0,1\}$ are there that have a double zero $f(x)=f'(x)=0$ in $(0,1)$? OK, there are many ways to understand the question: set-theoretical, topological, measure-theoretical. I would be especially interested in the Bernoulli measures of the coefficient space $C\subseteq \{-1,0,1\}^{\mathbb{N}}$ of such series.

## Answer by Igor Rivin

At least the set-theoretical question can be answered: there are continuum many such series, as can be deduced from the results in this paper (not all of them attributed by the authors to themselves):

MR2293600 (2007k:30003) Shmerkin, Pablo (FIN-JVS-MS); Solomyak, Boris (1-WA). Zeros of $\{-1,0,1\}$ power series and connectedness loci for self-affine sets. Experiment. Math. 15 (2006), no. 4, 499–511.

## Answer by Alexander Shamov

I am going to address the question for $\mathrm{Bernoulli}(1/2)$ measures, using probabilistic language. This is not a complete answer, but I am trying to relate your question to the properties of the distribution of $f(x)$. Clearly, for $x<1/2$ we never even reach zero, but my guess is that for $x>1/2$ this distribution is absolutely continuous, though I am unable to prove this at the moment.

So formally, at least,

$$\mathsf{E} \, \sum_{f(x)=0} \mathbf{1}\{|f'(x)| < \epsilon\} = \int_0^1 \mathsf{E} \, \delta(f(x))\, \mathbf{1}\{|f'(x)|<\epsilon\}\, |f'(x)|\, dx \le \epsilon \int_0^1 \mathsf{E} \, \delta(f(x))\, dx.$$

$\mathsf{E} \, \delta$ is the density at zero, and it can be made perfect sense of, provided that the law of $f(x)$ has a continuous density at zero.
I don't know whether it has a continuous density, but if we manage to prove that $f(x)$ has at least a bounded density for $x>1/2$, then we can write inequalities with approximations of $\delta$ to get the same results...

## Answer by Robert Israel

Some more examples with polynomials:

$$\begin{gathered}
\left( z^{6}+z^{5}-z^{3}+z+1 \right) \left( z+z^{4}-1 \right)^{2}\\
\left( z^{8}+z^{7}-z^{5}-z^{4}-z^{3}+z+1 \right) \left( z+z^{6}-1 \right)^{2}\\
\left( z^{9}+z^{8}-z^{6}-z^{5}-z^{4}-z^{3}+z+1 \right) \left( z+z^{7}-1 \right)^{2}\\
\left( z^{4}-z^{3}+z^{2}-z+1 \right) \left( z^{2}+z^{5}-1 \right)^{2}\\
\left( z^{6}-z^{5}+z^{4}-z^{3}+z^{2}-z+1 \right) \left( z^{2}+z^{7}-1 \right)^{2}\\
\left( z^{6}-z^{5}+z^{3}-z+1 \right) \left( z^{3}+z^{4}-1 \right)^{2}\\
\left( z^{7}-z^{5}+z^{4}+z^{3}-z^{2}+1 \right) \left( z^{3}+z^{5}-1 \right)^{2}\\
\left( -z^{10}+z^{8}-z^{7}+z^{6}+z^{5}-2z^{4}+z^{3}-z+1 \right) \left( z^{3}+z^{7}-1 \right)^{2}\\
\left( z^{4}-z^{3}+z^{2}-z+1 \right) \left( z^{4}+z^{5}-1 \right)^{2}\\
\left( z^{4}-z^{2}+1 \right) \left( z^{4}+z^{6}-1 \right)^{2}\\
\left( z^{6}-z^{5}+z^{4}-z^{3}+z^{2}-z+1 \right) \left( z^{4}+z^{7}-1 \right)^{2}\\
\left( z^{8}-z^{7}+z^{5}-z^{4}+z^{3}-z+1 \right) \left( z^{5}+z^{6}-1 \right)^{2}\\
\left( z^{6}-z^{5}+z^{4}-z^{3}+z^{2}-z+1 \right) \left( z^{6}+z^{7}-1 \right)^{2}\\
\left( -z^{14}-z^{13}-2z^{12}-z^{11}+z^{9}+2z^{8}-2z^{5}+2z^{2}+z+1 \right) \left( z^{5}-z^{3}+z^{2}+z-1 \right)^{2}\\
\left( z^{5}+z^{4}-z^{3}-z^{2}+z+1 \right) \left( z^{5}+z^{3}-z^{2}+z-1 \right)^{2}\\
\left( z^{15}+z^{14}-z^{11}-z^{10}+z^{9}+z^{8}+z^{7}+z^{6}-z^{5}-z^{4}+z+1 \right) \left( z^{5}-z^{4}+z^{3}+z-1 \right)^{2}
\end{gathered}$$
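To see why products of this shape have the required double zero (a remark added for context, not from the thread): each squared factor vanishes somewhere in $(0,1)$ by the intermediate value theorem (for instance $g(z)=z+z^{4}-1$ satisfies $g(0)=-1$ and $g(1)=1$), and if $g(x_0)=0$ then $f=h\,g^{2}$ satisfies $f(x_0)=f'(x_0)=0$, since $f'=h'g^{2}+2hgg'$ also vanishes at $x_0$. The first factor $h$ is chosen so that the expanded product again has coefficients in $\{-1,0,1\}$ and constant term 1.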
2013-05-19 18:37:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8934195637702942, "perplexity": 870.7999143405984}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697974692/warc/CC-MAIN-20130516095254-00076-ip-10-60-113-184.ec2.internal.warc.gz"}
https://samuel-lereah.com/db/proofdb/Fundamental%20theorem%20of%20calculus
# Fundamental theorem of calculus

The fundamental theorem of calculus tells us that, given some function $f$ of a given regularity class, if there exists a function $F$ such that $F' = f$, then we have

$$\int_a^b f(x)\, dx = [F(x)]_a^b = F(b) - F(a)$$

## In the Riemann integral

In the case of the Riemann integral, we consider a partition of the integration interval $[a,b]$; that is, we consider some set $\{ x_i \}$, $0 \leq i \leq N$, with $x_0 = a$, $x_N = b$, and, for every value of $i$,

$$x_i < x_{i+1}$$

The lower and upper Riemann sums of that partition $P$ are defined as

\begin{eqnarray}
L_f &=& \sum_{i = 1}^N (x_i - x_{i-1}) \min_{x \in [x_{i-1}, x_i]} f(x)\\
U_f &=& \sum_{i = 1}^N (x_i - x_{i-1}) \max_{x \in [x_{i-1}, x_i]} f(x)
\end{eqnarray}
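A worked instance tying the two parts together, for $f(x)=x$ on $[0,1]$ with the uniform partition $x_i = i/N$:

$$L_f = \sum_{i=1}^N \frac{1}{N}\cdot\frac{i-1}{N} = \frac{N-1}{2N}, \qquad U_f = \sum_{i=1}^N \frac{1}{N}\cdot\frac{i}{N} = \frac{N+1}{2N},$$

both of which tend to $\tfrac{1}{2}$ as $N\to\infty$, in agreement with the fundamental theorem applied to the antiderivative $F(x)=x^2/2$: $\int_0^1 x\, dx = F(1)-F(0) = \tfrac{1}{2}$.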
2023-01-29 15:51:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9997865557670593, "perplexity": 292.4601191482091}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499744.74/warc/CC-MAIN-20230129144110-20230129174110-00092.warc.gz"}
https://gitlab.math.tu-dresden.de/backofen/amdis/-/blame/836978e927e4a0835595929a910011f6e36bb51f/AMDiS/lib/mtl4/boost/numeric/itl/updater/dfp.hpp
dfp.hpp

```cpp
// Software License for MTL
//
// Copyright (c) 2007 The Trustees of Indiana University.
//               2008 Dresden University of Technology and the Trustees of Indiana University.
//               2010 SimuNova UG (haftungsbeschränkt), www.simunova.com.
// All rights reserved.
// Authors: Peter Gottschling and Andrew Lumsdaine
//
// This file is part of the Matrix Template Library
//
// See also license.mtl.txt in the distribution.

#ifndef ITL_DFP_INCLUDE
#define ITL_DFP_INCLUDE

// Three #include directives appeared here; their targets were lost in
// extraction. They must provide mtl::Collection, MTL_THROW_IF and
// unexpected_orthogonality (MTL4/ITL headers).

namespace itl {

/// Update of Hessian matrix for e.g. Quasi-Newton by Davidon, Fletcher and Powell formula
struct dfp
{
    /// \f$ H_{k+1}=B_{k+1}^{-1}=H_k+\frac{s_k\cdot s_k^T}{y_k^T\cdot s_k}-\frac{H_k\cdot y_k\cdot y_k^T\cdot H_k^T}{y_k^T H_k\cdot y_k} \f$
    template <typename Matrix, typename Vector>
    void operator() (Matrix& H, const Vector& y, const Vector& s)
    {
        typedef typename mtl::Collection<Vector>::value_type value_type;
        assert(num_rows(H) == num_cols(H));

        Vector     h(H * y);
        value_type gamma = 1 / dot(y, s), alpha = 1 / dot(y, h);
        MTL_THROW_IF(gamma == 0.0, unexpected_orthogonality());
        MTL_THROW_IF(alpha == 0.0, unexpected_orthogonality());
        Matrix     A(alpha * y * trans(y)),
                   H2(H - H * A * H + gamma * s * trans(s));
        swap(H2, H); // faster than H = H2
    }
};

} // namespace itl

#endif // ITL_DFP_INCLUDE
```
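A note on the two guards (an observation added here, not part of the original file): they reject the degenerate cases $y^\top s = 0$ and $y^\top H y = 0$, for which the update is undefined. In quasi-Newton practice one typically enforces the stronger curvature condition $y^\top s > 0$ (for example via a Wolfe line search), which keeps the inverse-Hessian approximation $H$ positive definite across updates.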
2022-01-23 01:21:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31528440117836, "perplexity": 14432.13330079231}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303917.24/warc/CC-MAIN-20220122224904-20220123014904-00277.warc.gz"}
http://human-web.org/Oklahoma/error-exponents.html
# Error exponents

Many of the information-theoretic theorems are of asymptotic nature: for example, the channel coding theorem states that for any rate less than the channel capacity, the probability of error of the channel code can be made to vanish as the block length goes to infinity. Also, for any rate greater than the channel capacity, the probability of block error at the receiver goes to one as the block length goes to infinity. In practical situations, there are limitations to the delay of the communication and the block length must be finite. Therefore, it is important to study how the probability of error drops as the block length goes to infinity.

## Error exponent in channel coding

Consider a random codebook with $M = e^{nR}$ messages, where each component in the codebook is drawn i.i.d. according to some probability distribution with probability mass function $Q$. When a source is generated, the corresponding message $M = m$ is then transmitted to the destination. Using the union bound, the probability of confusing $X(1)$ with any other message is bounded, for any $\rho \in [0,1]$, by

$$P_{\mathrm{error}\ 1 \to \mathrm{any}} \le M^{\rho}\, (\cdots)$$

## Error exponent in source coding

For time-invariant discrete memoryless sources, the source coding theorem states that for any $\varepsilon > 0$, for any discrete-time i.i.d. source such as $X$, and for any rate less than the entropy of the source, there is a large enough $n$ and an encoder that takes $n$ i.i.d. repetitions of the source and maps them to binary bits in such a way that the source symbols are recoverable from the bits with high probability.

Let $M = e^{nR}$ be the total number of possible messages. Next, map each of the possible source output sequences to one of the messages randomly, using a uniform distribution and independently from everything else. When a source sequence is generated, the corresponding message $M = m$ is then transmitted to the destination. In order to minimize the probability of error, the decoder decodes to the source sequence $X_1^n$ that maximizes $P(X_1^n \mid A_m)$, where $A_m$ denotes the event that the sequence was mapped to message $m$.

Thus, as an example of when an error occurs, suppose that the source sequence $X_1^n(1)$ was mapped to message $1$, as was the source sequence $X_1^n(2)$. If $X_1^n(1)$ was generated at the source, but $P(X_1^n(2)) > P(X_1^n(1))$, then $X_1^n(2)$ is decoded and an error is made.

Let $S_i$ denote the event that the source sequence $X_1^n(i)$ was generated at the source, so that $P(S_i) = P(X_1^n(i))$. Let $A_{i'}$ denote the event that the source sequence $X_1^n(i')$ was mapped to the same message as the source sequence $X_1^n(i)$; since the messages were assigned randomly and independently of everything else, letting $X_{i,i'}$ denote the event that the two source sequences $i$ and $i'$ map to the same message, we can bound $P(E \mid S_i)$ as follows. An error via $i'$ requires $P(X_1^n(i')) \ge P(X_1^n(i))$, and this condition can be relaxed with the bound

$$\mathbf{1}\{P(X_1^n(i')) \ge P(X_1^n(i))\} \le \left(\frac{P(X_1^n(i'))}{P(X_1^n(i))}\right)^{\frac{1}{1+\rho}},$$

which holds when $P(X_1^n(i')) \ge P(X_1^n(i))$ because the ratio is then at least one; the inequality holds in the other case as well because the right-hand side is nonnegative.

Thus, combining everything and introducing some $\rho \in [0,1]$, we have that

$$P(E \mid S_i) \le P\left(\bigcup_{i' \ne i}\left(A_{i'} \cap \{P(X_1^n(i')) \ge P(X_1^n(i))\}\right)\right) \le \cdots$$

Letting

$$E_0(\rho) = \ln\left(\sum_{x_i} P(x_i)^{\frac{1}{1+\rho}}\right)^{(1+\rho)},$$

and finally applying this upper bound to the summation for $P(E)$, we have that

$$P(E) = \sum_i P(S_i)\, P(E \mid S_i) \le \cdots$$
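As a quick check on the definition of $E_0$ above (a worked instance, not from the original article): for a uniform binary source with $P(0)=P(1)=\tfrac{1}{2}$ and $\rho = 1$,

$$E_0(1) = \ln\left(\left(\tfrac{1}{2}\right)^{\frac{1}{2}} + \left(\tfrac{1}{2}\right)^{\frac{1}{2}}\right)^{2} = \ln\left(\sqrt{2}\right)^{2} = \ln 2,$$

and more generally $\sum_x P(x)^{\frac{1}{1+\rho}} = 2^{\frac{\rho}{1+\rho}}$ for this source, so that $E_0(\rho) = \rho\ln 2$ for all $\rho$.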
2019-06-19 21:59:50
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8512494564056396, "perplexity": 1579.607853092291}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999041.59/warc/CC-MAIN-20190619204313-20190619230313-00391.warc.gz"}