Oil...
Another filing reveals that Warren Buffett's sales from his controversial PetroChina stake are accelerating. Is he going to get rid of all his holdings amid pressure from human rights activists?
Warren Buffett's Berkshire Hathaway holding company has cut its controversial stake in PetroChina to just under 8%. His motives for selling remain a matter of speculation, split between making money and making a statement.
Warren Buffett's Berkshire Hathaway has once again trimmed its stake in PetroChina. The sale of 28 million shares for roughly $40 million reduces his stake to 8.93 percent from just over 11 percent earlier this year. The selling comes amid calls by human-rights activists for Buffett to divest from PetroChina due to the government-controlled company's ties with Sudan. Buffett, however, has said he couldn't influence the Chinese if he wanted to and most analysts think he's locking in profits.
We don't know for sure why Warren Buffett's Berkshire Hathaway has been reducing its stake in PetroChina, but he's been selling more shares than you might have thought. One group that's been urging Buffett to divest as a protest against China's "funding of the genocide in Darfur" thinks there's a message in the "steady series" of sales.
Warren Buffett's Berkshire Hathaway has sold some more of its stake in PetroChina. The question is: Is Buffett selling for the profits or to make a statement against China's human-rights record in Darfur?
Just one day after a Wall Street Journal report that Warren Buffett is buying shares of Kraft Foods, we get word today that he's sold a small slice of his stake in PetroChina, the big Chinese oil company. The AP reports there's "no indication whether he was responding to demands by activists to cut his ties to the company due to its investments in Sudan." But the very small size of the sale, about 17 million shares worth just $27 million, would appear to make it a slight adjustment rather than any kind of message.
The Save Darfur Coalition has turned its attention to Berkshire Hathaway and Fidelity Investments. |
Wikipedia’s entry on partnership begins with “Since humans are social beings, partnerships between individuals, businesses, interest-based organizations, schools, governments, and varied combinations thereof, have always been and remain commonplace.” The real nugget in this is the term social beings – no matter what kind of partnership you might be contemplating, or be lucky enough to have already established, the bottom line is that a partnership is a relationship. Whether it’s a commercial one or a personal one, it’s the relationship that needs to work to achieve success.
In a business context there are plenty of great posts out there on the pros and cons of working as a “solopreneur” vs having a business partner. For example, Rebekah Campbell from Posse, who is a sole founder, wrote a fabulous post on the pros and cons of each, and this article on the loneliness of starting up as a sole founder provides another interesting perspective. Today, however, I am going to talk about what makes a good business ownership partnership work, rather than why you might enter into one.
Shane and I are lucky enough to have known each other for a very long time; we’ve worked together before and hatched loads of hare-brained schemes before we launched into business together. As a result we have a strong understanding of each other’s strengths and weaknesses, how to wind each other up and, importantly, how to argue + move on quickly. Not every business partnership has the luxury of partners who have known each other for as long as we have, but there are a few very important ingredients for a successful partnership that you can identify quickly:
- Moral and Ethical basis = for your relationship to really work, both (or all) business partners need to share the same moral and ethical basis at its core. Knowing the other person’s bottom line is really important; if these don’t line up you will have problems down the track. For instance, one partner might want to run a pyramid scheme while the other wants to give 30% of your profits to charity, or one wants to trade in coal while the other wants to save the planet. Our bottom line is “don’t be dodgy” – we live by that statement and it is reflected in OptimalBI’s core values.
- Trust = it’s a cliché to say you need to trust your business partner, but it’s really important: trust them with your shared resources (usually $$), trust them to do their job without stepping all over their role or responsibilities, and trust them to make decisions for you when it’s necessary (and respect them enough to live with that decision). This trust needs to extend to your significant others too – if your life partner doesn’t trust your business partner then you will have some real issues to deal with at home as well.
- Philosophy towards money = business partners usually have complementary skills and styles – that’s why they work well together – but another thing you should be on the same page about is your philosophy towards money. I’m not talking about discretionary spending or expenses, rather how money influences you both. Is it a driver or a by-product of owning this business? Is being generous important (generous with your staff and charities, for instance), or is your motivation to get as much cash in your pocket as you possibly can? Philosophy towards money breaks up marriages and can equally cause strong rifts in any working business relationship.
- Exit Strategy = you don’t need to write it down or have a detailed exit plan, but sharing a common view of what your ultimate exit strategy might be is also really important. One partner might be building a business to be employed by for the rest of their life while the other wants to sell up and make a quick buck – a huge difference.
Entering into a business partnership is a long-term commitment – seriously, you might own this business for 10 years – and if you are starting up, your new business partner is someone you will spend many hours working with, so establishing a good framework for decision making and communicating enough (think over-communicate) is vital. If there are more than 2 partners this is magnified, so make sure all of you are on the same page. We had a 3rd partner for a short time and quickly found Shane and I were communicating & moving forward at a different pace, leaving our other partner behind – something we recognised and resolved quite quickly.
For me, I couldn’t do this alone because I need to bounce everything off others to validate my thoughts and ideas. We don’t always agree (make that don’t often agree) but that’s part of the fun! Next chapter in this series is on Joint Ventures. Happy sharing. Vic. |
Nazem Kadri: The Right Pick in 2009
Imagine for a second that your NHL team could have a do-over – a chance to reanalyze their possibilities in any given National Hockey League entry draft. Then how would […]
No matter what team you root for or which sport is your favorite, if you are anything like me you are always wondering who is next. Who is going […]
Over the last several weeks, The Hockey Writers has been bringing you a series of posts detailing some fantasy hockey pickups of lesser-owned players. Despite the fact that some of […]
I am sure after such a long road trip the New York Rangers are happy to be home. I am also sure that they didn’t envision their first home game […]
It has been 19 years since the New York Rangers ended their ‘curse’ (if you believe in those kinds of things) and won the Stanley Cup in 1994 after […]
In 1971, the rock band ‘The Who’ released the song “Won’t Get Fooled Again.” While the lyrics suggest it is a political call-to-action tune, it could certainly pertain to New York Rangers fans this season.
The old adage is that you don’t mess with the lineup when you’re winning games. Sometimes, though, a minor change can help build upon solid efforts. That’s what the Rangers […]
When Glen Sather took over the reins as New York Rangers General Manager in 2000, he and his staff did not initially light it up at the NHL entry draft. […]
What a difference a year makes. Chris Kreider burst onto the Broadway stage in the 2012 playoffs. After winning a National Championship with Boston College, the Rangers’ first round pick […]
The sprint to the NHL playoffs begins Saturday, and with little time to prepare for the season, things could get quite interesting sooner rather than later. Predicting what may happen […]
As future hall of famers Teemu Selanne and Daniel Alfredsson are getting ready to start their farewell tours throughout NHL arenas, fans are getting ready to welcome in the next […]
With the lockout over, there are still questions surrounding the New York Rangers as they enter the 2013 season. Here are a few of them.
|
We recently reviewed a bunch of macro lenses for the Canon EF and EF-S mounts. These include both proprietary Canon lenses as well as third party lenses. Today, we shall be looking at the best macro lenses for Nikon f-mount.
We should really have said best micro lenses, because that is what Nikon calls its close-focusing lenses. Micro is to Nikon what macro is to Canon. Nikon has several good lenses that allow you to get in pretty close and capture a 1:1 (life-size) perspective of a subject. Let’s begin with the top lenses manufactured by Nikon, and then we can move on to some third party solutions.
Original Nikon Made Lenses
1. Nikon AF-S Micro-Nikkor 105 mm 1:2.8G VR
The Nikon AF-S Micro-Nikkor 105mm is the auto-focusing successor to the older manual-focus Nikkor 105mm f/2.8, which dates back to 1983. If you need auto-focusing, this is the lens you should opt for (the old model is no longer available anyway).
This new Nikkor macro lens feels solid in the hands. It is compatible with all of Nikon’s modern auto-focusing DSLRs as well as most older 35mm film cameras. On smaller crop-sensor (DX) cameras, the 1.5x crop factor extends the effective focal length to a 35mm-format equivalent of 157.5mm.
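As a quick sanity check on that figure (using the commonly quoted 1.5x crop factor for Nikon DX bodies, so purely an illustration): equivalent focal length = actual focal length × crop factor, i.e. 105mm × 1.5 = 157.5mm. The same multiplication applies to any full-frame lens mounted on a DX body.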
It is quite well made, weighs 720 grams, and its metal barrel adds to that solid feel. The internal construction of the lens includes a total of 14 elements arranged in 12 groups. There are 9 rounded aperture diaphragm blades that produce nice bokeh.
Auto-focusing on the lens is powered by Nikon’s Silent Wave Motor (SWM). Typical of modern auto-focusing mechanisms, it is very quiet. The lens also comes with a manual focusing override option, which means you can grab hold of the focus ring and adjust focus manually even in AF mode. Still on the subject of focusing: the lens focuses internally, so there is no change in barrel length as it focuses.
This is a true macro lens, meaning it will produce 1:1 or life-size reproduction of anything you can aim it at from a close distance. Also, being a medium tele macro lens, the 105mm lets you focus close from a sufficient working distance. This ensures that you will not be scaring the living daylights out of your small subjects, and that you will not block the light with your own body and/or the lens-camera combination.
Vibration Reduction has been provided in the lens. VR is to Nikon what IS (Image Stabilizer) is to Canon. The VR on this lens is rated at 3 stops, which means you can use a shutter speed up to three stops slower than you could with a non-VR lens in the same lighting situation.
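To put the three-stop claim into rough numbers (an illustration only – real-world results depend on your technique and the subject): each stop halves the shutter speed, so if you would normally want around 1/125 sec to hand-hold a 105mm lens, three stops of VR should in theory let you drop to roughly 1/15 sec (1/125 → 1/60 → 1/30 → 1/15) and still get acceptably sharp shots of static subjects.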
To top it all off, this lens doubles up as a great portrait lens as well. It is a great solution if you are out in the field shooting macro and all of a sudden a portrait-making opportunity comes along. Don’t fret if you left your portrait lens back home – the AF-S Micro-Nikkor 105 mm 1:2.8G VR will work just fine, especially if you are on a full-frame camera.
- Designed for close-up and macro photography; versatile enough for virtually any photographic situation
- Maximum Angle of View (FX-format): 23°20'. Features new VR II vibration reduction technology, Focal Length: 105 mm, Minimum...
- Nano-Crystal coat and ED glass elements that enhance overall image quality by further reducing flare and chromatic...
- Includes an internal focus, which provides fast and quiet auto-focusing without changing the length of the lens. Maximum...
- Weighs 27.9 ounces, and measures 3.3 x 4.5 inches; Made In China; 5-Year Warranty (1-Year International + 4-Year USA Extension)
Related Post: Best Portrait Lenses for Full-Frame Cameras (Top 13 in 2018)
2. Nikon AF-S DX Micro-Nikkor 40mm 1:2.8G
The AF-S DX Micro-Nikkor 40mm 1:2.8G is a very inexpensive micro lens from Nikon, designed for the smaller image circle of Nikon’s APS-C sensor cameras. On a DX (Nikon’s APS-C system) camera, the effective focal length is 60mm, just a shade longer than what you would get with a 35mm prime on the same body. Thus, this lens doubles up as a fixed prime that you can use for everyday photos as well as lots of macro photography, and you will never regret the quality of your photos.
Don’t bother mounting this on a full-frame camera, as the lens will not utilize the whole sensor real estate and the resulting loss of resolution will not be worth it. If you don’t turn on the DX crop, vignetting will destroy your compositions on full-frame.
This is a very lightweight lens, weighing just 235 grams. Build quality is decent, nothing out of the ordinary – lots of plastic, which is expected. But it is good quality plastic and the lens feels quite solid and well built in the hands.
Related Post: Best Nikon Lenses (for Video Shooting)
The lens opens up to a maximum aperture of f/2.8 and stops down to a minimum of f/22. You would be shooting stopped down anyway to get that large depth of field. The lens is very sharp in the middle and will allow you to capture those breathtaking flower shots and small creepy crawlies in life-size proportions, as well as anything else that you may fancy shooting.
The lens consists of a total of 9 elements arranged in 7 groups. Super Integrated Coating (SIC) has been applied to the lens; it cuts down on flare and ghosting when shooting wide open, especially in backlit situations. The lens diaphragm is made up of 7 blades. You get decent enough bokeh, but not as good as some of the other lenses we have listed here.
The lens has some amount of weather sealing. A rubber gasket on the lens mount helps keep the elements out. That said, the lens is not 100% weather sealed.
Auto-focusing on the lens is powered by Nikon’s Silent Wave Motor (SWM) technology. Also integrated into the lens is the Close-Range Correction (CRC) system. This system allows the lens’ focusing elements to move independently of each other. The result of this is that the lens is able to achieve a much better auto-focusing performance, especially when working at close distances.
This is a true macro lens, as it gives a 1:1 perspective from a close focusing distance. The minimum focusing distance of the lens is 6.4″. The lens comes with manual focusing override, which means focus can be adjusted and fine-tuned by hand even in auto-focusing mode.
- Compact and lightweight DX-format close-up lens. Lens Construction (Elements/Groups) - 9 elements in 7 groups
- Maximum reproduction ratio is 1.0x. Focal length is 40 mm
- Sharp images from infinity to life-size (1X), Autofocus to 6.4 inches
- Close-Range Correction System (CRC). Silent Wave Motor (SWM)
- Angle of view is 38°50'. Features focus distance indicator 0.53 feet to infinity having minimum focus distance as...
3. Nikon AF-S Micro-Nikkor 60mm 1:2.8G ED
The AF-S Micro-Nikkor 60mm is another popular micro lens offered by Nikon. Its slightly longer-than-standard focal length gives you a bit more reach and lets you focus close from a comfortable working distance.
The lens is designed for Nikon’s full-frame sensor but, that said, it is compatible with all of Nikon’s DX-format cameras as well. Being an AF-S lens, it has a built-in focusing motor, so it will autofocus even on cheaper DX bodies such as the D5100 and D3100, which do not have a built-in auto-focusing motor.
The construction of the lens includes a total of 12 elements arranged in 9 groups. These include two aspherical elements as well as one extra-low dispersion element, which suppress aberrations and distortion. The result is richer color and better contrast.
Still on the subject of construction and the internal elements: this lens comes with 9 rounded diaphragm blades, which ensure that the quality of the bokeh is very nice. If you want, you can produce a nice background (and foreground) blur that will obliterate anything in front of and behind the focusing plane.
The lens also includes Nikon’s Nano crystal coating as well as Super Integrated Coating. These coatings ensure that the lens can handle flares and ghosting much better than other lenses.
Related Post: Best Nikon Tele Macro Lenses
The lens is capable of producing 1:1 or life-size reproduction of any subject to the sensor from a close focusing distance of 7.3″.
Like many of the other modern Nikkor lenses, auto-focusing on this one is handled by Nikon’s Silent Wave Motor technology, which ensures quieter auto-focusing and better handling compared to older AF technologies. The lens also includes full-time manual focusing override. This comes in handy when you are focusing on a very small subject at close distance and the camera cannot lock focus precisely where you want it.
Another feature on the lens that is handy is the internal focusing mechanism. Internal focusing ensures that the lens’ barrel length does not change when the lens focuses. This has some interesting applications as the lens is able to focus close without becoming something of a scary proposition for small living subjects.
The one thing missing from the lens is image stabilization, so you will want to shoot at a shutter speed of around 1/60 sec or faster to ensure that your hand-held shots are free of blur.
- The item is made in Thailand
4. Nikon AF Micro-NIKKOR 200mm F/4D IF-ED Lens
The 200mm micro Nikkor f/4 IF-ED is a telephoto lens that is capable of focusing from a close distance of 48cm and producing 1:1 perspective of small subjects. The lens is optimized for the larger sensor size of full-frame cameras but that said it would also mount on smaller DX format DSLRs made by Nikon. Plus, the lens is also compatible with Nikon’s film cameras.
A word of caution though. This is a ‘D’ lens. Nikon’s D lenses, unlike its AF-S lenses, rely on the camera body’s focusing motor, which means that if you plan on using this lens on one of Nikon’s cheaper DX-format cameras, which don’t have a built-in AF motor, it will not auto-focus.
On a DX camera, the focal length becomes the equivalent of a 300mm lens mounted on a full-frame body. The incredible effective focal length means you could also use it as a tele-lens for birding and wildlife photography.
The aperture opens up to f/4 which means it is not as fast as some of the other lenses that we have discussed here. But that said, this is no pushover either. It is probably one of the sharpest macro lenses that you can buy for your Nikon and as such would also work as a general purpose telelens.
That said, this would not work as your typical portrait lens, even if you stretch the definition of a standard portrait lens to include something like the 200mm. Why? Because at f/4 the bokeh isn’t as exciting as what you could get with some of the other dedicated portrait lenses.
Related Post: Macro Photography Ideas
The lens’ construction includes 13 elements arranged in 8 groups. These include two extra-low dispersion elements, which help the lens suppress chromatic aberrations. Plus, Nikon has also provided a Close-Range Correction (CRC) system, which should allow the lens to perform better than non-CRC lenses when focusing at close distances.
Additionally, the lens features an internal focusing mechanism. This ensures a constant barrel length when the lens focuses. There is no full-time manual focusing override on this lens but there is a manual focusing ring. The MF ring is large and sits comfortably at the front of the barrel. It is easier to control focusing manually when needed.
The lens aperture diaphragm is made up of 9 diaphragm blades and that is what makes it possible to capture nice bokeh when shooting small subjects. That said you don’t have to always go for bokeh. You can stop the lens down and produce a large depth of field just as well.
This is a bulky, well-made lens, make no mistake about it. It weighs 1.18 kilos, which makes a tripod all but mandatory when shooting, especially because the lens does not feature an image stabilization system.
- 200mm; F/4.0; Micro lens
- D-Series; Uses 62mm filter
- Lens not zoomable
- An optical glass developed by Nikon that is used with normal optical glass in telephoto lenses to obtain optimum correction...
Related Post: Best Macro Photography Cameras (9 Great Cams in 2018)
Third Party Choices
5. Tamron 90mm f/2.8 Di Macro VC USD
The Tamron SP 90mm f/2.8 Di Macro VC USD is a macro lens designed for the Nikon f-mount. It happens to be one of the best macro lenses for Nikon f-mount systems you can get from third party manufacturers.
It is optimized for Nikon’s large 35mm (full-frame) sensor cameras (note the Di designation). The lens also works with 35mm film cameras, and it will work on DX-format cameras as well, where you get the advantage of a longer effective focal length (because of the 1.5x crop factor).
DxOMark rates this lens with a score of 35 when tested on a Nikon D800E. It is a fairly high score, coming in just behind the Nikon AF-S VR Micro-Nikkor 105mm f/2.8G IF-ED that we discussed above.
The internal construction of the lens consists of 14 elements arranged in 11 groups. These include one low dispersion element and two extra-low dispersion elements. These elements will take care of aberrations and distortions that plague fast aperture telelenses.
Auto-focusing on the lens is powered by a ring-type ultrasonic motor (Ultrasonic Silent Drive). It is quiet when focusing and fairly accurate too. Tamron’s USD actuator technology provides fast, precise focus lock and plays well with features like full-time manual focusing override.
Additionally, to suppress flares and ghosting, especially when working in backlit situations, BBAR and eBAND coatings have also been implemented. The lens diaphragm is composed of 9 rounded blades which ensure that the lens is able to produce beautiful creamy bokeh.
But what makes this lens a fantastic optical tool is its ability to produce a 1:1 perspective, or 1x magnification, when working from its closest focusing distance of 11.81″. Being a reasonably long lens, the working distance is long enough not to scare off small living subjects like frogs, butterflies, and bees.
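To visualise what 1:1 actually means (an illustrative calculation, not a manufacturer specification): at life-size magnification a subject measuring 36 x 24 mm – roughly the size of a postage stamp – is projected at exactly 36 x 24 mm onto a full-frame sensor, so it fills the entire frame, and anything smaller fills proportionally less of it.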
Additionally, this lens features Tamron’s VC (Vibration Compensation), rated at 3.5 stops. That lets you shoot at shutter speeds up to 3.5 stops slower than you could hand-hold without stabilization.
Good thing too that the lens has VC because at 90mm the lens is almost perfect as a portrait lens when mounted on a full-frame camera. You would need VC to shoot stable shots. The fact that the lens has a reasonably fast aperture and a 9 blade diaphragm means you will also get a comparable bokeh to what you would get with an 85mm f/1.8 lens.
For any lens to be called a true macro lens, just being able to capture a 1:1 perspective isn’t enough – you have to be able to work in any environment. This Tamron lens comes with a series of rubber seals around the switches and rings, allowing it to keep working in inclement weather, dust, and dirty environments.
- Moisture-Proof and Dust-Resistant Construction
- Durable Fluorine Coating on the front element repels water and fingerprints
- Advanced coating technology reduces flare and ghosting
- Circular aperture to achieve beautiful, rounded blur effects (bokeh)
- VC enhanced with shift compensation
6. Sigma 105mm f/2.8 EX DG OS HSM Macro Lens
DxOMark rates the Sigma slightly lower than the Tamron on its scoring system, but the Sigma is still a formidable lens to shoot macro photos with. The Sigma 105mm f/2.8 EX DG OS HSM Macro Lens is also capable of 1:1 or life-size reproduction of any small subject from its closest focusing distance, which happens to be 31 cm.
The internal construction of the lens includes a total of 16 elements arranged in 9 groups. This includes a Special Low Dispersion (SLD) element as well as a high-refractive-index SLD element. These two elements take care of a lot of aberrations and distortion.
The focusing mechanism is designed so that the front element of the lens does not rotate. This is useful when using variable ND filters as well as circular polarizers.
The f/2.8 aperture is not the quickest but is bright enough for some creamy background blur. The fact that there are 9 aperture blades means you will be able to get good results.
The lens comes with image stabilization – Sigma calls it Optical Stabilizer (OS). Auto-focusing is powered by Sigma’s HSM (Hyper Sonic Motor) technology, and the lens will auto-focus without the manual focusing ring moving, which makes handling easier. It offers full-time manual focusing capabilities as well.
Additionally, the lens features Sigma’s proprietary OS (Optical Stabilizer) mechanism. This enables the lens to handle low light and or hand-held shooting much easily when longer shutter speeds become necessary.
There are two OS modes on the lens. The first is designed to stabilize movement regardless of the direction in which the camera is moving. The second assists panning movements and is intended for sports and other action scenes where you are tracking a moving subject; it is rarely, if ever, used in macro photography.
This is a well-built lens with lots of metal elements. Sigma’s lenses tend to be heavier than their counterparts and this particular lens lives up to that billing. It weighs 726 grams.
7. Sigma 180mm f/2.8 APO Macro EX DG OS HSM Lens
The Sigma 180mm is a medium tele-lens and a macro lens all built into one large frame. This is a fantastic piece of equipment, one that is guaranteed to produce great images for years to come. This is a well-built lens, no doubt about it. At 1.63 kilos it is a pretty heavy lens too.
The lens is capable of producing true macro perspective images or life-size reproduction of small subjects when working at its smallest working distance.
The lens comes with a series of features. The maximum aperture of this lens is f/2.8. Fast enough to handle low light situations and for capturing nice background blur. But not quick enough for the purpose of stopping action in low light. You can, however, use this lens as a general purpose tele-lens as it has an optical image stabilizer feature.
The internal construction of the lens includes 19 elements arranged in 14 groups. These include 3 FLD glass elements which ensure that color fringing is suppressed. The lens aperture diaphragm consists of a total of 9 blades which makes sure that the lens is able to produce nice creamy background (and foreground) blur when you shoot something wide open.
On the outside, the lens has several toggle switches. One works as the focus limiter switch, the second is the AF/MF switch, and the third controls the optical stabilizer. Speaking of the optical stabilizer, it is rated at 4 stops, which allows you to use a shutter speed up to 4 stops slower than you could hand-hold otherwise. There are three OS positions: one turns OS off completely, while the other two are the standard mode and the panning-assist mode.
The auto-focusing mechanism on the lens is powered by a Hyper Sonic Motor (HSM). The focusing mechanism has a floating design, which ensures field curvature and other forms of aberration are also suppressed. In real-life situations, even when the lighting is poor, the lens does manage to find focus. It only tends to stutter and then go into a focus-hunting frenzy when the light gets really poor and/or when there is very little contrast to lock on to.
In good light, and when there is a reasonable amount of contrast to lock on to, the lens never disappoints. The best thing about it is that focusing (in good light) never runs all the way to either extreme; it tends to find the point of highest contrast on the first attempt.
Additionally, there is a super multi-layer coating as well. This coating ensures that the lens is able to suppress ghosting and flares, especially when working in backlit situations. Evidently, this produces better contrast and more saturated colors.
Barring the weight and the price (which is slightly on the higher side), there is little to complain about with this lens. It is one of the best macro lenses for Nikon f-mount systems.
8. Tamron 90mm f/2.8 SP AF Di Macro Lens
Both of these 90mm Tamron lenses (the VC and the non-VC) are excellent for macro photography. The Canon versions of both were included in our previous discussion of the best macro lenses for Canon, so it is no wonder that the Nikon versions are included on this list as well.
There are a few acronyms/abbreviations mentioned on the list. SP stands for Superior Performance, a mark that says this is one of Tamron’s better quality lenses. Di is an acronym that’s used on Tamron’s special lenses which have been engineered to work not only on traditional film SLR systems but also on digital SLR systems.
The lens is capable of 1:1 reproduction at a minimum focusing distance of 11.4″, which allows you to shoot life-size images of small subjects.
That said, the lens has been remodeled to incorporate an AF motor inside to ensure that it works with all cheaper / entry level Nikon DSLRs which don’t have a built-in AF motor on them.
This particular lens, however, is the non-stabilized version. If you can’t work without some form of image stabilization then this lens is not for you. Make sure you get the other (VC) lens and not this one.
f/2.8 is a fast aperture – maybe not so much when shooting in very dark conditions, but in reasonable lighting you will be able to shoot sharp photos with good color and contrast.
The construction of the lens includes a total of 10 elements arranged in 9 groups. The lens diaphragm consists of a total of 9 blades, which should give you excellent background and foreground blur – suitable for completely melting away anything that distracts from the composition, or at least adds nothing to it.
Finally, a word on the build quality and weight of the lens. The overall weight of the lens is 405 grams. That makes it very lightweight and easy to use for considerable periods of time, especially when shooting handheld.
9. Sigma 150mm f/2.8 EX DG OS HSM APO Macro Lens (Nikon)
The Sigma 150mm f/2.8 EX DG OS HSM APO Macro Lens was also included on the list for the best macro lens for Canon. This is a really good lens and thus, also deserves a place on this list. This is a true macro lens capable of producing 1:1 perspective.
The lens consists of 19 elements arranged in 13 groups. It consists of 3 Special low dispersion glass elements. Additionally, the lens comes with a special coating that takes care of ghosting and flares.
Auto-focusing on the lens is powered by Sigma’s Hypersonic Motor technology. In real-life use, especially in bright outdoor conditions, the lens rarely misses focus, and auto-focusing speed is quite quick. The only time it tends to miss or hunt is in low light. The manual focusing ring is well dampened too and very reassuring when turned.
A floating internal focusing mechanism ensures that the lens is able to focus quite accurately at all distances. Additionally, the internal focusing mechanism ensures that the lens’ barrel length does not increase during focusing. Plus, the front element of the lens does not rotate when focusing. This is useful for photographers when using variable neutral density filters as well as circular polarizers.
The lens’ optical image stabilization is rated at up to four stops. That means you can shoot hand-held at shutter speeds up to four stops slower than you otherwise could.
Shooting Portraits
Ideally, a 150mm lens is a bit too long for portraits. Normally, photographers would prefer the range between 85mm to 135mm (for full-frame cameras) to shoot portraits with. In that sense, this is a bit too long. Having said that, if you don’t have any other lens on you, the Sigma 150mm f/2.8 macro isn’t a bad choice as a portrait lens. You have to shoot from a slightly longer distance.
Still on the subject of using the Sigma as a portrait lens: the quality of the bokeh, something you would no doubt consider in a portrait lens, isn’t that good. Out-of-focus highlights in the background are not rendered as rounded discs; instead they sort of blend together.
10. Irix 150mm f/2.8 Macro 1:1 (F mount)
Last year Irix launched the 150mm f/2.8 Macro lens for the popular full-frame mounts. The lens is capable of producing 1:1 magnification and has a minimum focusing distance of 1.1”. The lens is a manual focusing design and comes with a focus locking mechanism.
One of the salient features of the lens, according to the company, is zero distortion. In actuality there is a minuscule amount, but that is not going to be a deal breaker and will not matter unless you are a nitpicker. The second major selling point is the weather-sealed construction. Other features include an 11-blade rounded aperture diaphragm.
11. Zeiss Milvus 100mm f/2M ZF.2 Macro (F mount)
This is a brilliant piece of optical technology by one of the finest lens manufacturers in the business. No wonder it finds a place on this list of the best Nikon macro lenses. That said the Zeiss Milvus has one important drawback and that is the lens is unable to reproduce 1:1 magnification.
That is something a majority of the other lenses on this list can do. Another disadvantage is that it has no auto-focusing motor. If autofocusing is a big thing for you, then you will struggle with this lens and it might not be a good buy. But on the other hand, if build quality, superb optical quality and negligible distortion are what you need and you are not too finicky about auto-focusing, the Zeiss Milvus 100mm will give you a lot of joy.
Note: As an Amazon Associate we earn from qualifying purchases. Certain content that appears on PhotoWorkout.com comes from Amazon. This content is provided ‘as is’ and is subject to change or removal at any time. |
Q:
Why does calling a member function through a null pointer not crash my application?
I don't understand something in C++ - I create a pointer to a class and set it to null.
Now I call a member function through this null pointer and the call succeeds. Why doesn't it crash?
#include <iostream>

class Entity
{
public:
    void Print() const
    {
        std::cout << "Print" << std::endl;
    }
};

int main()
{
    Entity* ptr = nullptr;
    Entity& _ref = *ptr; // No crash here - I expected a null pointer exception
    _ref.Print();        // prints "Print" even though ptr is null
}
A:
This is an example of UB (undefined behaviour). It may or may not crash, but it is wrong code - UB means anything is possible. As other posts suggest, though, this simple snippet does not crash on many platforms.
A:
This is a common thing in C++: the function is not part of the instance, but part of the class definition, so calling it does not need to read anything through the (null) pointer.
If the function tried to access this - for example by reading a member variable - then you would get a crash.
As @YSC mentioned, this is still considered undefined behavior, and you should not assume it will work. In practice it mostly does, though, and I have heard this even comes up in C++ interview questions.
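To make the distinction concrete, here is a small sketch (not from the original question - the member names are made up for illustration) contrasting a member function that never touches the object with one that reads a data member. The second call has to dereference the null pointer, so it will typically crash, although, being undefined behaviour, nothing is guaranteed:
#include <iostream>

class Entity
{
public:
    int x = 42;

    // Never touches *this, so calling it through a null pointer
    // often "works" in practice (it is still undefined behaviour).
    void SayHello() const { std::cout << "Hello" << std::endl; }

    // Reads a member, i.e. dereferences this; calling it through a
    // null pointer will typically crash with an access violation.
    void PrintX() const { std::cout << x << std::endl; }
};

int main()
{
    Entity* ptr = nullptr;
    ptr->SayHello(); // usually prints "Hello" despite ptr being null (UB)
    ptr->PrintX();   // usually crashes here (also UB)
}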
|
I am pleased to open this debate under your chairmanship, Mr Sanders. I welcome the Minister to her place—I will be posing a number of questions for her at the end of my remarks.
I am able to bring this subject here for debate because of a remarkable woman, Claire Eades, and two others, Pauline Saunders and Dianna Goodwin. That trio of schoolteacher, artist and retired magistrate have shown that those whose cavity wall insulation goes wrong can find it near impossible to obtain swift and effective redress. Quite recently, they set up the Cavity Wall Insulation Victims Alliance, and I have drawn on far more cases from the association than I can report today. Other reported cases are included in the briefing pack for the debate compiled by the Library. I know that hon. Members of all parties will contribute their own constituency cases.
Claire Eades’s parents are constituents of mine. Their home in Southampton suffered badly from wrongly installed cavity wall insulation. Claire ultimately achieved a reasonable settlement after a determined campaign, but her parents are not the only ones affected. Their case exposes the problems with the supposedly independent insurance body, the Cavity Insulation Guarantee Agency, as well as bad industry practice and total inadequacies in the regulation provided by Government.
The market in cavity wall insulation is worth £700 million to £800 million a year. It has been boosted by Government policy, with some direct Government funding; however, most CWI has been funded by energy companies, which have been required to invest in energy conservation measures through a range of schemes, such as the carbon emissions reduction target, the community energy saving programme and the energy companies obligation. All the schemes differ in some respects but include insulation paid for by energy companies meeting Government obligations. Energy companies that fail to do so can face fines.
Many householders who responded to cold calls, e-mails and adverts and had cavity wall insulation installed had no idea that an energy company was funding that installation. Advertising typically refers to a 25-year guarantee and names CIGA. Doorstep visits and telephone calls typically describe the schemes, wrongly, as Government-backed or Government-funded. I have transcripts of a couple of phone conversations with such salesmen, one of whom, when asked who funded the cavity wall insulation, said:
“Erm, I think it’s the government, and also your British Gas, your Southern Electric, and the other companies. Sorry, it’s only my first week but obviously that’s why they’ve already paid for it and…it’s free on behalf of the government.”
In another case, this time when discussing guarantees, the salesman said:
“Yeah, there’s only a couple of companies which government approves, they’ll give you a 25 year guarantee with the government.”
The transcript goes on:
“‘Sorry, if I had it done would I get a government guarantee?’
‘For 25 years.’
‘What, the government guarantees it?’
‘Yeah, because the government fund it. They don’t fund it, it’s from the tax they’ve taken from you, so they fund it in that way.’”
“The Cavity Insulation Guarantee Agency…was set up by the government to provide householders with an independent, uniform and dependable guarantee. This is a 25-year guarantee that is independent of the installer who insulates your property.”
But CIGA was not set up by the Government, nor is it independent of the installers. When things go wrong, the Government are the first to deny any responsibility or involvement. Government policy is driving much of the market, but the Government are not taking the measures needed to ensure high standards of installation or redress.
The Government have had plenty of warnings. The Office of Fair Trading reported in 2012 that failure to install properly would undermine Department of Energy and Climate Change targets for energy reduction. It recommended that DECC should ensure that there was a single body ensuring effective independent monitoring of installers and installation quality. All there seems to be is a licence; Dianna Goodwin of the CWIVA bought one online for £75.
The Minister is advised by the Green Deal Consumer Protection Forum. At its meeting on 26 June 2014, Ofgem reported:
“Ofgem has noted that there are suspected cases of fraud within the ECO scheme, for example around the installations for hard-to-treat cavity walls. Ofgem was informed of anecdotal evidence of systematic abuse of the technical rules, and investigated. It found that a number of installations were done improperly…Ofgem reported that one of the main difficulties it has is that it cannot engage with the supply chain”— that is, the installers—
“as its agreement is with energy suppliers.”
At the same meeting the Energy Saving Advice Service reported that it receives about 30 complaints about ECO per week. There have been different schemes, but those elements—poor installation, abuse of the rules, and the inability of Ofgem to act—appear to run through all of them.
Following a cold call, a Mark Group survey of the Eades property took place on 10 June 2010. CWI can go wrong—badly installed or installed in an inappropriate property, it can cause damp penetration and condensation. In 2011 Which? asked eight companies to assess a clearly unsuitable house for CWI. All eight surveys recommended installing CWI. Funnily enough, four were carried out by that same company, the Mark Group, and three by the same person. All four surveys provided different prices even though they recommended the same work and materials.
The Eades’ property is less than 1 mile from the sea and according to an independent survey conducted last year is
“exposed to severe wind driven rain”.
Cavity wall insulation was installed in the property on 10 November 2010. By 4 February 2011, the house had a strong musty smell and significant condensation, and black mould was beginning to form. It is common for problems with CWI to appear more than a year after installation, yet the only routine independent inspection of properties takes place a few weeks after installation. That inspection is required by Ofgem, but it is not there to check whether there are damp problems; it looks only at whether energy reduction targets are being met. I cannot know—nor can the Minister—whether the problems I am raising are isolated or the tip of an iceberg.
On 3 February 2012, the Eades sent a letter by recorded delivery to the Mark Group reporting severe condensation. No response was received to the complaint. That lack of response appears to be standard across the industry. In December 2013, two years after the original installation, there was significant water ingress and damp patches were appearing along the length of the south-west-facing wall upstairs and downstairs and from top to bottom of the wall.
A quick look at the part of the Review Centre website relating to the Mark Group shows pages of complaints. For example:
“The installation team from Mark group in their wisdom filled all the air bricks in the property with silicone sealant…causing huge problems with damp. I have complained but had no response.”
Another reads,
“from my experience either I am the most unlucky house-holder in the country, if this is a one-off event, or the whole cavity idea is a big sham which should be investigated by someone other than the industry itself.”
A further complaint reads:
“Do not use Mark group. They filled our walls with non-compliant CWI, as open to the wind driven rain, our properties have been ruined...CIGA’s guarantee up to now is worthless”.
A further complaint says:
“It is now nearly 6 months since they damaged my home and I am no nearer to resolving the issue. They seem to be using delaying tactics in the hope that I will give up.”
And so it goes on, for page after page.
I do not know, however, whether the Mark Group is worse than any other company. It is still approved to do this work. My own encounter with the company was not good, as it made a fatuous attempt to threaten legal action after I retweeted a customer complaint—something I have never come across in 22 years as a Member of Parliament. [Interruption.]
I have some satisfaction in saying that it was at that moment that I decided that I should try to secure a parliamentary debate on the issue.
The Eades eventually contacted the Mark Group via its website. On 9 January 2014, Mr Lillywhite of the Mark Group inspected the property and offered to extract the cavity wall insulation for £2,000. Such double paydays for companies that install insulation wrongly and then charge to take it out again seem to be endemic in the industry. On 21 January, Claire Eades put a review on the Review Centre website, as the report of the inspection in January had still not been received. After further chasing, the Mark Group report was sent by Nathan Dunham. The report stated that the CWI was correctly installed and that the property was at fault. The response that nothing is wrong with the cavity wall insulation and that the damp is caused by something else is standard across the industry. Sometimes the property is blamed, and sometimes the occupants’ lifestyle is blamed, even though the same people have been living in these properties for years without suffering damp problems. It is simply a disreputable tactic. Many of those who take up CWI are older. They live in properties they own, and fuel bills are a major part of their expenditure. Perhaps the industry thinks they are less likely to complain.
On 31 January 2014, Claire Eades gave Mr Dunham of Mark Group a week to supply a date for a CIGA visit. CIGA was established in 1995. Ministers refer to it as independent; indeed, a letter from the Minister in December said:
“CIGA is an independent body” and
“an organisation that will clearly be up to resolving issues relating to cavity wall insulation”.
I must ask the Minister what advice she was acting on when she signed that letter.
How independent is CIGA? On 3 November 2014, the directors were: Jeremy Robson, a director of the British Board of Agrément, the National Insulation Association and InstaGroup; John Sinfield, managing director of Knauf, which makes insulation materials; John Card, a director of Domestic and General Insulation Ltd; Brendan McCrea, a director of Abbey Insulation and Warmfill; Walter French, a director of the National Insulation Association; and Ian Tebb, a director of Polypearl Ltd and Tebway Ltd. Michael Cottingham was a director of CIGA between 2008 and 2009 and the managing director of Mark Group from 1991 to 2009. How can an organisation led almost entirely by directors of insulation companies be called independent?
I will return to the case of Pauline Saunders, but I want to read an e-mail she sent about a south Wales neighbour:
“I have just called on a very vulnerable 82 year old widow who unfortunately is in the position that her cavity wall installer has gone out of business and CIGA are not responding to any correspondence regarding this lady’s situation. I have just visited this lady and found her up to her knees in shredded wall paper”— it was peeling off because of the damp—
“that she is scraping off the wall herself in an effort to save money”.
Even if vulnerable people complain, therefore, they are not guaranteed a reply.
After chasing, a CIGA inspection was carried out on the Eades’ property by a Chris Cuss on 13 February. On 20 February, the Eades wrote to CIGA to complain of a lack of interest on the part of the inspector. On 25 February, a short summary was sent to the Eades stating that the property was at fault, with no mention of faulty installation. The family therefore asked Mark Group for a copy of the original inspection report, which, had it been done properly, would have shown any failures in the property.
On 26 February, Mr and Mrs Eades wrote to John Campbell at CIGA. In a separate case, Dianna Goodwin of Milford on Sea had been copied into an internal e-mail from Mr Campbell, which referred to her, saying:
“She has far too much time on her hands and nothing better to do.”
In that case too, CIGA claimed there was no evidence that the CWI had
“caused or contributed to any issues with water penetration”.
That e-mail was sent from an organisation that, in its briefing to Members of Parliament for today’s debate, says:
“If something does go wrong, CIGA is at hand to put things right for consumers. It exists to protect consumers; they are our number one priority”.
The full CIGA report was never sent to Mr and Mrs Eades, but Claire Eades asked Mark Group for its report and the full CIGA report. The full CIGA report was then sent, and it said:
“the installation of CWI has NOT been completed in compliance with system designer and BBA specifications, the drilling pattern is non-compliant omitting an area of the original external wall within the rear extension”.
The full report would never have been made available to Mr and Mrs Eades had it not been for their daughter’s persistence.
CIGA colludes with installers to suppress evidence of failure and mis-installation. In the Eades case, it concluded, on the basis of no evidence, that CWI had exacerbated a concern regarding damp. It failed to acknowledge that the original Mark Group survey did not identify any pre-existing dampness.
CIGA claims there are historical problems in homes that have always been dry. Mrs Goodwin of Portsmouth was told her damp was caused by property defects and “lifestyle condensation”, even though her home had never previously suffered from damp. Chris Stillwell of Weymouth says:
“I have been left with damp and damaged walls....my flat is uninhabitable and has been ruined…CIGA who guarantee CWI keep trying to fob us off, even though their report states that the insulation used is now non-compliant”.
On 17 March last year, Lloyds, the household insurer for the Eades property, said the damp and water ingress were due to faulty CWI. However, the Eades still faced the challenge of getting work done, because having the CWI installed had invalidated their household insurance policy. They raised their plight with DECC, which said, “Go to Ofgem.” It also said:
“under the 25 year guarantee there should be no cost to the householder”.
DECC must be aware that CIGA conspires to keep details of inspection reports from householders and produces reports that are totally inadequate.
The Eades took their plight to Ofgem. Ofgem took a month to reply and referred them to Citizens Advice. They raised their plight with trading standards, which said, “Go to the citizens advice bureau.” They went to the CAB, which said it could not help and suggested the couple go to trading standards. They finally went back to DECC, asking who was responsible. The DECC reply was very clear: whoever it was, it certainly was not going to be DECC. DECC said:
“The contractual arrangements between energy supplier and third parties are not within our remit”.
For the Eades, this was the first time the involvement of an energy supplier had been mentioned.
A couple of weeks later, DECC offered further advice: Mr and Mrs Eades—an elderly couple—should get a solicitor. However, on 23 May, there was a breakthrough. Ofgem had managed to establish that E.ON had funded the installation. Claire Eades told me that Dani Hickman of E.ON corresponded directly and appropriately with Mr and Mrs Eades. The involvement of the energy supplier was critical.
The Cavity Wall Insulation Victims Alliance has been in contact with more than 40 victims, but the Eades case is the only one in which the link with the energy company has been established. Ofgem does not hold address-level information consistently and, under CERT and CESP, there was no obligation for suppliers and installers to submit it. Despite that, the Minister of State wrote to me on 9 July, saying:
“Should it be the case that this work was undertaken under CERT, then Ms Eades or her parents may wish to contact the relevant energy supplier if they are unable to resolve the matter with the installer”.
In this case, Ofgem did trace the energy supplier for the Eades, but whoever drafted that letter for the Minister of State must have been aware that it would have been quite impossible in most cases under CERT to trace the energy supplier.
E.ON’s involvement led to an inspection by Knauf. The inspection recommended that the insulation be taken out of the south-west-facing wall due to voids. The internal walls in the extension should also have been drilled out for installation, and there were other failings. The Knauf report was never sent to Mr and Mrs Eades, and it was not intended for them. It was passed to them only by E.ON, which, acting on their behalf, demanded it from the Mark Group.
On 20 June, E.ON commissioned Green Deal Resourcing to carry out thermal imaging, which showed voids. The property is exposed to severe wind-driven rain. The insulation is facilitating the transfer of moisture across the cavity.
Having got that further independent report, the Eades complained to the British Board of Agrément. The BBA is supposed to accredit installers and materials, but it shares directors with CIGA. This is a very cosy network. The United Kingdom Accreditation Service, which is responsible for accrediting the BBA, confirms that householders have no right to see BBA reports on their properties.
On 21 July, the BBA inspected the property. Its report was never sent to Mr and Mrs Eades; it was sent to the Mark Group. It started, “Hi Nathan.” It continued:
“The system hasn’t been installed in compliance with the BBA issued certificate and should be extracted”.
Again, the Eades had no right to see that report. They got it only because E.ON was involved and passed it on to them.
To return to the case of Pauline Saunders of Newport, she finally received £1,750, and the Mark Group removed the fill. The trigger was a BBA report on the property that was sent to her in error. As a result, she was able to establish that it said:
“the property was and is unsuitable for cavity wall insulation and should not have been insulated”.
Without that report, which was intended only for the eyes of the installers or CIGA, she would not have received a payout.
In the end, the Mark Group and its loss adjuster, while still denying responsibility, paid the Eades about £11,000. Let us remember that the Mark Group originally wanted to charge £2,000 to remove the insulation from the property, having already been paid by E.ON for putting it in. How many people will there be who have not managed to pursue things that far? One cause of offence is the fact that even when settlements are achieved, installers still routinely deny responsibility and describe any action as a good will payment. Mr and Mrs Eades had their work done by a company that only does removal. On the occasions when CIGA will pay for extraction, its chosen extraction companies are Dyson Energy Services and InstaGroup, both of which share directors with it, so even when CIGA is finally forced to act, it seems that companies owned by its directors are the ones paid to do the work.
In a note sent to Members of Parliament, CIGA says:
“If something does go wrong, CIGA is at hand to put things right for consumers”.
I can give no credence to that claim. It says:
“If there is a problem with the workmanship or materials of an installation, we will ensure the installer put things right.”
As I have shown, CIGA takes active steps to avoid installers having to put things right. It says that, with regard to the 11,675 concerns reported, it has worked with installers to resolve 80% of cases; in 20% of them it covered the cost of work to the value of more than £2 million. Well, 80% plus 20% is 100%: that is all the cases dealt with. So how come so many people say they cannot get their problems resolved? There is something dodgy.
The obvious question is whether all those householders would agree that the resolution has been satisfactory, or whether they have just given up, accepted whatever they can take, or paid to put things right themselves. Who knows? There is no independent oversight of CIGA. CIGA is judge and jury in its own case, and it is run by the people who cause the problem. It says in its briefing to MPs that it will appoint a consumer champion. It is a bit late in the day, and it is hard to give credence to that. I am pleased that since today’s debate was announced, some of those involved in cases taken up by the CWIVA have had better offers. However, we cannot allow that last-minute action to let CIGA off the hook.
I have several requests for the Minister. I would like a full review of how the industry and CIGA operate. I want her to make a commitment to establishing genuinely independent oversight of the compensation arrangements. I ask her to change the regulatory regime so that the link between each energy company and each property is transparent and registered. Also, crucially, I would like every effort to be made to find out what additional historical information can be established. We must not just rectify problems for the future; we must deal with historical cases. I want the Minister to establish an independent assessment of properties at least one or two years after installation. That is the only way we will be able to understand the true scale of dampness caused by CWI. I also want her to introduce effective regulation of initial sales.
I have no doubt that there is fear in DECC that acknowledging the problems would discredit a key energy conservation policy, but the real danger to the credibility of the energy conservation programme lies in hushing the matter up. Many victims now question the whole idea of cavity wall insulation. Jeff Howell, the respected building correspondent of The Daily Telegraph, believes that all retrofit CWI is likely to cause problems. Is that true? I certainly hope not, and many organisations take a different view, but unless the Minister acts now, those doubts can only grow. We should not allow that to happen. We need an honest appraisal of the technology—where it works and where it does not—and we need effective redress for the victims.
It is a pleasure to serve under your chairmanship this morning, Mr Sanders. I am grateful to Mr Denham for securing this important debate. The tale I have to tell today is not quite as dramatic as the one he told. Today’s debate of course follows on from a similar one obtained by Hywel Williams in October.
According to Department for Energy and Climate Change estimates, there are about 690,000 remaining “easy to treat” cavity walls in Britain, not including those in exposed locations, or with other issues such as narrow cavities and wall faults. If those were all insulated, the energy bill saving would be about £100 million a year, and the carbon dioxide saving would be about 450,000 tonnes a year. That would be the same CO2 saving as taking about 180,000 cars off the road. We should be clear, therefore, that cavity wall insulation is, on the whole, a good thing, when it is done at the right time, in the right place, in the right properties, by the right people.
Encouragingly, DECC statistics indicate that since 2009 the number of CWI installations has hugely increased across the UK. In 2012-13, in East Hampshire alone, there were 4,986 cavity wall installations, which is welcome—as long, of course, as they were done properly. However, as we have heard this morning and no doubt will again, there are times and places where the treatment is not suitable. For homes in an unsheltered position or exposed to severe wind-driven rain, or whose external walls are poorly built or maintained—with, for example, cracks in the brickwork or rendering—cavity wall insulation can clearly be a liability, as it may attract severe damp.
It is in just such a location that a constituent of mine in Meon Valley is currently experiencing terrible damp problems in his home, subsequent to the installation of cavity wall insulation. In the past several years dampness has begun to occur inside his south-facing walls, which, as he lives on top of a hill, are frequently exposed to driving rain. He tried several remedial measures without contacting the insulation company, none of which, unfortunately, solved the problem, and he was forced to conclude a year or so ago that his cavity wall insulation, installed in 2006, was the likely culprit. He brought the matter to the installer’s attention late last year. Since then he has, like, I suspect, many people attending the debate, been in dispute with the installer, and has requested the removal of the insulation.
This is not a fairy tale, but I am delighted to report that last week, after the debate and my intention to take part was announced, the Cavity Insulation Guarantee Agency, with which the installer has an agreement to provide a 25-year guarantee, has, after a further inspection of the property, recommended two options to resolve the problem. Hip, hip hooray! What a marvellous thing that is. One option is extraction in the affected area of wall, with removal of all insulation to minimise the risk of further problems. Alternatively, the work could be redone with a decent damp seal membrane and/or a waterproofer called Haloseal. That is a magical thing, and I am delighted. I have also been informed that CIGA will reimburse my constituent for considerable costs incurred in remedying damp damage and for machinery that he had to bring into the house to extract water from the atmosphere.
That is all welcome and I am grateful that CIGA has reacted so positively to the case, but, despite the assurances, my constituent has very little trust in the industry. The fact is that the insulation should never have been installed as it was in the first place. Obviously, despite fairly well defined circumstances in which cavity wall insulation is clearly not appropriate, it is nevertheless routinely still being installed. According to a DECC review in 2012, there are between 215,000 and 245,000 cavity-walled houses in the UK in an exposed location, which could make them inappropriate for cavity wall insulation. That represents 1% to 1.5% of cavity-walled houses in the UK, so it is by no means an insignificant problem. I understand that my constituency is described as a category 3 area, which means that it has “severe” exposure to driving rain, and that therefore cavity wall insulation may be unsuitable for some properties. That sounds like almost everywhere in the UK to me.
Surely it should not be too difficult for installers to get things right. The CIGA website says that registered installers are required to carry out a thorough pre-installation inspection of the property; so problems should really be ironed out at that stage. It seems that, as most eloquently described by the right hon. Member for Southampton, Itchen, there is a need for either much more policing of the scheme or much more rigorous training.
It is a pleasure to speak today under your chairmanship, Mr Sanders. I congratulate Mr Denham on securing the debate, which follows on from my previous half-hour debate. I am glad that other hon. Members are taking an interest in the issue. This has been a matter of concern to my constituents for quite some time. I have come across many cases that I will refer to, although I will not go into them in as much detail as the right hon. Gentleman did. In such cases, cavity wall insulation has been installed when it obviously should not have been owing to heavy rainfall and the prevailing wind in west Wales.
In fact, my constituency is a category 4 area. George Hollingbery referred to his area as being in category 3, but much of west Wales is category 4. The map is quite startling: west Wales is coloured deepest blue and that is not a reference to its political leanings. Obviously there are problems there.
As well as having heavy rainfall and being in a category 4 area, we also have many buildings with exterior walls in poor condition, including many older buildings and former council houses that have cracked rendering and, in some cases, rendering that has fallen off. In the case of one former council house—I think it is located at about 1,200 feet, facing the prevailing wind—the brickwork can be seen because large chunks of the rendering have fallen off, but cavity wall insulation was put in. Pebbledash is the common form of rendering in my area. It is effective owing to the level of rainfall, but, as we know, it does crack and I am concerned that, too often, that was not properly taken into account.
My concerns include the assessment of suitability for cavity wall insulation and whether it should be installed at all in wet and windy Wales. I have also looked at the CIGA paper provided for the debate, which has an interesting paragraph:
“As part of the suite of technical guidance published by CIGA, there are strict criteria for assessing the suitability of a particular home for cavity wall insulation. Each home must be fully assessed by a BBA registered assessor before any work takes place, and if cavity wall insulation isn’t the right way forward then the surveyor will tell you.”
That is for a house. I assume that the British Board of Agrément-registered assessors may also look at maps. Anyone looking at the map of my area and large parts of Wales will see, as I said, that it is coloured deepest blue, so they should ask whether cavity wall insulation should be installed at all in any house in the area.
I am also concerned about the standard of workmanship, which I will refer to later on. People have had problems with poor workmanship; I am sure that, if properly installed, cavity wall insulation is very effective indeed, but it must be properly installed. I am also concerned about quality assurance and the arrangements for remedial work and the industry guarantee scheme.
I am also concerned that, in particular, the people who had cavity wall insulation installed believed that that was a desirable, appropriate and trouble-free course of action. They were reassured because, so they thought, it was a Government-backed scheme. How could it be wrong? The right hon. Member for Southampton, Itchen referred to that earlier on. I know that the Government are not directly responsible, but that is the perception, so it is both the Government and the enterprise of installation that face damage to their reputation.
I referred to my debate in Westminster Hall on 29 October when I discussed these matters and I do not intend to rerun that speech, but some points bear restating. I talked about assessments and referred to the Office of Fair Trading’s report, which states:
“Consumer magazine Which?...invited eight companies to assess”— we know what the outcome was. I am glad that there has been other media interest from both broadcast and print journalists.
I am concerned about workmanship. Apart from cases where CWI has led directly to water penetration, I have also been told of those where it has been installed badly, with areas missing, which has led to cold spots, condensation and subsequent fungal growth. Even when it is proper to install it in a house, there can be problems.
On remedial work, some installers have accepted liability. I have had good relations with one energy company, British Gas, which has taken an interest and acted in certain cases. In some cases installers have accepted liability and returned to redo the work, but the householders are still not satisfied. There is a case that would be laughable if it were not so sad. An elderly lady called me to come to see her former council house. She had had remedial work done on her kitchen wall, but that had not been successful and the damp was back above the window. The case was straightforward, but what stood out for me was that, as I approached the house, I could see the remedial handiwork. The pebbledash rendering had been badly patched, so areas of about 1.5 square feet had no pebbles at all—that could be seen from across the road. The plasterer had achieved something like the appearance of pebbledash from afar by making indentations with his fingers, such was the quality of the remedial work.
To return to the OFT report, it also notes:
“if poor installation causes problems with damp, these may not become evident until a year or more after installation.”
That is pertinent, given what the right hon. Gentleman said. We need inspections much later on, when problems may have developed. The report continues:
“Monitoring, which is typically done in the weeks following installation, cannot identify these longer-term problems…In relation to regulatory monitoring, Ofgem requires the energy suppliers to inspect 5% of installations and provide a summary of these inspections”.
In the previous debate, I asked whether 5% was sufficient—that is only one in 20. Clearly the review system is not working.
I have come across so many cases in one small town, Caernarfon. I told a few people that I was holding a meeting about this matter in a week’s time and, essentially through word of mouth, about 30 people turned up. It strikes me that the problems are more widespread than CIGA concedes. I think it says such problems affect 0.2% of the 6 million installations, which must be about 12,000 cases. I am sure that there are more than that.
The industry guarantee scheme has worked in some cases, but other constituents think that it operates at such a high bar that proper redress is prevented in legitimate cases. Both the right hon. Member for Southampton, Itchen and the hon. Member for Meon Valley made the important point that some of the people who have been afflicted with these problems are elderly or infirm, so they will not be chasing after fancy lawyers because they cannot afford that. They are also not familiar with negotiating their way through officialese. They are fundamentally dissatisfied with the process, but they see no form of redress available to them.
I have a Welsh-national point. CIGA serves my intensely Welsh-speaking constituency and other such constituencies throughout Wales, but, disappointingly, there is not a word of Welsh on its website or in its literature. Other organisations, including commercial organisations, use Welsh as a matter of course and as good practice to reach out to customers, rather than shutting them out by not using it.
As I said in the previous debate, the name of the local campaign in Caernarfon is “Waliau Du”, which means “Black Walls”, because unfortunately that is what happens: people’s walls turn black. Constituents have complained that the growth of mould has led to breathing difficulties, illness and the worsening of children’s asthma. People also suffer long-term worry about what will happen to their homes and the possible costs of repair. They might not be able to afford such repairs or to clamber into attics to see what is happening and such long-term worry has an effect on physical health.
My constituents subscribed to what they thought—rightly or wrongly—was a straightforward Government scheme. As we have already heard, some were told it was that by installers while others assumed that, as the Government were funding the installation—or so they thought—the system was safe and effective and the installers were operating to an appropriate standard of practice.
The OFT’s 2012 report noted that some people assumed that the installers’ practice was properly regulated and inspected and that appropriate quality assurance measures were in place. Those people feel let down. I believe that somebody—albeit an ill-defined somebody—should take responsibility, and that is what they feel.
My final point supports the points that were made by the right hon. Member for Southampton, Itchen—I support the questions he asked and the points that he raised. I believe that the matter warrants not only short-term remedial action for the people who are suffering, but a further comprehensive review, focusing on the problems that have become apparent over the years and that have been addressed this morning. I think we need to look at this across the piece. It should not be up to individual householders, who would find it very difficult or impossible to take their cases forward. We need a comprehensive review, because at the very least, there is reputational damage as far as the whole idea of cavity wall insulation is concerned, and for energy conservation in general, which is something that we all support, there is also the danger of reputational damage—let alone the damage to the reputation of this and previous Governments. That review should be instituted as soon as possible.
It is a pleasure to serve under your chairmanship, Mr Sanders. I begin by thanking Mr Denham for raising the matter in the way in which he has, and I agree with his assessment of the situation and his request for action. I also thank my hon. Friend George Hollingbery and Hywel Williams for their contributions, which seem to chime with the experiences of my constituents. I will set those out for the House and the Minister, along with my own concerns that, as other colleagues have mentioned, there may be more to this than meets the eye. That is the most worrying thing about it. Here is something that is designed to assist people, keep them warm and protect their houses, but it is being handled in a manner that undermines all the principles behind it and is leaving people victimised and feeling that they have had no benefit whatever.
My constituents, Mr and Mrs Haley, brought their case to me. I will be as brief as I can, but it is important to put some of these matters on record, because they fill out what has been said. In my view, they also add significantly to the demand to look into the industry, because if so many cases are cropping up that have common elements, there is a problem.
My constituents had their cavity wall insulation, if it can be termed that, installed in October 2008. The property had been inspected by Eaga Home Services—now Carillion Energy Services—which unremarkably came to the conclusion that cavity wall insulation would suit and benefit the house, and the work was done. On 19 November 2008, a guarantee was issued to say that the work had been carried out satisfactorily. However, the workmen were only at the property for 50 minutes—they said that they could not get down the side of Mr and Mrs Haley’s house because of the dining room extension.
In January 2013, after problems with mould and everything else, my constituents contacted CIGA to say that they were concerned about the amount of mould growth in their house. There had been no problem for the 25 years in which my constituents had lived at their property, but since the cavity wall insulation had been carried out, mould had been growing on the walls and ceilings and there was condensation in the sealed unit double glazing.
A letter came from Carillion to say that it would investigate and resolve the matter. My constituents tell me that in March 2013, the service delivery manager attended
“our property and made a cursory inspection. It was obvious at the time that he was not listening to anything we said to him. He said he didn’t know what was causing the mould growth but it wasn’t due to the cavity wall insulation and there had actually been very little such insulation carried out in our house. This was surprising to us as we were not aware so little work had been done.”
On 19 April, there was a letter from Carillion denying any responsibility.
I have a very thick file of papers here, and the exchange that I have just detailed is the first six to eight pages of it. The rest of it—I am sorry that listeners on radio cannot benefit from seeing it—relates to the two years following in which the matter has not yet been resolved. It is a story of evasion and an inability to act, and of letters going unanswered and e-mails not being cared about. However, all in all, it is about what appears to be a relationship between those providing the service and those supposed to be providing the guarantee to ensure that actually, nothing gets done. All our experience as MPs tells us that people fight for so long, then it gets too much and they give up. We have all seen evidence of agencies supposedly acting for the public, and indeed providers themselves, simply making it impossible for people to go on. People reach a point where they have had enough, and if it were not for individuals such as my constituents and others who have been mentioned today, I suspect that the problem would remain buried. The concern that the Minister and the Department should have is: how many more? How many more people have not been able to go through and stick with their case in order to see it resolved?
Let me quote one or two important things. When CIGA first responded to the concerns in January 2013, straight up, it gave the assurance:
“As the holder of a CIGA Guarantee, you have the assurance that any defects relating to materials or workmanship will be resolved in accordance with the terms of the Guarantee”— not worth the photocopied paper it is written on. Carillion’s response, which I mentioned, read as follows:
“Following the issues which you have raised regarding the condensation at your home, we arranged for the service delivery manager…to attend and assess the concerns you have. The service delivery manager has confirmed that the issues which you are experiencing are not as a result of the cavity wall insulation work carried out at your home. In his opinion”— the opinion of those who put in the cavity wall insulation—“the cause of the condensation is due to the UPVC windows, as there is condensation in between the panes of glass, which is a sign that the seals have gone.”
Patronisingly, the letter went on to say:
“Condensation is caused when warm air meets cold surfaces; it is most likely to appear on surfaces such as windows, colder parts of walls, around door and window openings, at junctions of floors and ceilings with outside walls.”
Does it not add insult to injury for people, when they have installed cavity wall insulation and double glazing, and they are heating their homes expensively, to be told by installers and others that they should open windows to get rid of condensation? It is an appalling response.
I have been a Member of Parliament for some 28 years, and in a previous constituency, there was a lot of condensation in some difficult parts of the town. It can be a difficult issue, but it is the easiest thing in the world to avoid responsibility for. Whatever is going on in the house is said to be the fault of the householder, and it is difficult to prove otherwise.
If I may, I will finish quoting the letter from Carillion:
“I understand that this may not be the outcome that you would have hoped for. I would like to thank you for giving us the opportunity to investigate the issues you have raised.”
I wonder how many people have received a similar letter and thought, “Well, there we are. They know what they are talking about. It must be us; it must be something else.”
However, with the not unreasonable experience of over 25 years living in their house, my constituents were not prepared to accept that, and they responded as follows:
“We do not accept this decision. We have lived in this house for 28 years and have had the windows replaced. There were no problems with mould at any time. Then we had the cavity wall insulation done. The bedrooms, kitchen and living room then started to have mould growth around the windows and on the ceilings. Condensation on the windows became a real problem. When we first contacted the company about the cavity wall they sent out an inspector and he confirmed that there would be no problem to have the insulation carried out. However, when the workmen came to do the job they started muttering about being unable to do part of the house due to the fact that we had an extension. We got the impression that some parts of the house were not insulated. We are in the situation now where the whole house needs decorating but we can’t do anything because of the unsightly growth on the walls. If we had been told at the time that as a result of cavity wall insulation we would experience mould growth and condensation, we would not have gone ahead. Now
Carillion seem to think they can just say it is not their problem. We consider it is. If there was a problem in installing the cavity wall we should have been fully informed before work started.”
That is the first eight to 10 pages of my file, which contains some 100 or 150 pages that detail my constituents’ attempts to deal with the problem. To cut a long story short, CIGA has recognised, after an independent inspection of the property, which was very difficult to arrange, that the cavity wall insulation was indeed installed in a faulty manner. CIGA continues to wriggle away from any serious responsibility, however, and it has made half-hearted efforts to get the matter dealt with.
I am not simply concerned about the way in which the case has been handled, although that is pretty bad. A detailed summary of what has been done is full of attempts to contact CIGA, attempts to ensure that people take responsibility and failure to deal with things. Some 16 months after it was notified of the initial complaint, for example, Carillion came back and asked for details of what the problem was. We see people at the bottom end of the chain being given the usual run-around by those who have power and responsibility.
After some further work on the matter, I came across a freedom of information request made by Ms Dianna Goodwin, from which I will quote briefly. I thought it was a very good piece of work that demonstrated, as the right hon. Member for Southampton, Itchen has said, the close relationship between the guarantee agency and the industry. Without repeating everything that was said about the directors and so on, I will read Ms Goodwin’s conclusion:
“With assets in excess of 16 million pounds, CIGA certainly does have the resources to meet claims under their Guarantee—yet have a strong track record for blatantly ignoring and intransigently resisting claimants. The government set the parameters for this industry and the abuse of the system is just allowed to roll on year after year, unchecked. It is nothing short of a national scandal that this private and patently non-independent company is allowed to function at all, and high time the government stepped in to disband them. Proper and solid arrangements should be made for their Guarantees to be underwritten; also for an obligatory ombudsman service made available for all. What action will the government take please?”
I am pleased to add my constituents’ concerns to those raised by other hon. Members.
I am grateful for the right hon. Gentleman’s remarks and for those of Hywel Williams, who is the pioneer when it comes to raising the matter in the House. Does the right hon. Gentleman agree that because an underlying Government policy is driving the size and shape of the market, it is essential that Government take some responsibility for sorting things out? The problems would be bad enough in a free, consumer market with people buying and selling a service, but the market exists on the scale that it does because of the Government policy and obligations on energy companies.
It is quite right that Government should want to ensure greater energy efficiency by carrying out a policy such as this. We all want our homes to be warmer and our energy usage to be reduced, and insulation is a key part of that. It is essential, as the right hon. Gentleman says, when the Government are urging people to have such work done, that there is some sense that it is carried out properly. If things go wrong, the Government must accept some responsibility and work with the agencies that are charged with dealing with the matter to make sure that they are doing so.
Finally, I want to repeat a concern raised by the hon. Member for Arfon, who said that when he mentioned the issue locally, people appeared and said that it had been a problem for them. That is what worries me the most. If the Government want there to be a campaign on the matter by MPs all over the country, the best thing for them to do would be to defend what is happening and just say that they will look into it. If nothing is done, I promise the Minister that she will be back here with a room full of even more MPs, and that will not do anyone any good. Today offers a real opportunity to recognise the pain suffered by so many people and get something done, so that the agency lives up to its responsibilities and the companies involved know that they will be named and shamed for their work. The bottom line will be that consumers and our constituents will get a better service—the service that they deserve.
It is a pleasure to serve under your chairmanship, Mr Sanders. I begin by praising my right hon. Friend Mr Denham for securing the debate and for the work that he has done to remedy the cases of badly installed cavity wall insulation in his constituency. As George Hollingbery has said, cavity wall insulation, when it is correctly installed in an appropriate property, can offer substantial benefits to home owners. However, as we have also heard, badly or wrongly installed cavity wall insulation can cause serious problems. It is incredibly worrying to hear the examples of complaints and customer ordeals that right hon. and hon. Members have presented. I praise their constituents for their tenacity in attempting to resolve the problems.
I want to make it clear that I do not believe that anyone in the Chamber seeks to deride the insulation industry. Many good people are part of that industry, as I know from personal experience of meeting numerous installers in my constituency and in my Front-Bench brief, and from working closely with them on our energy efficiency plans. However, it is vital that within that industry, consumers are protected. They should always be at the forefront of any work that takes place, especially in connection with any policy promoted by the Government.
The consequences of badly installed cavity wall insulation are significant. Not only will a home owner suffer financially and personally trying to put it right, but, just as importantly, they will not receive the benefits that they should. The Minister and I regularly debate the fact that we have some of the worst levels of fuel poverty and the least efficient housing stock in Europe. Although we disagree on the best way to tackle that problem, I am sure we agree that it is unacceptable when work that is done to improve the quality of our housing stock leads to the sorts of problems that have been mentioned today. The consumer should always be the focus of any energy efficiency work, because consumer behaviour is often as important as putting in energy efficiency measures. Unfortunately, I do not think that that has been a strong enough element of the Government’s approach to energy policy, or indeed of Governments’ approach to energy policy for some time. It is vital, therefore, to ensure that there is adequate protection for consumers and that any issues are dealt with in an efficient and satisfactory manner.
It was concerning to hear the criticism by my right hon. Friend the Member for Southampton, Itchen of the Mark Group, which I have visited. It is a major installer in the UK, and usually it is a useful resource in policy in this area. I will certainly discuss with it its response to complaints such as those that he raised. The issues that have been raised today concern me greatly. Data protection laws permitting, I, and, I am sure, the Minister, would like to know more details and consider how the problems can be rectified. I press her to look into that.
Having listened to my right hon. Friend’s speech, it seems to me that there is a clear need to be better able to identify where liability resides for work that has been done, including under the energy companies obligation. There also needs to be swifter redress when complaints are made. It is my understanding that all work carried out under ECO and its predecessor schemes, the carbon emissions reduction target and the community energy saving programme, is recorded centrally. In addition, the bureaucracy for ECO is substantial—I have attacked the Government about the scale of that bureaucracy on several occasions—so I would be incredulous if it were not possible to identify the funding body for each installation relatively easily. Perhaps the Minister will clarify the situation.
Right hon. and hon. Members have raised concerns about the independence and operation of CIGA. That is a particular worry, and I will look into it further. My right hon. Friend the Member for Southampton, Itchen briefly raised the question of subcontracting, which has concerned me for some time, especially within ECO. There is no limit on the number of times energy efficiency work under ECO can be subcontracted. My right hon. Friend talked about home owners not knowing who was carrying out the work or even being surprised to find out that it was funded by an energy company. During my work in the area, many people in the industry have raised the level of subcontracting with me as a concern. I would be grateful if the Minister could touch on that subject in her reply. Is she happy with the current level of subcontracting, or would she consider placing limits on the level of subcontracting that is allowed? That happens with schemes in other Departments, such as welfare-to-work contracts.
I recognise that there are good operators out there, doing good, honest work, who genuinely care about improving the energy efficiency of people’s homes. I know that that is true, because I have met them. Cases such as those that have been highlighted today are, thankfully, relatively small in number. That is why it is so frustrating that when such cases occur, satisfactory redress is not offered. It would be an absolute travesty if people who need better-insulated homes put off having that work done. That is why the problem of badly or wrongly installed work must be dealt with swiftly and firmly. The evidence provided in the debate suggests that that is not happening at the moment. Despite the relative rarity of cases of badly installed insulation, the examples have become numerous enough to warrant today’s debate and the previous debate secured by Hywel Williams. Home owners clearly need a better response than the one that has been offered to constituents in the cases outlined today. Specifically, on a point raised by my right hon. Friend the Member for Southampton, Itchen, it cannot be right for companies to charge for remedial work for badly installed cavity wall insulation in homes where it should not have been installed or recommended in the first place.
I praise my right hon. Friend once again for securing this debate and allowing us the opportunity to discuss these issues. I hope that the Minister will address directly the concerns raised, and I am interested to hear her reply. She and I have different priorities and approaches when it comes to policy in this matter, but we both recognise the centrality of improving the UK’s housing stock through better home insulation. The issues raised in this debate are a clear threat to public confidence in that, and it is in everyone’s interests that we seek to rectify them.
I thank Mr Denham for raising the important topic of cavity wall insulation and the issue of compensation in cases where there have been problems. I will first comment on the policy generally and then move on to the conclusions from the debate and the specific requests that he made.
As I have said previously in energy debates, this Government recognise that improving domestic energy efficiency is a critical part of our strategy to deliver a secure, affordable, low-carbon energy system in this country. Consumer protection lies at the heart of the Government’s energy efficiency framework. We have built and nurtured strong relationships with a wide range of consumer protection bodies, including trading standards and Citizens Advice, and we are constantly seeking new ways to improve consumer protection.
In December, I personally sent out a joint communication with the chairman of the Association of Chief Trading Standards Officers to remind green deal market participants that it is their responsibility to uphold the green deal framework to ensure protection for all parties. When reports of potential breaches of the green deal code of practice are received, the Green Deal Oversight and Registration Body engages with the relevant authorities to investigate and address those reports, which can lead to action taken against the green deal participant, including withdrawal or suspension of green deal authorisation.
The Government set a target of 1 million homes to receive energy efficiency improvements between January 2013 and March 2015. I am pleased to say that we have already met that target and are on course to exceed it significantly; by the end of November 2014, more than 1 million homes had benefited from the installation of energy efficiency measures under the energy companies obligation and green deal framework. Cavity wall insulation has helped create millions of warm, energy-efficient homes in the UK. For many householders, cavity wall insulation is a sound financial investment, helping them save on their energy bills every year. A typical semi-detached household saves approximately £100 a year after the installation of cavity wall insulation.
I apologise for intervening so early, but I have a question to which the Minister may well not know the answer, so this will give her time for a note to be passed forward. She mentioned the possible withdrawal of green deal certification. Does the Mark Group have green deal certification?
I thank the right hon. Gentleman for his consideration of timing. I will endeavour to come back to him on that point before closing.
Since 1995, uptake of cavity wall insulation has increased significantly with the launch of successive energy-efficient home improvement schemes by this Government and the last, including schemes aimed at fuel poverty such as Warm Front, and those focused on climate change, such as the energy company obligation and the green deal, which enable homeowners to install energy efficiency measures, including cavity wall insulation. Between July 2010 and September 2014, 2.27 million homes had cavity wall insulation fitted; of those, 1.7 million did so under Government schemes. At the end of September 2014, 13.9 million homes had cavity wall insulation, or 72% of properties with a cavity wall. Up to the end of November 2014, some 462,103 cavity wall insulation installations were delivered under ECO, or 37.9% of total ECO measures, making them the most popular measure undertaken by households.
I will outline the protections in place for customers who receive cavity wall insulation. The installation of all cavity wall insulation must meet the requirements of the Building Regulations 2000, and the materials used to insulate cavity walls are subject to specific standards and must be certified by a technical approval body. To ensure the quality of installations under the green deal and ECO, installers must undergo a rigorous authorisation process to become authorised participants. Participants must comply with a publicly available specification setting out requirements for the installation of energy efficiency measures in existing buildings and levels of monitoring of those installations, including for cavity wall insulation. Furthermore, under the previous carbon emissions reduction target and community energy saving programme, and their successor schemes, the green deal and ECO, cavity wall insulation measures must be accompanied by a 25-year guarantee.
The green deal framework regulations require a green deal provider to agree, as part of any green deal plan, to guarantee the functioning of the improvements and to repair any damage to the property caused by the improvement. Under ECO and the CERT and CESP schemes before it, cavity wall insulation measures were required to be accompanied by an appropriate guarantee. Ofgem sets out the requirements for those guarantees in its ECO guidance: they must include a mechanism that gives assurance that funds will be available to honour the guarantee; the guarantee should last 25 years or longer; the guarantee must cover the costs of remedial and replacement works plus materials; there must be an assurance framework for the quality of installation and the product used in the installation. The suitability of the framework will be assessed and verification may be required through independent assessment by an independent United Kingdom Accreditation Service-accredited or other appropriate body. A list is available on the Ofgem website with details of guarantees that have been reviewed and are considered to meet the criteria for an appropriate guarantee under ECO.
I will ask the Minister a direct question put to me by one of my constituents. I said in my speech that my area is a category 4 area, and George Hollingbery said that his was category 3. Should cavity wall insulation be installed in category 4 areas at all?
The hon. Gentleman will recall that we have debated that specific subject in this Chamber previously. My recollection is that mostly it should not have been. We went through the maps to which he referred in his comments, and the concerns that it had been inappropriately installed.
To return to the context of this debate, when the issue was put before the Government, we began conversations with the Cavity Insulation Guarantee Agency, which as we heard earlier is the largest cavity wall guarantee provider. We discussed the level and nature of existing complaints in order to understand the issue in further detail. The total number of complaints received by CIGA since 2010 is 6,890 and there have been 1.5 million cavity wall insulation installations since 2010, which implies a claim rate of 0.5% since 2010. The total number of outstanding unresolved cases on which CIGA tells me it is working is 171.
I will have to return to that question to give the right hon. Gentleman a full answer. When I conclude my comments, I will address some of his specific requirements, including requesting a meeting between CIGA and my Department officials and me after this debate, and I will ensure that that is one of the questions that we address.
Before my hon. Friend leaves that point, the bright spark in all this is that we know that she will take the issue seriously, as that is her reputation, so we appreciate that she is involved. If there are so few complaints, bearing in mind how much work has been done, is there not an even greater necessity for that small number of complaints to be properly dealt with? CIGA cannot complain that it is overrun with complaints, so why should some of them have been dealt with so badly?
That is a very good question, which I will put to CIGA. My right hon. Friend is absolutely right. I will require more content from CIGA than the answers that it has given us so far. CIGA has already said that it will provide us with a list of responses to particular questions raised with it, some of which have been raised in this debate, and I will be happy to share those once they are received.
If a consumer has concerns that cavity wall insulation has been installed incorrectly, they should initially contact the installer who carried out the original work to see whether the issues can be rectified. If that does not resolve the issue, they should contact the guarantee provider. If they cannot locate their guarantee, they can try to contact the guarantee provider directly, which may have a record of their guarantee.
For measures installed under the CERT, CESP and ECO schemes, if there is no effective guarantee in place, the customer can contact the energy supplier that funded the measure originally. If the energy supplier cannot be found via Ofgem, consumers may wish to obtain further guidance from their local trading standards office or seek professional legal advice.
If there is a dispute about a green deal installation and an agreement cannot be reached between the consumer and the green deal provider, the consumer can contact the green deal ombudsman, who will investigate complaints and determine redress. Depending on the type of complaint, the ombudsman will, following their investigation, refer cases to the Secretary of State to determine redress or impose sanctions.
The green deal registration and oversight body has a technical monitoring strategy in place to ensure the full compliance of all green deal participants. Furthermore, Ofgem mandates technical monitoring of installation standards under ECO and the predecessor CERT and CESP schemes, and it requires ECO installers to contract for independent inspections of 5% of all measures installed, including cavity wall insulation, to ensure that they meet the required standards. Hywel Williams said that 5% is inadequate. I will consider his comments and speak to Ofgem about whether it is sufficient. He said that in his experience it is not sufficient, so I will come back to him on that issue.
Adding to the list of things for the Minister to come back on, there is also the issue of installations carried out under CERT and CESP. It is clear from Ofgem’s Freedom of Information Act replies that it does not have those data. My hon. Friend Jonathan Reynolds suggested that sufficient paperwork should be held somewhere to enable the match to be made between householders and energy suppliers, even under the two earlier schemes. Can the Minister advise us where that information is held? Will she make every effort to identify that information for each of those historical cases?
Jonathan Reynolds was right to raise that issue. I will review the regime for the legacy issues now and after the ECO regime expires in 2017. I agree that we need clarity about what happened in the past, and that we must make improvements for the future.
Let me move on to the suitability of cavity wall insulation for different properties. As my hon. Friend George Hollingbery said, not all properties are suitable. The hon. Member for Arfon and I have discussed that issue previously in this Chamber. A dwelling is suitable for standard cavity wall insulation if its external walls are unfilled cavity walls, the cavity is at least 50 mm wide, its masonry or brickwork is in good condition and its walls are not exposed to driving rain. It is important that cavity wall insulation is installed only in suitable homes and to the required standards. Pre-installation surveys are key in identifying suitable properties. Cavity wall insulation is not suitable in homes that are exposed to wind and driving rain, as my hon. Friend the Member for Meon Valley said.
The British Standards Institution’s regulations offer a step-by-step procedure for assessing properties’ suitability for cavity wall insulation and provide guidance for assessing exposure by looking at topography, shelter and rain spells. Technical certifications—for example, the BBA certificates—state how and where products can be used.
Members who have spoken in this debate have said that they want complaints to be properly handled, however many there are, and their constituents to get proper redress. It is clear that more needs to be done. The right hon. Member for Southampton, Itchen asked about the Mark Group. I can confirm that it is an authorised green deal provider. He requested several commitments from me, and I want to state clearly for the record that my Department and the Government take very seriously the concerns that have been raised about people’s homes. People’s homes are not just an asset or something that costs them money; they are essential to their livelihoods and well-being, which is why we take this issue so seriously.
I will speak to Ofgem, and I will write to it to ask for a summary of the number of complaints it has received and its view on that. I will consider conducting a review. I will consider the case for introducing independent oversight for all guarantees, not only those under CIGA. Concern about the guarantees, their implementation and access to them has been one of the features of this debate. I am concerned about the level of transparency—an issue that has been raised. The right hon. Member for Southampton, Itchen and others said that they were concerned about the independence of the directors of CIGA. I will have discussions with Ofgem about that issue.
The hon. Member for Arfon asked whether it would be possible to return to CWI properties after two years to ensure that the insulation was correctly installed. I will consider putting in place an independent assessment to look at properties two years after installation. I will also consider regulating the initial sales conversation—the right hon. Member for Southampton, Itchen raised that issue and quoted from various sales conversations. I have listened to the personal stories that Members have put on the record.
The Minister has given us a list of things that she will consider. I agree with Alistair Burt, who said that the Minister will take those things seriously and pursue them. However, the dissolution of Parliament is approaching, and I and others will leave this place. Will she give me the satisfaction of promising to consider these issues and come up with answers before 30 March? It would be a great shame if she were to take this issue forward and, for whatever reason, not find herself in the same position after the election. It is not unreasonable to ask her, in just under two months, to consider these issues and report back to the House.
I thank the right hon. Gentleman. He is absolutely right that I take this issue seriously and that I intend to get some answers on it. I commit to writing to him before Parliament dissolves to update him on where I am. I will do my level best to get as many answers as possible to address the concerns that he raised. I will start by making the points that I just outlined to Ofgem and asking for a meeting with CIGA to raise those complaints and issues.
On exactly the same point, I reassure anybody following the debate elsewhere that if my hon. Friend the Minister is not able to complete that work and get all the answers in that time, it will be possible to pursue these matters in the next Parliament, if the good people of North East Bedfordshire and Arfon allow it. Therefore, there should not be a break in our concerns. Our constituents can be reassured that the matter will be carried through, even if some distinguished right hon. colleagues will no longer be with us.
My right hon. Friend is absolutely right. Despite my commitment to come back to the right hon. Member for Southampton, Itchen with answers by the end of March, we are unlikely to have fully resolved the serious complaints and issues that have been raised here. I am sure that the future Minister, whoever they are, will continue that work, and I will ensure that it is left in good order for them. However, I hope I will be back in this role.
I thank all right hon. and hon. Members for their comments. I want to reassure their constituents that we take this issue very seriously, and I will continue to take a personal interest in it. |
B-Sides: 8-Track
A series highlighting up-and-coming artists that are worth checking out
Each week reporter Jessica Myers finds rising artists that students might want to tune in to.
Introduction:
With the ability to publish a music blog on a platform like State Press, I am taking the opportunity to share eight songs from underrated artists that I tend to listen to far too often and that get me incredibly emotional every time.
So, here’s my mixtape, similar to what your on-and-off boyfriend would give you in the ’80s. I hope it’s not as cheesy as Brad’s “Songs that remind me of your eyes” tape. My mix to you consists of eight underrated tracks that you need to listen to.
1. “Life + You” by infinite bisous
Let’s start off slow… and build up, shall we?
That’s what happens in this incredibly intergalactic track, "Life + You."
This track is in no rush, and begins with twisty, ringing synth sounds. It gradually builds up, like you're on a spaceship preparing for liftoff.
At one minute and thirty seconds the song blasts into spacey electronic sounds. This track covers the struggle of taking one day at a time to battle the emptiness of loneliness and loss. This song sounds like you’re floating alone in space. Its sobering yet dizzying spirit makes this psychedelic, electro-pop track send chills down your spine. Infinite bisous ends the song with an ominous, elongated extension of the sound “ah,” turning it into almost a yell, which is a haunting ending to this great track.
2. “Suddenly” by Drugdealer
I’m all for new music sounding like it’s made from decades back and this song gives you major old-school vibes through airy vocals and whopping instrumentals.
There are some nice funk feelings throughout this track, and a tasteful touch of saxophone. This song has different transitional layers, keeping this soothing, chill number naturally easy to listen to all the way through.
This tune is great for any ’70s-themed party you have, so queue her up and get ready to boogie.
3. “12:34 AM” by Billy Lemos, Maxwell Young and Amar Opollo
I found this track on Spotify's "Discover Weekly" playlist and was hooked by the smooth R&B vocals in the beginning. As I continued listening to this song, I became incredibly excited by the club break half-way through the short track and Gambino-esque vocals.
This is a shorter song, but I’ve put it on all my party playlists. It’s R&B and club, with a little alternative edge to it.
This song is perfect for any late-night highway drive.
4. “Eclipse” by Inner Wave
This track is not the most upbeat number, but when I saw Inner Wave by chance at the Crescent Ballroom in Phoenix a few weeks back, people were surprisingly moshing to this song. It has this low-key, catchy energy to it, but when you think about it, you could definitely bop to it.
The high-in-the-register vocals paired with a catchy flow makes this track super rhythmic. You'll want to tap your feet, wiggle your shoulders and maybe even go see the band live and throw your body onto your neighbors.
5. “Star Destroyer” by Jonah Renna
When a good friend of mine sent me a link to this song, I was immediately interested due to the name of the track — I have a weird fascination with space if you can't already tell. As the journey that is listening to this song for the first time began, I came to the realization of how much I needed this in my life, and maybe you need it too.
Not only does this song use an incredible amount of synth and have a great tempo, but the lyrics also sting, capturing the struggle of trying to grasp onto something that no longer exists. This song sums up the feeling of holding onto the past and clinging onto distant memories.
The line “You don't love me no more,” is repeated throughout the song, a repetitive recognition of a harsh realization.
There’s also a rap break. I’ll leave my favorite lines here:
"But if the end is near imma need someone to spend it with / 'Cause love isn't infinite, but death is inevitable / And if heaven does exist, I hope it ain't different / Than how it is with you..."
6. “Chariot” by Beach House
Really, Jess, a Beach House song? Beach House isn’t underrated.
To which I reply, stop right there. It’s my blog and I’ll add a Beach House song if I want to.
Beach House released a “B-Sides and Rarities” album last year and it was extremely overlooked. As a huge Beach House fan, I became extremely overwhelmed by the release of new-ish Beach House music and listened to the whole thing through immediately…
I could not tell you how many times I have listened to the song “Chariot” for the sole purpose of hearing the lines, “But you said, ‘Angel’s wings, time we spent’ / Nobody knows how close it will come / The rite of the sands / My heart in your hands.”
Every single time Beach House's lead vocalist Victoria Legrand sings these lines, my heart breaks a little, in a good way. This track also has an infectious cadence, and, of course, has high points that Beach House songs tend to have that remind you that you are a tiny speck on a spinning ball endlessly floating through space.
7. “Flirting in Space” by Brad stank
Did someone mention space? This jazzy track is filled with low, smooth vocals and creamy guitar solos. Musical artist Brad stank seduces you with his reflective tune and laid-back charm.
This song’s slow rhythm paired with high-chimed guitar solos makes this number, dare I say it, undeniably groovy and charming.
If I were flirting in space with some intergalactic being, I’d want this song to be playing.
8. “Otherside” by Perfume Genius
Mike Hadreas, the legend behind the name Perfume Genius, gently placed this insanely poetic and stunning number at the forefront of his 2017 album “No Shape.” This track is reminiscent of a rainy day epiphany.
This song deserves my favorite space on playlists, albums, anything – The. Last. Song. You know an album is truly great when the last song sums up the spirit, emotions and overall feeling of the entire album. Hadreas switches it up and does this in the beginning of his album with this track.
It opens with this chalky, sad little piano number, ringing in gentle feedback sounds. I like to think of the feedback hums as soft, almost inaudible rain, especially here in Arizona. The sound is comforting. His layered vocals wave over the instrumentals, blowing past you like a gentle wind. The contrast of layered vocals and stand-alone vocals adds emotion to this powerful track. As he sings the line “Rocking you to sleep / From the Otherside” for the very first time, the mood immediately lifts you into a whole new realm of existence as there is a ringing explosion of sound and vision.
This song is a startling, eruptive welcome to Perfume Genius’ fourth album, and an extremely emotional goodbye to this playlist. Give it a listen and you’ll understand it all.
Reach the reporter at jlmyer10@asu.edu or follow @jessiemy94 on Twitter.
Like The State Press on Facebook and follow @statepress on Twitter. |
Spunti e ricerche
Spunti e ricerche is an annual peer-reviewed academic journal that covers research in Italian studies. Individual volumes often consist of articles on a broadly defined theme, on a particular writer, or on various subjects. The editors-in-chief are Raffaele Lampugnani (Monash University), Annamaria Pagliaro (Monash University), Antonio Pagliaro (La Trobe University), and Carolyn James (Monash University). Although other Australian journals pre-dated Spunti e ricerche, such as the now-defunct Altro Polo (1978-1996), Spunti e ricerche is the oldest active academic journal in Australia specifically devoted to Italian studies.
History
Spunti e ricerche was the brainchild of a number of tutors from the Italian Department of the University of Melbourne in 1981-1982, and the first volume appeared in 1985. It was intended to foster, increase, and diversify Australian research and enquiry into Italian studies, specifically in the fields of Italian literature, politics, linguistics, history, society, cinema, and art, but it also had the broader aim of encouraging manifestations of the culture by Italians, or about Italians, in Australia.
Abstracting and indexing
Spunti e ricerche is indexed and abstracted in:
International Bibliography of Book Reviews of Scholarly Literature on the Humanities and Social Sciences
International Bibliography of Periodical Literature
MLA International Bibliography
Just when we thought the world of beauty couldn’t get any more complicated, we learn that something called “sandbagging” exists. It sounds complex, doesn’t it? Double sigh.
Now, raise your hand if you’ve ever had your makeup run. You spend a ton of time getting your makeup artist on and then, just like that, all your hard work is ruined. Suddenly, that fancy product you bought at Sephora on Treat Yo’ Self Tuesday just doesn’t seem worth it anymore.
Sandbagging essentially uses the concept of sandbags and applies it to makeup artistry. Since sandbags are put down to prevent flooding, the same thing can be said of sandbagging: It prevents all your creative handiwork from running and/or smudging.
BuzzFeed reveals that this technique was developed by Kim Kardashian’s makeup artist, Mario Dedivanovic, and it helps to keep her makeup #flawless. All you have to do is apply your base as you normally would. Then, after this step is complete, you use a makeup sponge to place a ton of loose powder under your waterline and around your mouth.
Once you can see a visible line of powder on these areas, then proceed with the rest of your makeup routine, leaving the powder exactly where it is. Finally, brush off the excess powder when you’re done!
What the powder does is absorb any excess oils that threaten your makeup’s staying power. “It stops eyeshadow, kohl pencils and mascara from smearing onto the lower lash line. This includes lower mascara too!” explains makeup artist Wayne Goss. “It’s really very similar to ‘baking’ but with this technique you are going right to the lower lash line. As far up as you can go. Then allow it to sit for a while, then brush off. NOTHING will smear, smudge or move again! Promise!”
Technology Learning Sessions
In December, staff have an opportunity to attend voluntary PD sessions.
Purpose:
*The sessions are first come, first served and will be capped at 30 participants. The last day to register is 12/11/2020.
Register for PD Sessions On Wednesday, December 16, 2020
PD Session Descriptions Below
Strategies for Engaging Students in a Virtual Class
Time: 3:20 pm to 4:20 pm
Recommended Grade Level: 9-12
Facilitator: Christopher Nesi
Description:
In this session, participants will come away with strategies for engaging students in the virtual class setting. Participants will have the opportunity to share their best practices as well as leave with strategies they can try tomorrow.
Virtual Instruction with Pear Deck
Time: 3:20 pm to 4:20 pm
Recommended Grade Level: K-12
Facilitator: Jimmy Pineda
Description:
Pear Deck is a resource that can be utilized in any virtual classroom. Pear Deck is a great way to connect and engage students with different types of questions, and students are able to respond in real-time, especially now during virtual instruction. Adding audio to instruction and providing feedback to student responses are new valuable features. There are many other features that will promote effective instruction and student engagement/learning when implementing Pear Deck in your classroom.
Organizing Your Google Drive Like A Pro (*Session Has Reached capacity)
Time: 3:20 pm to 4:20 pm
Recommended Grade Level: K-12
Facilitator: Jason Caputo
Description:
Tired of losing your files in Google Drive? You know you had a great lesson on this topic...if only you could find it! After this session, you'll never lose a file again. You'll learn how to use folders, shared drives, and smart file names alongside sorting and searching functions in order to keep your Google Drive in perfect order.
Create Your Own Adventure
Time: 3:20 pm to 4:20 pm
Recommended Grade Level: K-8
Facilitator: Lisa Capote Llerena and Jamie Schoenbach
Description:
This PD will focus on Creating Adventures through Google Slides and Google Forms to engage students during virtual learning as well as implement fun educational activities for teachers to include in their lessons.
Engaging Your Class With Nearpod
Time: 3:20 pm to 4:20 pm
Recommended Grade Level: K-8
Facilitator: Eric Maitland
Description:
Nearpod is the program you never knew you needed in your classroom. From running your class to administering exams, this program can transform your classroom into a high-paced, engaging experience for you and your students, virtually or in person. Its various tools put a plethora of options in the hands of teachers of any grade. In this class, I will be introducing you to some of the basics of the program and the tools I use the most for my own classes. You can come knowing nothing, but it would be helpful if you create a free Nearpod account and add the free extension to Google Chrome beforehand.
Introduction {#Sec1}
============
Contagious caprine pleuropneumonia (CCPP), caused by *Mycoplasma capricolum* subspecies *capripneumoniae* (*Mccp*), is a severe and devastating respiratory disease with high morbidity and mortality in goats (Sadique et al. [@CR31]; Tsehay et al. [@CR39]), causing considerable economic losses (Asmare et al. [@CR2]). It occurs in many countries in Africa, Asia, and the Middle East (Prats-van der Ham et al. [@CR29]) and is a classical trans-boundary animal disease (Shahzad et al. [@CR34]). Moreover, the disease is included in the list of notifiable diseases of the World Organization for Animal Health (OIE [@CR27]) as it threatens a significant number of goat populations throughout the world and has a considerable socioeconomic impact in infected territories (Atim et al. [@CR3]). Though the disease is mainly found in goats, subclinical cases have been reported in sheep and some wild ruminant species (Asmare et al. [@CR2]).
The classical disease caused by *Mycoplasma capricolum* subspecies *capripneumoniae* (*Mccp*) is predominantly respiratory (Thiaucourt et al. [@CR37]). Typical cases of CCPP are characterized by extreme fever (41--43 °C), and high morbidity and mortality in susceptible herds affecting all ages (AU-IBAR [@CR4]). Associated common clinical signs are anorexia, weakness, emaciation, dullness, exercise intolerance, and respiratory signs such as dyspnea, polypnea, coughing, and nasal discharges (Shahzad et al. [@CR34]). Further, abortion and high mortality rates have been reported (Wazir et al. [@CR40]).
Commonly used serological tests to detect the antibody response of goats to *Mccp* are indirect hemagglutination, complement fixation, and latex agglutination (LAT) (Samiullah [@CR32]). Recently, a competitive enzyme-linked immunoassay (cELISA) for CCPP has been developed and found to be highly specific (Peyraud et al. [@CR28]). The introduction of the cELISA for CCPP will permit the implementation of serological studies on a large scale (Younis et al. [@CR43]). In addition to serological tests, molecular detection of *Mccp* directly in clinical samples was found to be highly sensitive and specific and should be used for the diagnosis of CCPP, especially in outbreaks, to confirm the disease for rapid control (Elhassan and Salama [@CR9]).
In Ethiopia, goats play a unique role in the livelihood of pastoral communities, especially for women, as they provide milk and dairy products and are a source of income for the family to cover school fees for children and other family expenses. Despite the presence of a massive goat population and their important socio-economic role, the health of small ruminants in general and goats in particular has received little attention so far (Lakew et al. [@CR19]). Only a few studies have been carried out in the area, but these showed that CCPP is prevalent and causes considerable mortality in goats. For instance, between 2011 and 2015, 83 outbreaks affecting 23,950 goats were reported (MoLF [@CR24]). Hence, reliable epidemiological information is needed in order to design effective control measures. Specifically, antigen detection of *Mccp* and the role of sheep in the maintenance of the disease need to be explored. The objectives of the study were to assess the epidemiology of CCPP in the Borana zone and to characterize the causative agent using molecular techniques.
Materials and methods {#Sec2}
=====================
This study was conducted in the Borana zone that is predominantly inhabited by the Borana community and extends to the Kenyan border in the South; Somali region in the South East; Southern Nation, Nationalities, and People Region (SNNPR) in the West and North; and Guji zone in the North East. Borana rangeland is characterized by a semiarid to arid climate (Kamara et al. [@CR17]; Haile et al. [@CR14]). Geographically, the area is located between 4 and 6° N latitude and between 36 and 42° E longitude, with altitudes ranging from 1000 to 1700 m above sea level. The mean annual rainfall of the area ranges from 250 to 700 mm. The annual mean temperature varies from 19 to over 25 °C. Extensive pastoralism is the main means of livelihoods for the Borana people (Gelagay et al. [@CR11]).
Multistage random sampling was applied to select the study animals. The sampling frame comprised a list of all districts in the zone and pastoral associations (PAs) or villages. Three districts were selected randomly, and in each of them, two PAs where no CCPP vaccination had been conducted for more than 2 years were selected. The resulting six PAs/villages were Areri and Adegalchet from Elwoya, Tile Mado and Dambi from Moyale, and Dida Yabello and Harwoyu from Yabello (Fig. [1](#Fig1){ref-type="fig"}).

Fig. 1 Map of Ethiopia showing study areas
Finally, data were collected from a total of 161 households residing in the study villages. The distribution of households across the villages was 29, 30, 29, 20, 30, and 23 households from Adegalchet, Areri, Dambi, Tile Mado, Dida Yabello, and Harwoyu, respectively. A total of 789 goats from 161 households in the selected PAs were sampled. Besides serum sample collection from the districts, the randomly selected households (*n* = 161) that kept small ruminants were interviewed using a semi-structured questionnaire to capture the general information they had on CCPP. If the flock size of a household was greater than five, 4 to 9 goats were selected from each flock, whereas all goats per household were sampled if the flock size was less than or equal to five. In addition, 101 in-contact sheep were selected purposively. The goats selected were identified with ear tags, and information on household profiles and attributes of animals was collected before sampling. The age of animals was estimated using information from the owners and dentition. Besides serum samples, pleural fluid and lung tissue samples were collected from sero-positive, clinically affected goats for molecular and bacteriological investigations. During sampling, recently introduced animals were excluded to avoid the risk of including vaccinated animals. To categorize flock size into small, medium, and large, five key informants were used from each district.
Sample size estimation {#Sec3}
----------------------
The estimation of sample size for epidemiological investigation using serological assay was done using the formula given by Thrusfield ([@CR38]) considering 95% confidence level, expected prevalence of 31.6% (Lakew et al. [@CR19]) and 5% absolute precision:

$$ n=\frac{1.96^2{P}_{\mathrm{exp}}\left(1-{P}_{\mathrm{exp}}\right)}{d^2} $$

where *n* is the required sample size, *P*~exp~ is the expected prevalence, and *d* is the desired absolute precision (5%).
Accordingly, a minimum of 332 goats was obtained. To account for intra-class correlation at herd, village, and district levels, a design effect of 2 was considered, resulting in a minimum sample size of 664 (calculated with EpiInfo 7.2).
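As a quick cross-check of the figures above, the Thrusfield formula and the design-effect adjustment can be reproduced in a few lines (a minimal sketch; the inputs are the ones stated in the text):

```python
z = 1.96            # 95% confidence level
p_exp = 0.316       # expected prevalence (Lakew et al.)
d = 0.05            # desired absolute precision
design_effect = 2   # to account for clustering at herd, village, and district level

n = (z ** 2) * p_exp * (1 - p_exp) / d ** 2
print(round(n))                  # ~332 goats before adjustment
print(round(n * design_effect))  # ~664 goats after applying the design effect
```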
Blood sample collection {#Sec4}
-----------------------
For serological examination, approximately 5--7 mL of blood was collected from the jugular vein of apparently healthy goats and sheep that had not been vaccinated against CCPP for at least 2 years, using sterile vacutainer tubes and needles. Samples were then transported in an icebox to the microbiology laboratory of the Yabello Pastoral and Dryland Agriculture Research Center. The sera were separated after centrifugation at 1500 rpm for 10 min. The serum samples were collected into sterile cryogenic tubes and stored at − 20 °C until they were transported to the National Animal Health Diagnostic and Investigation Center (NAHDIC), Sebeta, Ethiopia, for analysis.
Collection of tissue samples {#Sec5}
----------------------------
Three goats that were positive in the cELISA test or which were suspected to be clinically affected by CCPP after thorough clinical examination were purchased and sacrificed for postmortem examination. Gross pathological lesions were observed and samples of lung at the interface between the consolidated and unconsolidated healthy tissues and pleural fluids were collected and transported to the National Veterinary Institute (NVI), Bishoftu, Ethiopia, for molecular analysis using polymerase chain reaction (PCR) as described by Woubit et al. ([@CR42]).
Laboratory analysis of samples {#Sec6}
------------------------------
The serum samples were examined for the presence of specific antibodies against *Mccp* by using a commercial cELISA (Idexx, France), according to the instructions of the manufacturer. The test is characterized by a specificity of 99.9%. At the end of the reactions, ELISA plates were read at 450 nm with a BioTek ELx800 ELISA reader to determine the optical density, and the percentage of inhibition was calculated. Samples with a percentage of inhibition greater than or equal to 55% were considered positive (Peyraud et al. [@CR28]).
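For illustration only: once the percentage of inhibition (PI) has been computed from the optical densities according to the kit instructions, applying the 55% cutoff is a simple threshold test. The sample identifiers and PI values below are hypothetical, not study data:

```python
cutoff = 55.0  # percentage of inhibition considered positive (Peyraud et al.)

# Hypothetical PI values for three sera (not study data)
pi_values = {"goat_001": 72.4, "goat_002": 41.0, "goat_003": 58.9}

for sample_id, pi in pi_values.items():
    status = "positive" if pi >= cutoff else "negative"
    print(f"{sample_id}: PI = {pi:.1f}% -> {status}")
```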
Polymerase chain reaction {#Sec7}
-------------------------
Samples for polymerase chain reaction (PCR) were prepared as described by Woubit et al. ([@CR42]). About 1 g of each lung tissue and bronchial lymph node sample was chopped with scissors, ground with a mortar and pestle, mixed with 9 mL of phosphate buffer solution (PBS), and transferred to test tubes. For pleural fluids, 1 mL of pleural fluid was mixed with 9 mL of PBS and subjected to DNA extraction. The primers used (Mccp-spe-F, 5′-ATCATTTTTAATCCCTTCAAG-3′ and Mccp-spe-R, 5′-TACTATGAGTAATTATAATA-TATGCAA-3′) amplify a DNA fragment of 316 bp; PCR conditions were set as described by Woubit et al. ([@CR42]).
Data analysis {#Sec8}
-------------
Data collected from the field and laboratory assays were entered and stored in a Microsoft Excel spreadsheet, screened for proper coding and errors, and then analyzed. Disease prevalence and odds ratios were calculated using STATA 13.0 (Stata Corp. 1985--2013) statistical software. Logistic regression analysis was used to measure the association between potential risk factors and sero-prevalence. Variables with a *p* value of less than 0.05 in the univariable analysis were included in the multivariable analysis, and a multivariable model was fitted. Finally, odds ratios and 95% confidence intervals were calculated, and disease-associated risk factors with a *p* value less than 0.05 were considered significant.
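The analysis described above was run in STATA; an analogous multivariable logistic regression can be sketched in Python with statsmodels. The data frame below is synthetic and the column names are assumptions made only to show the pattern of obtaining odds ratios, 95% confidence intervals, and p values, not a reproduction of the study analysis:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300  # synthetic records, not the study data

df = pd.DataFrame({
    "district": rng.choice(["Elwoya", "Moyale", "Yabello"], size=n),
    "sex": rng.choice(["Female", "Male"], size=n),
    "age_years": rng.integers(1, 7, size=n),
    "seropositive": rng.integers(0, 2, size=n),
})

# Multivariable logistic regression; C() marks categorical predictors
model = smf.logit("seropositive ~ C(district) + C(sex) + age_years", data=df).fit(disp=0)

# Odds ratios with 95% confidence intervals, plus p values
odds_ratios = np.exp(pd.concat([model.params, model.conf_int()], axis=1))
odds_ratios.columns = ["odds_ratio", "2.5%", "97.5%"]
print(odds_ratios)
print(model.pvalues.round(3))
```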
Results {#Sec9}
=======
Survey result on symptoms of CCPP observed {#Sec10}
------------------------------------------
During the current survey, the common symptoms of CCPP mentioned by respondents in the three study districts were coughing, fast breathing, depression, sudden death, inappetence, diarrhea, rough hair coat, nasal discharge/difficulty in breathing, and reluctance to walk, with 42%, 26.7%, 11.5%, 6.5%, 5.4%, 3.4%, 2.2%, 1.3%, and 1%, respectively, as indicated in Fig. [2](#Fig2){ref-type="fig"}.

Fig. 2 Clinical symptoms of CCPP as reported by respondents (*N* = 161)
Sero-prevalence and associated risk factors for CCPP in goats {#Sec11}
-------------------------------------------------------------
From the 161 households involved in the study, sera of 789 animals were collected. Sero-positivity was detected in all localities surveyed as depicted in Fig. [3](#Fig3){ref-type="fig"}. Two hundred forty-six (31.2%) of the collected sera tested positive for anti-*Mccp* antibodies. The highest prevalence (36.7%) was observed in Moyale district, followed by Yabello (32.7%) and Elwoya (22.6%) (Table [1](#Tab1){ref-type="table"}). The difference in sero-prevalence between districts was statistically significant (*p* = 0.001). There was also a significant difference in the sero-prevalence of CCPP between age groups (*p* \< 0.001), in which adult goats (37.3%) were more likely to test positive than young goats (24.7%). Higher sero-prevalence was recorded in female goats (32.1%) than in males (29.1%), although this difference was not statistically significant. Similarly, the sero-prevalence of CCPP was 34.3%, 32.2%, and 28.8% in small, medium, and large flock sizes, respectively. However, the difference in prevalence among flock sizes was not statistically significant.

Fig. 3 Proportion of sero-positivity in goats by locality

Table 1 Results of univariable analysis to identify risk factors of sero-prevalence of CCPP in goats in Borana zone, Oromia, Ethiopia

| Risk factor | Number | Test positive | Prevalence (%) | *X*^2^ | *p* value |
|---|---|---|---|---|---|
| District | | | | 13.618 | 0.001 |
| Elwoya | 252 | 57 | 22.6 | | |
| Moyale | 332 | 122 | 36.7 | | |
| Yabello | 205 | 67 | 32.7 | | |
| Sex | | | | 0.73 | 0.393 |
| Female | 535 | 172 | 32.1 | | |
| Male | 254 | 74 | 29.1 | | |
| Age | | | | 14.455 | \< 0.001 |
| Adult | 405 | 151 | 37.3 | | |
| Young | 384 | 95 | 24.7 | | |
| Flock size | | | | 1.82 | 0.402 |
| Small | 175 | 60 | 34.3 | | |
| Medium | 267 | 86 | 32.2 | | |
| Large | 347 | 100 | 28.8 | | |
| Overall | 789 | 246 | 31.2 | | |
Fitting a multivariable regression model revealed that, among the risk factors considered in the analysis (Table [2](#Tab2){ref-type="table"}), district and age were associated with sero-positivity (*p* \< 0.05), whereas sex and flock size had no statistically significant effect. The results showed that animals in Moyale and Yabello districts had about twice and 1.6 times higher odds of being positive for CCPP, respectively, than animals reared in Elwoya district. Similarly, the odds of CCPP sero-positivity significantly increased by about 1.5 times for each additional year of age (Table [2](#Tab2){ref-type="table"}).

Table 2 Results of multivariate logistic regression analysis of sero-prevalence of CCPP in goats

| Risk factor | Odds ratio | Std. Err. | *z* | *p* \> \|*z*\| | 95% confidence interval |
|---|---|---|---|---|---|
| District | | | | | |
| Moyale | 2.050 | 0.3982 | 3.7 | \< 0.001 | 1.401--2.999 |
| Yabello | 1.611 | 0.3457 | 2.22 | 0.026 | 1.058--2.453 |
| Sex | | | | | |
| Male | 0.924 | 0.157 | − 0.47 | 0.64 | 0.662--1.289 |
| Age in year | 1.472 | 0.157 | 3.64 | \< 0.001 | 1.195--1.814 |
| Flock size | | | | | |
| Medium | 1.172 | 0.211 | 0.88 | 0.378 | 0.823--1.669 |
| Small | 1.429 | 0.297 | 1.72 | 0.086 | 0.951--2.146 |
| \_cons | 0.137 | 0.036 | − 7.55 | 0.000 | 0.081--0.229 |
Sero-prevalence and associated risk factors of CCPP in sheep {#Sec12}
------------------------------------------------------------
From a total of 101 serum samples collected from apparently healthy sheep and tested by cELISA, 13 (12.9%) were found positive. The differences in sero-prevalence between age groups, sex, and districts examined were not statistically significant (*p* \> 0.05) as presented in Table [3](#Tab3){ref-type="table"}.

Table 3 Results of multivariable logistic regression analysis of associated risk factors of CCPP in sheep in the study area

| Risk factors | Odds ratio | Std. Err. | *z* | *p* \> \|*z*\| | 95% Conf. interval |
|---|---|---|---|---|---|
| Age in year | 1.185 | 0.603 | 0.33 | 0.738 | 0.437--3.213 |
| Sex | | | | | |
| Male | 0.320 | 0.274 | − 1.33 | 0.184 | 0.059--1.715 |
| District | | | | | |
| Moyale | 1.015 | 0.751 | 0.02 | 0.984 | 0.238--4.329 |
| Yabello | 0.885 | 0.697 | − 0.15 | 0.877 | 0.189--4.138 |
| \_cons | 0.150 | 0.163 | − 1.74 | 0.081 | 0.018--1.264 |
Results of gross pathological examination {#Sec13}
-----------------------------------------
Gross pathological changes observed in three goats showing clinical signs of CCPP include accumulation of fluids in the pleural cavities, adhesion of lungs to the thoracic wall, frothy discharge in the trachea, enlarged bronchial lymph nodes, pneumonic lung tissues, and pleural fluids containing large clots of fibrin (Fig. [4](#Fig4){ref-type="fig"}).

Fig. 4 Postmortem findings of CCPP-infected goats. **a** Accumulation of lung exudate in the thoracic cavity; **b** fibrous adhesion of lungs to the chest wall; **c** froth in the trachea; **d** enlarged respiratory (mediastinal) lymph nodes; **e** lung with areas of pneumonia; and **f** lung exudate containing large clots of fibrin
Mccp detection and confirmation using conventional PCR {#Sec14}
------------------------------------------------------
A total of 8 samples (three lung tissues, three pleural fluids, and two bronchial lymph nodes) collected from three clinically affected goats that tested positive in the cELISA were analyzed by conventional PCR. Upon PCR amplification of the genomic DNA from the 8 samples and controls using species-specific *Mccp* primers, *Mccp* was detected in 6 (75%) samples.
The specimens that tested positive included the three lung tissues (lanes 1--3) and the three pleural fluids (lanes 5, 6, and 8), whereas the two bronchial lymph node samples (lanes 4 and 7) tested negative. The results of the PCR analysis are depicted in Fig. [5](#Fig5){ref-type="fig"}. The fragment size of the amplified products was 316 bp.

Fig. 5 Agarose gel electrophoresis of PCR products (316 bp) amplified with *Mccp*-specific primers. Lane M: 100 bp DNA molecular weight marker; lane P: positive control; lane N: negative control; lane E: extraction control; lanes 1--8: samples
Discussion {#Sec15}
==========
The main objective of this study was to estimate the sero-prevalence and confirm the presence of CCPP in selected districts of Borana zone. The study revealed that CCPP is a major health constraint of goats in Borana pastoral areas. This was confirmed by a general sero-prevalence of 31.2% and by the detection of *Mccp* in the samples collected from the three suspected cases. This is the first time the disease has been confirmed directly from clinically affected goats in the study area. It has been shown previously that several outbreaks of CCPP reported in the country were from Oromia, the majority of which were from Borana (MoLF [@CR24]). Those previous reports of outbreaks were based on clinical signs. This study, however, confirmed CCPP cases with molecular techniques and provided reliable information on the presence of *Mccp* in the Borana area. This has important implications for the wellbeing of the pastoral community.
The overall sero-prevalence of 31.2% reported in unvaccinated goats in this study shows that *Mccp* has become established and is circulating in the area. For an unvaccinated goat population this figure is high and requires the attention of the veterinary and livestock authorities of the area to minimize the effect CCPP has on livelihoods in the community. The overall prevalence of CCPP in the present study was higher than the national prevalence estimated from pooled sero-prevalence (25.7%) through a systematic review by Asmare et al. ([@CR2]) and is largely in agreement with previous findings from Ethiopia (Lakew et al. [@CR19]) in which 31.6% of goats in Borana were found to be positive for CCPP. Similar observations were also made earlier in goats at an export abattoir at Bishoftu, Ethiopia (Eshetu et al. [@CR10]), and Southern Ethiopia, in Tigray and Afar (Hadush et al. [@CR13]), and in Beetal goats in Pakistan (Sherif et al. [@CR35]; Hussain et al. [@CR16]). Thus, our findings show that little has changed over the years, and the efforts made to control the disease with vaccinations have not resulted in sufficient vaccination coverage to prevent spread or contain the disease. This was also reflected by the fact that it was easy to find villages in which goats had not been vaccinated against CCPP.
In contrast to our findings, a lower prevalence of CCPP has been reported earlier from different parts of Ethiopia (Yousuf et al. [@CR44]; Tesfaye et al. [@CR36]; Mekuria et al. [@CR23]; Mekuria and Asmare [@CR22]; Aklilu et al. [@CR1]; Regassa et al. [@CR30]). Lower CCPP sero-prevalence has also recently been described in Pakistan (Shahzad et al. [@CR34]; Wazir et al. [@CR40]). On the other hand, higher sero-prevalences of 44.5%, 47.3%, and 51.8% were reported from the Dire Dawa, Afar, and Oromia regions of Ethiopia, respectively, by Gizawu et al. ([@CR12]). Hadush et al. ([@CR13]) also reported higher prevalences of 38.6% and 43.9% from the Afar and Tigray regions of Ethiopia, respectively. In other parts of the world, prevalences higher than our observation have been documented in Beetal, Pakistan (Shahzad et al. [@CR33]), Tanzania (Mbyuzi et al. [@CR21]; Nyanja et al. [@CR26]), Kenya (Kipronoh et al. [@CR18]), Uganda (Atim et al. [@CR3]), and Turkey (Cetinkaya et al. [@CR7]). An international collaborative study done by Peyraud et al. ([@CR28]) also reported sero-prevalences of 6 to 90%, 14.6%, 16%, 10.1%, 0%, and 2.7--44.2% from Kenya, Ethiopia, Mauritius, Tajikistan, Afghanistan, and Pakistan, respectively, using a monoclonal antibody--based cELISA. The observed variation in sero-prevalence reported from different studies may be due to differences in husbandry practices, agro-ecology, vaccination history, sampling methods applied, and sample size used.
In our study, the sero-prevalence of CCPP was significantly lower in Elwoya than in Moyale and Yabello. This observation agrees with the reports of Wazir et al. ([@CR40]) who reported significantly different prevalence among geographical areas. However, it is contrary to the previous findings (Eshetu et al. [@CR10]; Hadush et al. [@CR13]; Sherif et al. [@CR35]). The higher prevalence in Moyale and Yabello compared to Elwoya observed in this study could be due to differences in frequency of animal movement in the districts. Moyale is a district bordering Kenya. There is free movement of animals between the two countries in search of market and pastures. Pastoralists in the area often cross the border for marketing purposes as well as in search of feed and water mostly during the dry season and during droughts. There is also free movement and contact with animals from neighboring Somali pastoralists in Moyale. Yabello is the center of Borana zone, where animals from surrounding PAs are being moved to for veterinary services and marketing. Therefore, the higher prevalence of CCPP in these two districts is probably due to animal movement for marketing and in search of water and pasture.
The serological test results showed the presence of anti-*Mccp* antibodies in all age groups of goats and sheep. However, the results of the sero-prevalence study showed that age had a significant effect on the occurrence of infection with *Mccp* in the Borana zone, reflecting the fact that older animals have had more chances to be exposed to the pathogen. This observation is consistent with the findings of Aklilu et al. ([@CR1]), who reported that adult goats were 1.84 times more likely to be sero-positive than kids. Our findings also agree with the reports of Mekuria and Asmare ([@CR22]), Bekele et al. ([@CR5]), Yousuf et al. ([@CR44]), Sherif et al. ([@CR35]), Nyanja et al. ([@CR26]), and Lakew et al. ([@CR19]), who observed significant variation among age groups. However, the findings of this study contradict the works of Gizawu et al. ([@CR12]), Nicholas ([@CR25]), Eshetu et al. ([@CR10]), and Hadush et al. ([@CR13]), who observed no association between age and the occurrence of CCPP.
In this study, sheep kept along with goats were found to be sero-positive in all PAs except Areri. That is, sheep in contact with infected goats were sero-positive. In agreement with our observation, previous authors have shown that sheep were sero-positive in different parts of Ethiopia. For instance, 13% of sheep were found sero-positive by Dawit ([@CR8]), 7.14% by Gelagay et al. ([@CR11]), and 47.6% by Hadush et al. ([@CR13]). In Tanzania, sero-prevalence estimates of 36.7% and 22.9% from sheep serum were reported by Mbyuzi et al. ([@CR21]). In addition, there are reports describing the isolation of *Mccp* from sheep with respiratory disease returning to Eritrea with refugees from Sudan (Houshaymi et al. [@CR15]), from healthy sheep in Kenya that had been in contact with goat herds affected by CCPP (Litamoi et al. [@CR20]), from sick sheep mixed with goats in Uganda (Bolske et al. [@CR6]), and elsewhere in the world from lung and nasal swabs of sheep by Cetinkaya et al. ([@CR7]). This raises questions on the role of sheep as a reservoir contributing to the maintenance and transmission of *Mccp*. The exact role of sheep in the maintenance and spread of *Mccp* to goats needs to be further investigated.
Our findings of gross CCPP lesions at postmortem, which revealed lung exudate containing large clots of fibrin, adhesion of the lungs to the thoracic wall, froth in the trachea, enlarged bronchial lymph nodes, and pneumonic lung tissues, are similar to those of the previous study of Wesonga et al. ([@CR41]), who reported the lesions of classical CCPP caused by *Mccp*. These observations also match the findings of others (OIE [@CR27]; Sadique et al. [@CR31]).
In conclusion, the present study revealed the prevalence of CCPP in the Borana pastoral area.
The causative agent of CCPP, *Mycoplasma capricolum* subspecies *capripneumoniae*, was identified and confirmed by PCR. The study also showed that sheep were infected with *Mccp*, with a sero-prevalence of 12.9%. Based on our study and previous ones, it is clear that CCPP represents a priority for goat farming, and more coordinated efforts are needed to prevent the disease and mitigate its impact. In addition, further studies on the economic impact of the disease on the production performance of goats, and studies focused on the molecular characterization of the circulating strains in both sheep and goats using larger sample sizes, should be done.
The authors are grateful to National Animal Health Diagnostic and Investigation Center (NAHDIC) Sebeta, as well as to National Veterinary Institute, Bishoftu, for the technical assistance in the laboratory works. We also would like to acknowledge all of the pastoralists who generously participated in this study.
This study received financing from the International Livestock Research Institute and Oromia Agricultural Research Institute.
Ethical considerations {#FPar1}
======================
"Ethical clearance on the use of sheep and goats for this study was obtained from animal research ethics review committee of Addis Ababa University, College of Veterinary Medicine and Agriculture, before the start of this study. All procedures performed in studies involving animals were in accordance with the ethical standards of the institution. All applicable international, national, and/or institutional guidelines for the care and use of animals were followed." The owners of sheep and goats used in this study and the local administration were informed about the study and the owners revealed their consent in the presence of administrative bodies.
Competing interests {#FPar2}
===================
The authors declare that they have no competing interests.
|
26 Luxury Gray Damask Wallpaper
26 Luxury Gray Damask Wallpaper - You can see and find a picture of 26 Luxury Gray Damask Wallpaper with the best image quality at "Arteemark.com". Find out more about 26 Luxury Gray Damask Wallpaper which can make you become more happy.
Arteemark.com provides an awesome collection of high-definition 26 Luxury Gray Damask Wallpaper pictures, images, photos and wallpapers. Download this 26 Luxury Gray Damask Wallpaper collection for free; the files are delivered in high definition, 3636x3636 pixels.
Browse similar 26 Luxury Gray Damask Wallpaper images, pictures, wallpapers and photos in the Home Design archive. This 26 Luxury Gray Damask Wallpaper picture, image, photo or wallpaper has been viewed by 10 users.
brown damask wallpaper teal roll wallpaper decor the home depot damask wall art new wall decal luxury 1 kirkland wall decor home elegant white victorian wallpaper and 6 damask wallpaper damask fabric damask pattern designer damask download new gray damask wallpaper elegant white victorian wallpaper and 6 63 good graph metallic damask wallpaper seafoam green wallpaper luxury although preppy wallpaper 0d – best gold kensington charcoal metallic wallpaper wallpaper
Patina Vie Zara Damask Wallpaper Beige from Gray Damask Wallpaper, source: pinterest.ch
damask light floral pattern for wallpapers and background vector henderson interiors chelsea glitter damask wallpaper soft grey patina vie zara damask wallpaper beige black painted bedroom black and white bedroom damask wallpaper 50 beautiful grey metallic wallpaper 50 awesome black and silver wallpaper abigail soft green wallpaper good to see tb devil damask fabric in black on grey on the print superfresco easy blue venetian wallpaper blue damask wallpaper if we can t be classy we can fake classy |
Cryptocurrencies, or should we call them digital money? The introduction of cryptocurrency has challenged the use of fiat currencies. It is the complete opposite of traditional money. Cryptocurrency is based on the idea of blockchain technology, which creates a digital ledger for all kinds of assets, whether tangible property, money, stocks or vehicles. Several cryptocurrencies have been designed and launched to date. Bitcoin happens to be the most popular of them all.
The main hype behind using cryptocurrencies is the benefits they provide. Carrying out transactions with crypto is not only safe but also fast. In other words, it provides you with benefits that traditional money cannot. It is for this reason that many businesses and investors have shifted to the use of cryptocurrencies. However, the technology is still under development and has a long way to go.
If you are into using cryptocurrency, you should probably take a look at the text below, as we discuss a unified cryptocurrency exchange project named JAVVY.
What is JAVVY?
JAVVY is a unified crypto exchange and wallet that provides a common framework for buying and selling cryptocurrencies. Users of JAVVY will be able to hold a cryptocurrency, or convert or sell it into any other cryptocurrency they want. The idea of JAVVY was conceived to deal with the problems faced by the crypto exchanges and wallets out there.
Currently, buying and selling cryptocurrencies is a challenging task. This is mainly because there are few, if any, reliable cryptocurrency exchanges available. Also, registering with a crypto exchange is hard because of anti-money-laundering laws. In addition, even the crypto wallets are not good enough: they are basic and have limited functionality. All of these problems are taken care of by JAVVY.
The main aim of JAVVY is to provide an intuitive and easy-to-use cryptocurrency wallet. It aims to create a crypto wallet that is fully functional and feature-rich. The JAVVY team is dedicated to developing a crypto wallet for all, so that everyone has the liberty to buy and sell cryptocurrencies without any issues. They aim to create a wallet that supports many cryptocurrencies as well as various payment methods. Not just that, but they are also going to prioritize security above all. Users will also be able to convert their cryptocurrencies.
Features
JAVVY is, according to its team, the best crypto wallet and exchange out there, and its features add to its appeal. The wallet is loaded with features, which makes it one of the most reliable crypto wallets and exchanges available. Users can also enjoy a number of benefits by using the JAVVY wallet.
The best thing about JAVVY is that it is available on all platforms: it is supported on Windows, Linux, macOS, Android, and even iOS. Users can download JAVVY from the app store or its website. Not just that, but it is also available in different languages. The wallet has a user-friendly interface along with excellent security options. With the help of this crypto wallet, users can hold numerous cryptocurrencies. Another great feature of the wallet is that it allows you to use cryptocurrencies like cash at any location that accepts a debit card. You can even convert your cryptocurrency instantly.

The wallet also provides ICO/STO support to its users, and you can receive payments and mining rewards directly in your JAVVY wallet. The wallet is based on a decentralized platform and doesn’t hold any of your private keys. It also comes with variable levels of security, which makes it reliable to use. The wallet has robust login features along with an additional layer of security features, and it provides you with the option of multiple signatures.
Considering all these features, it is clear that the JAVVY team is focused on making the JAVVY crypto wallet and exchange the best in the market. No other crypto wallet or exchange comes with this many features. Also, the good thing about the wallet is that it has excellent security.
Tokenization
JVY is an ERC-20 utility token of the Javvy platform, which is built on the Ethereum blockchain. The utility token has two primary use cases. First, the JVY token can be used to pay fees within the JAVVY exchange and wallet, and users receive a 50% discount on their transaction fees.
The other use case is JVY token staking, where users can earn a fixed percentage of interest in the form of additional JAVVY tokens.
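As a rough illustration of the two use cases, the arithmetic looks like the sketch below. The 50% discount comes from the article; the base fee rate and the staking interest rate are placeholders, since this article does not quote the actual figures:

```python
# Placeholder figures for illustration only
trade_value = 1_000.00   # value of a trade in USD
base_fee_rate = 0.002    # assumed 0.2% exchange fee (not from the article)
jvy_discount = 0.50      # 50% fee discount when paying fees in JVY (from the article)

fee_plain = trade_value * base_fee_rate
fee_with_jvy = fee_plain * (1 - jvy_discount)
print(f"fee without JVY: ${fee_plain:.2f}, fee paid in JVY: ${fee_with_jvy:.2f}")

# Staking: a fixed interest rate paid out in additional tokens (rate is assumed)
staked_tokens = 10_000
annual_interest = 0.05
print(f"tokens earned after one year of staking: {staked_tokens * annual_interest:.0f}")
```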
Team Members
JAVVY is supported by a dedicated team who are determined to make the crypto wallet and exchange the best in the world. The core members of the team include:
Brandon Elliott: founder and CEO of the company.
Frank Grogan: co-founder and chief marketing officer of the company.
Beyond them, the team comprises other talented members who contribute equally to the development of this unique crypto wallet and exchange. JAVVY wouldn’t have reached such a position without the joint effort of all the members of the team.
Conclusion
To conclude, it can be stated that JAVVY is one of the most reliable crypto exchanges and wallets that has been introduced to cryptocurrency users. The wallet has high-end security features along with many other useful features, and all of them together contribute to its appeal. The wallet is supported on all the major platforms. Not only that, but it also supports many types of cryptocurrencies and provides you with the option of converting your cryptocurrencies instantly. In addition, you can use your cryptocurrency like cash at any location that accepts a debit card.
To learn more, visit their official website or download their whitepaper.
Article Creator: cryptomagis |
PIERRE, S.D. (Press Release) – A solution in search of a problem – that’s what Gov. Kristi Noem’s proposed legislation amounts to. The legislation would ban transgender women and girls from competing on the sports teams that match their gender identity and forbid their participation in both high school and collegiate athletic activities.
The ACLU of South Dakota opposes Noem’s proposed legislation, an attack on transgender women and girls. The draft legislation violates both the United States Constitution and Title IX of the Civil Rights Act, which protects all students – including those who are transgender – from discrimination based on sex.
“Gov. Noem’s proposed legislation is clearly fueled by a fear and misunderstanding of transgender people in our state,” said Jett Jonelis, ACLU of South Dakota advocacy manager.
If Noem’s proposed legislation is any indication of what’s to come during the 2022 legislative session, discriminatory rhetoric will again take precedence over issues that South Dakotans really care about.
“With serious issues like education funding and tenuous state-tribal relations, it’s disturbing that we keep coming back to the same discriminatory issues year after year,” Jonelis said. “Nobody wins when politicians try to meddle in people’s lives like this. Nobody wins when we try to codify discrimination like this. Legislation like Noem’s proposed bill has been discussed and defeated before. It’s time to move on.”
South Dakotans agree.
A 2020 Human Rights Campaign survey found that 69 percent of South Dakota voters say that “legislators are too focused on divisive issues and should be focusing on pressing issues that will actually have an impact on South Dakotans, like growing the economy.” Additionally, nearly two-thirds of voters say, “we need to stop stigmatizing transgender people as a society.”
This is the eighth attempt by South Dakota lawmakers to prevent transgender athletes from competing. After the SDHSAA enacted its inclusive transgender sports policy, lawmakers tried to meddle with the association’s authority, first with House Bill 1161 in 2015 and then with House Bill 1111 in 2016. Four additional bills – House Bill 1195 in 2015, House Bill 1112 in 2016 and Senate Bill 49 and House Bill 1225 in 2019 – would have restricted participation in high school athletic activities to the gender listed on a person’s birth certificate. All bills were killed. Last year, lawmakers failed to override Gov. Noem’s veto of House Bill 1217. |
It may not have been the anticipated signing of the summer, but the Dallas Cowboys and linebacker Jaylon Smith have agreed to a 5-year contract extension which will pay the fourth-year star $64 million with $35.5 million guaranteed. Smith was scheduled to make just over $1.3 million in base salary in 2019 and count a little over $2 million against the salary cap.
Smith’s new average of just under $13 million per year is far less than the contracts inked by C.J. Mosley and Bobby Wagner earlier in the offseason, which set the market at $17 million and $18 million per year, respectively.
“Not knowing if I would get drafted, not knowing if I would play football again. A lot of people didn’t know. I knew,” said Smith in announcing the extension. “It’s a long term thing. For me it’s about being a Dallas Cowboy for life.”
Smith started all 16 games for the Cowboys in 2018, logging 121 combined tackles as their newly installed middle linebacker, and didn’t disappoint. He also notched four sacks, four pass deflections and two forced fumbles, including a 69-yard touchdown return that was a video homage to his inspiring journey, with brother Rod Smith racing down the sidelines in pure pride and euphoria.
After a scary collegiate injury threatened to derail a promising career, Smith’s faith and determination allowed him to not only resume it, but return to the physical excellence that was his calling card before blowing out his knee. And while the team has failed to get in front of contract extensions for their other young stars, they certainly aren’t making the same mistake with Smith.
The Cowboys front office has had a busy offseason, working on new contracts for numerous stars whose contracts are set to expire either at the end of the 2019 or 2020 seasons. Smith fell right in the middle of those two categories. Technically his four-year contract would end after 2019, but because he didn’t play as a rookie, this would be just his third season towards free agency. Without the extension, Smith would have been a restricted free agent (RFA).
RFAs are allowed to negotiate on the open market, but the original team (provided they extended an offer known as a tender) would have a right of first refusal. Depending on which of the three financial levels of the tender, the Cowboys could receive draft-pick compensation if they allowed Smith to leave. That wasn’t very likely.
For all of the recent discussion about whether Jerry Jones and the Cowboys organization have gone above and beyond in support of running back Ezekiel Elliott, there is no question when it comes to Smith. The former Notre Dame star was destined for a top-10 pick in the 2016 draft, and possibly a top five one when he blew out his knee in a January bowl game. The injury was devastating, not only tearing multiple ligaments but also causing severe nerve damage.
“He has never wavered, he has never complained, he has never missed a workout, he has never quit… His story is one I would have done anything to make him a Dallas Cowboy,” owner Jerry Jones said in announcing the extension. “I will assure you, this was about team. Our goals are to build the best team, and it takes cooperation from both parties. He is worth 1,000 words of anything we can say about contracts and the Dallas Cowboys.”
Many wondered if Smith would ever play again, but the Cowboys — whose team doctor performed the reconstructive surgery — believed in the prognosis and the player, taking him with the 34th overall selection. After redshirting his rookie season, Smith’s biggest concern remained the drop-foot condition, a result of the nerve damage that kept him from being able to hold a raised foot parallel to the ground without the toes drooping.
Smith’s giddy-up was hindered and he had major physical limitations in 2017, when he started six games but appeared in all 16. He was forced into a bigger role because of other injuries at the position, but the work helped him peel off the rust and get clear of the knee and foot impediments.
When he hit the field in 2018, Smith was dynamic and it resulted in outstanding play. He ascended to being one of the game’s top linebackers, and his arrow is still pointing upwards, leading the Cowboys to want to lock him in as both a reward and insurance against rising premiums.
Part of the reason Smith’s deal happened before others was for the amount of money he was willing to accept. The deal doesn’t set the market for linebackers, it isn’t even in the top 5.
Smith made it clear during training camp, he was more focused on securing his status as a lifelong member of the Cowboys.
Smith is the second major defensive star Dallas has signed to a long-term deal this offseason, joining defensive end DeMarcus Lawrence, who was inked to a five-year, $105 million deal in the spring. Smith has been part of what may eventually be considered the greatest draft class in team history. Also selected in 2016 were Elliott, quarterback Dak Prescott, defensive tackle Maliek Collins and cornerback Anthony Brown. All five are projected to start in 2019.
(Photo: RSAF) Saudi Arabia's Saqr 1 UAV.
Saqr 1, Saudi Arabia's first indigenous medium-altitude, long-endurance (MALE) unmanned aerial vehicle (UAV), shows strong Chinese influence in its design and is armed only with Chinese guided missiles and smart bombs.
The UAV will see action against Iranian-backed Houthi rebels in Yemen fighting against a Saudi-led coalition of Sunni Arab nations.
Saqr 1 was first revealed to the public a few days ago at the King Abdulaziz City for Science and Technology (KACST) in Riyadh. KACST President Prince Turki bin Saud bin Mohammed said Saqr 1 is equipped with a Ka satellite communication system and with thermal imaging and surveillance technology for weapon guidance.
The UAV has a range of 2,500 km and can reach an average altitude of over 6,000 meters. It can fly non-stop for 24 hours powered by its Rotax 914 (95 hp) engine. Saqr 1 is 9.2 meters in length and has a wingspan of 18 meters.
In appearance, Saqr 1 resembles China's CH-4 (Cai Hong-4) UAV operated by the Royal Saudi Air Force.
Like Saqr 1, CH-4 is also a MALE UAV and both are armed with the same Chinese-made munitions.
These weapons are the AR-1 semi-active laser-guided missile and the FT-9 guided bomb. AR-1 is used against tanks and armored vehicles, and can also penetrate buildings. It can pierce 1,000 mm of armor plate.
AR-1 has an effective range of eight kilometers and a circular error probable (CEP) of only 1.5 meters.
On the other hand, the FT-9 smart bomb weighs 50 kg, and is guided to its target by a combination GPS/INS system. These weapons are mounted on two hardpoints capable of a combined payload weighing 250 kg.
CH-4 looks remarkably similar to the U.S. MQ-9 Reaper UAV built by General Atomics. Some American experts believe the striking resemblance is another example of Chinese spying put to use in the real world.
The reason for the CH-4's popularity in the Middle East and Africa is its price: a CH-4 costs only $1 million compared to the price tag of $30 million for the MQ-9 Reaper.
We have been fielding a lot of questions about CPO’s next version that is due out very soon. The work is still fast and furious here with a private beta release date of March 31 looming large. There are a few things that current CreativePro Office users should know as we finally enter the release phase of this long project.
We have decided to do a private beta release of CreativePro Office 2.0 on March 31 with a public release shortly thereafter.
This is an honest question and the answers are many. Mainly, the scope of work was much larger than anticipated. In retrospect, we may have tried to cram too many features into the initial 2.0 release. But, since version 1.0 has a pretty complete feature set, we knew we had to match and exceed that benchmark, which I believe we have done.
Too many to list here but the ones we’re most excited about are:
Yes and no. CreativePro Office 2.0 will have a free option among several premium, paid options. We will release the official pricing structure very soon.
No. We are allowing a 6 month grace period after launch for existing users to migrate to a free or paid plan on version 2.0. If you decide that you do not wish to continue using CPO we will provide a data extract feature so that you can get your data out of the system and migrate it to another service.
No. Version 1.0 will not accept any new subscriptions after April 15. Existing users of version 1.0 will have 6 months from the public release date to migrate over to a CreativePro Office 2.0 subscription.
Yes, but it will be available after the public hosted release. The reason is that we have to do some tweaking to the self-hosted version to ensure a smooth installation experience for the widest variety of users.
No, the price point of version 2.0 will be higher. We will release official prices within a week.
CPO has been developed to be as intuitive as possible to use. However, we realize that support in many forms is necessary for a satisfied user base. The forum and priority email (for subscribers) will be the primary means of helping our users find their way around CreativePro Office 2.0. Tutorials and help files will be built out as we see a need. For example, if we see that many users are having trouble with file uploads, we’ll create some help content around that specific issue. We are definitely going to create some comprehensive help content for the self-hosted, downloadable version though.
We have long fielded the request for different language versions of CPO. Our plans are to include over 10 translations in version 2.0. We already have the translations more or less sketched out but we will be working as much as we can with native speakers to fine tune the translations.
No but we have added over 45 currencies to version 2.0 and will add more at your request.
Our plan for CreativePro Office is to continue to integrate more and more features over time as this application becomes profitable. While we might not be able to give you a specific date for a specific feature roll out, be confident that we have many improvements in store for the future.
Thanks for reading. Please let us know if you have any other questions.
Dear trusty, faithful users and new users alike,
Despite periodic silence, work still ticks on here in the development of 2.0. The silence is easily explained: development is, in our case, a process that is more often just done and less often talked about. To put it succinctly, we are working.
On our part there has been a little bit of dismay over watching the date we intended to release come and go. In reflection on what is causing the delay, Jeff realized a few things:
1. The scope of work for version 2.0 has increased significantly since it was originally conceived. Originally, 2.0 was to be a rewrite of the code so that future added features would have a stable code-base. But what happened was that while in the code, tinkering, it seemed more logical (and more interesting) to start the process of making some of the more major improvements that were slated for the future.
2. The amount of anticipated collaboration on version 2.0 decreased by about 66%. A smaller team meant more work for those who remained.
3. The hosting situation caused a bit of a slowdown for a while but has since been rectified.
As we saw the date of anticipated release approaching, there were signs that things weren't going to work out as planned. The release date was too ambitious, based on a "best case scenario" when excitement was high, and the scope of work was smaller.
Now, a year after beginning, Jeff is just pushing for that next release date. We had considered releasing with what we had completed, but that would have fallen short of what we had told users to anticipate. We are setting the date to the end of February 2010, which seems entirely reasonable since version 2.0 is 80% complete.
The February release is planned to be beta tested and ready for use.
Thank you for your patience; we hope that you will continue looking forward to this as much as we are!
If we haven't said so before, a big THANK YOU to all the buyers of the CPO Source Code. Your purchases are lighting the fire under our pants to keep improving CreativePro Office daily.
There are source code patches available to amend some of the bugs that our much appreciated bug testers have found.
The link is here:
We are currently working on CreativePro Office 2.0 that we hope will be available by the end of March and possibly earlier. We are excited about this release as it will be the "new improved" version with more features that users have been requesting and numerous bug fixes. |
Bayesian neural networks with variable selection for prediction of genotypic values
Genetics Selection Evolution volume 52, Article number: 26 (2020)
Abstract
Background
Estimating the genetic component of a complex phenotype is a complicated problem, mainly because there are many allele effects to estimate from a limited number of phenotypes. In spite of this difficulty, linear methods with variable selection have been able to give good predictions of additive effects of individuals. However, prediction of non-additive genetic effects is challenging with the usual prediction methods. In machine learning, non-additive relations between inputs can be modeled with neural networks. We developed a novel method (NetSparse) that uses Bayesian neural networks with variable selection for the prediction of genotypic values of individuals, including non-additive genetic effects.
Results
We simulated several populations with different phenotypic models and compared NetSparse to genomic best linear unbiased prediction (GBLUP), BayesB, their dominance variants, and an additive by additive method. We found that when the number of QTL was relatively small (10 or 100), NetSparse had 2 to 28 percentage points higher accuracy than the reference methods. For scenarios that included dominance or epistatic effects, NetSparse had 0.0 to 3.9 percentage points higher accuracy for predicting phenotypes than the reference methods, except in scenarios with extreme overdominance, for which reference methods that explicitly model dominance had 6 percentage points higher accuracy than NetSparse.
Conclusions
Bayesian neural networks with variable selection are promising for prediction of the genetic component of complex traits in animal breeding, and their performance is robust across different genetic models. However, their large computational costs can hinder their use in practice.
Background
The biochemical mechanisms that underlie phenotypes work through non-linear interactions between molecules and proteins. Nevertheless, in the practice of animal breeding, additive prediction methods, which assume that phenotypes depend on markers individually and without interactions between them, have been successful. The U-shaped allele frequency distribution of causal loci explains why these microscopic interactions give rise mainly to additive genetic variance at the trait level [1], and therefore explains the success of additive methods. However, traits still have an epistatic component, and therefore methods that can fit more than the additive genetic component have the potential to better predict genotypic values and phenotypes of animals.
In reality, the causal variants of a trait are not necessarily among the markers that are used for genomic prediction, and therefore not all markers may aid in the prediction of genetic values. Prediction methods may thus be improved if they allow a proportion \(\pi\) of the total number of markers to be modeled as irrelevant for phenotype prediction. Additive methods such as BayesB [2] and BayesC\(\pi\) [3] allow for this variable selection of markers (sparsity). Depending on the genetic architecture of the trait, additive Bayesian variable selection methods may have a small advantage over genomic best linear unbiased prediction (GBLUP) [4,5,6].
Parametric methods assume a specific functional form. Members of this family of methods are the additive and dominance methods of quantitative genetics. In addition, parametric methods assume that genetic variance can be decomposed orthogonally into additive, dominance, additive \(\times\) additive, dominance \(\times\) additive etc. variance components. This orthogonal decomposition is valid only under restricted assumptions such as linkage equilibrium, random mating and no inbreeding [7]. Since these assumptions are invalid in practice, prediction methods that do not make them have the potential to obtain better predictive performance of traits than methods that do.
Non-parametric methods assume neither a particular form of the unknown relation from genetic material to genotypic values, nor the aforementioned partitioning of genetic variance. Because these models do not distinguish between the different variance components, they predict genotypic values instead of breeding values, where breeding values correspond only to the additive component of genotypic values. In genetics, reproducing kernel Hilbert space regression [8] and neural networks have been studied as non-parametric methods for the prediction of phenotypes. Neural networks, in particular, are powerful and interesting non-parametric methods because they can approximate any function, and current software packages make them easy to use.
Neural network models that have been investigated in animal breeding include Bayesian regularized artificial neural network (BRANN) [9, 10], scaled conjugate gradient artificial neural network (SCGANN) [10] and approximate Bayesian neural network [11] methods. BRANN is a neural network method that avoids overfitting by means of Bayesian regularization through prior distributions, and it achieved higher accuracy than additive methods for prediction of milk production in Jersey cows. For marbling score in Angus cattle, BRANN had a higher predictive accuracy than both additive methods and SCGANN, which is a neural network method without Bayesian regularization. These results imply that Bayesian regularization can have a benefit for prediction of traits. BRANN has also been used for ranking markers based on their impact on the network [12]. While this approach can help to identify the most important markers, it does not promote sparsity during inference, since the ranking is performed as a separate step after inference.
There are several neural network methods that try to achieve sparsity with different regularizations on the weights (parameters of the neural network). For example, \(\ell _1\) (Lasso) regularization causes as many weights as possible to reach zero and has previously been studied in animal breeding [13], and group Lasso, which allows for pruning weights in groups instead of individually, has been studied for image classification [14]. Sparsity based on an \(\ell _0\) regularization for individual weights has also been studied for image classification [15]. These approaches are based on Maximum a Posteriori inference, and typically focus on sparsity of all network nodes or weights, rather than sparsity of inputs (markers) specifically.
In summary, there are phenotype prediction methods that allow for variable selection of markers, and there are non-parametric methods that allow non-additive effects of individuals to be fitted. However, there are no methods that allow for both. In this study, we introduce a method called NetSparse that fills this gap by combining Bayesian variable selection with Bayesian neural networks for the prediction of total genotypic values.
In the Methods section, the framework for Bayesian phenotype prediction is set up and we explain how both additive Bayesian methods and NetSparse fit within this framework. The Simulations section describes the simulation of the data used to compare NetSparse with other methods. In the Results section, we compare NetSparse to the reference methods GBLUP, BayesB, GBLUP-AD, BayesB-AD and GBLUP-ADAA.
Methods
First, we set up a general Bayesian framework for phenotype prediction methods; then we describe how the different methods considered fit within this framework.
Bayesian phenotype prediction
We assume that N individuals \((i=1,\ldots ,N)\) are phenotyped and genotyped, such that the observed phenotype \(y_i\) is the sum of a genotypic value \(g_i\) and a residual \(e_i\). In addition, we assume that the genotypic value of individuals can be computed from their marker genotypes by a function \(f(\cdot ; \mathbf {u})\), depending on unknown parameters \(\mathbf {u}\) (different for each method), and that the residuals \(e_i\) follow a normal distribution with mean 0 and variance \(\sigma _e^2\), which we write as \(e_i\sim \mathcal {N}\left( 0,\sigma _e^2\right)\). These assumptions lead to the following model: \(y_i = f\left( \mathbf {x}_i; \mathbf {u}\right) + e_i\) (1),
where \(\mathbf {x}_i=\left( x_i^1\,x_i^2\,\ldots \,x_i^P\right) ^{\intercal }\) is the vector with marker genotypes of individual i, the exact encoding of which depends on the method. We gather the vectors in the matrix \(\mathbf {X}=\left( \mathbf {x}_1\,\mathbf {x}_2\,\cdots \,\mathbf {x}_N\right)\). Model (1) implies that the likelihood of the data is:
The likelihood is combined with a prior distribution \(p\left( \mathbf {u}\mid \eta \right)\), depending on hyperparameters \(\eta\), which indicates what a priori are considered to be plausible values of \(\mathbf {u}\) that could have generated our data.
In our terminology, we use the term “parameter” for \(\mathbf {u}\); these are the variables that directly determine the model predictions. The term “hyperparameter” is used for \(\eta\); these are the parameters that only indirectly influence the predictions of the model. The hyperparameters \(\eta\) and \(\sigma _e^2\) can crucially influence the performance of a model. There are roughly two approaches for these hyperparameters: they can either be estimated or integrated out. For the GBLUP models in particular, where the hyperparameters are the variance components (see Note 1), this estimation can be done for instance with restricted maximum likelihood (REML) [16]. Alternatively, if estimation of these hyperparameters is difficult, they can be given prior distributions and integrated out together with the other parameters via Markov chain Monte Carlo sampling (see the “MCMC sampling” section).
The joint distribution of \(\mathbf {y}\) and \(\mathbf {u}\) conditioned on \(\mathbf {X}\) is the product of the likelihood and prior distribution:
The posterior distribution \(p\left( \mathbf {u}\mid \mathbf {X},\mathbf {y}\right)\) via Bayes’ Theorem is then:
This posterior distribution is the distribution over \(\mathbf {u}\) obtained by combining the information in the prior distribution and the information in the observed data.
Model (1) can be used to make phenotype predictions \(y_*\) for an individual with markers \(\mathbf {x}_*\) by computing the posterior predictive distribution:
The expected value of \(y_*\) with respect to the posterior predictive distribution is:
Bayesian inference
Using the previous framework, we will briefly describe the five methods that we used for reference (GBLUP, BayesB, GBLUP-AD, BayesB-AD and GBLUP-ADAA), as well as our new method, NetSparse. Of these methods, GBLUP [17] was chosen because it is the most used in practice. We chose BayesB [2] because it is a common method that includes variable selection, similar to NetSparse.
GBLUP
The SNP-BLUP model is an additive model where each marker p is assigned an additive effect \(a_p\) with shared prior variance \(\sigma _a^2\). Specifically, in SNP-BLUP, f is chosen as a linear function \(f_{\text {SNP-BLUP}}\left( \mathbf {x}; \mathbf {u}\right) = \mathbf {x}^{\intercal }\mathbf {a}\), with \(\mathbf {u} =\mathbf {a}\). The prior distribution over \(\mathbf {a}\) is \(p\left( \bf {a}\mid \sigma _a^2\right) = {\mathcal {N}}\left( \bf {a}\mid \bf {0},\sigma _a^2\bf {1}\right)\). The hyperparameters \(\sigma _e^2, \sigma _a^2\) are estimated with REML.
The posterior distribution over \(\mathbf {a}\) is:
where \(\mathbf {\Sigma }^{-1}=\sigma _e^2 \mathbf {1}+ \sigma _a^{2}\mathbf {X}^{\intercal }\mathbf {X}\). The matrix \(\mathbf {X}^{\intercal } \mathbf {X}\) is proportional to the additive genomic relationship matrix \(\mathbf {G}\). The posterior predictive distribution is also Gaussian:
This equivalent formulation of SNP-BLUP is called GBLUP; it works with the additive genomic relationship matrix \(\mathbf {G}\), computed from the markers, instead of with individual allele effects. For derivations of these formulas, see for instance [18]. We used the GBLUP implementation of MTG2 [19].
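As an illustration only (this is our sketch, not the MTG2 implementation, and it assumes the variance components are already known rather than estimated by REML), GBLUP prediction in its relationship-matrix form can be written in a few lines of NumPy:

```python
import numpy as np

def gblup_predict(X_train, y_train, X_valid, var_g, var_e):
    """GBLUP/SNP-BLUP prediction in its kernel (relationship-matrix) form.

    X_* : centered marker matrices (individuals x markers)
    var_g, var_e : genetic and residual variances (assumed known here;
                   in the paper they are estimated with REML in MTG2)
    """
    P = X_train.shape[1]
    G_tt = X_train @ X_train.T / P           # relationship matrix, training x training
    G_vt = X_valid @ X_train.T / P           # relationship matrix, validation x training
    lam = var_e / var_g                      # ridge parameter sigma_e^2 / sigma_g^2
    y_c = y_train - y_train.mean()
    # BLUP of genotypic values: g_hat = G_vt (G_tt + lam I)^-1 y_c
    alpha = np.linalg.solve(G_tt + lam * np.eye(len(y_train)), y_c)
    return y_train.mean() + G_vt @ alpha

# toy usage with simulated genotypes and an additive trait
rng = np.random.default_rng(1)
X = rng.binomial(2, 0.3, size=(700, 1000)).astype(float)
X -= X.mean(axis=0)                          # center markers
beta = rng.normal(0, 0.05, size=1000)
y = X @ beta + rng.normal(0, 1.0, size=700)
pred = gblup_predict(X[:500], y[:500], X[500:], var_g=1.0, var_e=1.0)
print(np.corrcoef(pred, y[500:])[0, 1])      # prediction accuracy on the toy data
```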
BayesB
The BayesB model, like SNP-BLUP, is an additive model where every marker is assigned an additive effect [2]. However, contrary to SNP-BLUP, BayesB also includes marker selection, which we will indicate by a (binary) marker selection vector \(\mathbf {s}\in \{0,1\}^P\). If an entry in this marker selection vector has the value 1, the corresponding marker is selected for inclusion in the model. If the entry is equal to 0 the marker is not selected. If the posterior distribution is concentrated at \(s^p=1\), then the p-th marker contributes significantly to phenotype prediction, while if most of the probability is concentrated at \(s^p=0\) then the p-th marker does not contribute significantly to phenotype prediction. In addition, there is a hyperparameter \(\pi\) that is equal to the proportion of non-contributing markers, i.e. \(\pi =1-\sum _p s^p/P\). SNP-BLUP, in contrast, assumes that all additive effects come from the same normal distribution, making it similar, but not exactly equal, to BayesB with \(\pi =0\). If a dataset contains relatively few quantitative trait loci (QTL), this mismatch should result in BayesB having a better performance than GBLUP.
Specifically, in BayesB, the function f is \(f_{\text {BayesB}}\left( \mathbf {x}; \mathbf {u}\right) = \mathbf {a}^{\intercal } \left( \mathbf {x}\odot \mathbf {s}\right)\), with \(\mathbf {u}=\left( \mathbf {a}, \mathbf {s}\right)\) and \(\odot\) the element-wise (Hadamard) product \(\left( \mathbf {x}\odot \mathbf {s}\right) ^p = x^ps^p\). For each marker p, marker effect \(a_p\) has prior distribution \(p\left( a_p\mid \sigma _{a_p}^2\right) =\mathcal {N}\left( a_p\mid 0,\sigma _{a_p}^2\right)\) and hyperprior distribution \(p\left( \sigma _{a_p}^2\right) =\chi ^{-2}\left( \sigma _{a_p}^2\mid \text {df}=5, S=\frac{3}{5}\right)\), \(p\left( s^p\mid \pi \right) = \pi ^{1-s^p}(1-\pi )^{s^p}\), and \(p(\pi )\,=\,\text {Unif}(\pi \mid 0,1)\). The expression for the posterior distribution over \(\mathbf {u}\) is:
and the expected value of the posterior predictive distribution is (see Note 2):
The last line comes from the identity \(\mathbf {a}^{\intercal }\left( \mathbf {x}_*\odot \mathbf {s}\right) =\left( \mathbf {a}\odot \mathbf {s}\right) ^{\intercal }\mathbf {x}_*\) and means that prediction can be obtained by averaging allele effects and then making predictions using those, instead of averaging predictions directly. The expectation value cannot be computed analytically, but it can be approximated by sampling (“MCMC sampling” section).
AD methods
The aforementioned additive methods can be adapted to fit additive and dominance effects. For the additive effects, markers were encoded as:
and for the dominance effects, markers were encoded as:
Note that for the GBLUP implementation and assuming Hardy-Weinberg equilibrium (HWE), the use of these encodings leads to additive and dominance relationship matrices as described in [20]. For the BayesB implementation, the two encodings are appended, such that every individual is represented by an array twice as long as for the additive models. Using GBLUP and BayesB with these longer arrays allows dominance to be fitted as well and we call the resulting methods GBLUP-AD and BayesB-AD [21, 22]. Because they explicitly model additive and dominance effects, these methods should work best on data where both additive variance and dominance variance are significant.
GBLUP-ADAA
The AD construction for GBLUP can be extended further to fit additive by additive epistasis (section Simulations), in addition to additive and dominance effects, by adding a third covariance matrix, given by \(\mathbf {G}\odot \mathbf {G}\). We call this method GBLUP-ADAA. As with GBLUP, the MTG2 software was also used for GBLUP-AD and GBLUP-ADAA.
NetSparse
In our NetSparse model (Fig. 1), f is chosen as a neural network with one hidden layer (see Note 3):
with \(\mathbf {u}=\left( \mathbf {W},\mathbf {w}, \mathbf {b}^h, b^o, \mathbf {s}\right)\). \(f_{\text {NetSparse}}\) is the output of the entire network, which depends on the hidden layer \(\mathbf {h}(\mathbf {x})\). The vector \(\mathbf {s}\) is a marker selection vector, as in BayesB. The parameters \(\mathbf {W}\in {\mathbb {R}}^{H\times P}\) and \(\mathbf {w}\in {\mathbb {R}}^H\) are called the weights, and \(\mathbf {b}^h\in {\mathbb {R}}^H\) and \(b^o\) are called the biases. The parameter H is the number of hidden units; increasing it gives the neural network more capacity to fit non-additive effects. In this study, as is typical for prediction of continuous outcomes, the output activation function g was chosen as the identity. For classification, a different transfer function, such as softmax, would be more appropriate, but such analyses fall outside the scope of this study. With sufficient computational resources, one would determine H via a cross-validation procedure. However, we did not have access to such resources, so we used \(H=20\), which allowed the model to fit complex non-linear interactions within reasonable computation time; a value of H larger than 20 led to an impractical increase in computation time.
Fig. 1 NetSparse: Schematic neural network representation of NetSparse (9). The input \(\mathbf {x}\) to the neural network is on the left, the output g is on the right. \(\mathbf {s}\) is the variable selection vector, \(\mathbf {W}\) and \(\mathbf {w}\) are the weights, \(\mathbf {b}^h\) and \(b^o\) are the biases. At the third layer of nodes, \(\tanh\) is applied to the sum of the incoming values.
A neural network can be interpreted as repeatedly taking linear combinations and elementwise application of an activation function (\(\tanh\)). Given \(\mathbf {x}\odot \mathbf {s}\), the value of \(h_i\) depends linearly on the j-th column of \(\mathbf {W}\), and given each \(\tanh \left( h_i\right)\), the output of the network depends linearly on \(\mathbf {w}\). The non-linearity from \(\tanh\) makes sure that the neural network can fit more than additive relations; if \(\tanh\), as well as g, was replaced by the identity function the network would only be able to fit linear functions.
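To make this description concrete, the following minimal NumPy sketch (ours, not the authors' code; all names are illustrative) implements the forward pass just described: the markers are masked by the selection vector \(\mathbf {s}\), passed through a \(\tanh\) hidden layer with weights \(\mathbf {W}\) and biases \(\mathbf {b}^h\), and combined linearly with weights \(\mathbf {w}\) and bias \(b^o\), with the identity output activation g.

```python
import numpy as np

def netsparse_forward(x, s, W, w, b_h, b_o):
    """One-hidden-layer network with marker selection, as described in the text.
    x : (P,) marker genotypes;  s : (P,) 0/1 selection vector
    W : (H, P) hidden weights;  b_h : (H,) hidden biases
    w : (H,) output weights;    b_o : scalar output bias
    """
    h = W @ (x * s) + b_h          # hidden pre-activations, selected markers only
    return w @ np.tanh(h) + b_o    # output activation g = identity

# toy example
rng = np.random.default_rng(0)
P, H = 50, 20
x = rng.binomial(2, 0.5, size=P).astype(float)
s = rng.binomial(1, 0.1, size=P)               # roughly 90% of markers switched off
W, w = rng.normal(size=(H, P)), rng.normal(size=H)
b_h, b_o = rng.normal(size=H), 0.0
print(netsparse_forward(x, s, W, w, b_h, b_o))
```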
Some of the prior distributions are (see Note 4):
where \(\left| \mathcal {N}\left( 0,2^2\right) \right|\) denotes the half-normal distribution with scale parameter \(2^2\), which is the same as a normal distribution with standard deviation 2 restricted to positive values only, and \(\beta\) is the precision parameter in the (Gaussian) likelihood. The prior distribution over \(\mathbf {s}\) is the same as that of BayesB. The posterior distribution over \(\mathbf {u}\) in NetSparse (2) is:
As with BayesB, this expression cannot be computed analytically, but it can be approximated by sampling.
MCMC sampling
The integral in (3) can be computed analytically for GBLUP, but not for BayesB (5) and NetSparse (11). To obtain an approximation to \(\mathbb {E}\left[ y_*\right]\) for these models, we do MCMC sampling to obtain samples from the joint posterior distribution over \(\left( \mathbf {u},\eta \right)\). Given such samples \(\left( \left( \mathbf {u}_1,\eta _1\right) , \left( \mathbf {u}_2,\eta _2\right) , \ldots , \left( \mathbf {u}_T,\eta _T\right) \right)\), the expectation value of \(y_*\) can be estimated as \(\mathbb {E}\left[ y_*\right] \approx \frac{1}{T}\sum _{t=1}^{T} f\left( \mathbf {x}_*;\mathbf {u}_t\right)\).
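The following short sketch (ours; the toy posterior samples are synthetic) illustrates this Monte Carlo averaging and, for BayesB, the equivalent shortcut of averaging the selected allele effects \(\mathbf {a}\odot \mathbf {s}\) once and then predicting with a single dot product, which is the equivalence mentioned in the next paragraph:

```python
import numpy as np

def posterior_mean_prediction(x_new, samples, f):
    """E[y*] approximated by averaging f(x*; u_t) over T posterior samples."""
    return np.mean([f(x_new, u) for u in samples])

# BayesB-style samples: each u is a pair (allele effects a, selection vector s)
rng = np.random.default_rng(0)
P, T = 100, 500
x_new = rng.binomial(2, 0.5, size=P).astype(float)
samples = [(rng.normal(0, 0.1, P), rng.binomial(1, 0.05, P)) for _ in range(T)]

f_bayesb = lambda x, u: (u[0] * u[1]) @ x         # a'(x ⊙ s) = (a ⊙ s)'x
pred_avg = posterior_mean_prediction(x_new, samples, f_bayesb)

# Equivalent shortcut: average a ⊙ s across samples once, then predict.
a_bar = np.mean([a * s for a, s in samples], axis=0)
print(np.isclose(pred_avg, a_bar @ x_new))        # True
```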
For BayesB, we used the Gibbs sampler implemented in the BGLR R package [23]. Instead of averaging predictions, the sampler averages allele effects, which is equivalent (see (6)).
For NetSparse, we used the PyMC3 package [24] to sample from the NetSparse posterior distribution, \(p\left( \mathbf {u},\eta |\mathbf {X},\mathbf {y}\right)\), which is the integrand of (11). The conditional distributions cannot be sampled from directly, so a Gibbs sampler cannot be used; therefore, PyMC3 uses a composite sampler, which alternately applies MCMC samplers to the discrete variables (\(s^p\)) and to the continuous variables (the rest). To sample \(\mathbf {s}\), we used a Metropolis-Hastings algorithm in which we iterate over the individual components in a random order. For each \(s^p\), we evaluate \(P_1=p\left( s^p\mid \text {rest}\right)\) and \(P_2=p\left( 1-s^{p}\mid \text {rest}\right)\), and then set \(s^{p}\leftarrow 1-s^{p}\) with probability \(\min (1,P_2/P_1)\).
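A stripped-down version of this component-wise update could look as follows (our sketch; in the actual analyses the update runs inside PyMC3's composite sampler, and `log_post` stands for the unnormalized log posterior with the continuous parameters held fixed):

```python
import numpy as np

def update_selection(s, log_post, rng):
    """One sweep of the Metropolis update for the binary selection vector s.
    log_post(s) returns the unnormalized log posterior with the continuous
    parameters held fixed ("rest" in the text)."""
    s = s.copy()
    for p in rng.permutation(len(s)):
        logP1 = log_post(s)                  # current value s^p
        s_flip = s.copy()
        s_flip[p] = 1 - s_flip[p]
        logP2 = log_post(s_flip)             # flipped value 1 - s^p
        if rng.random() < min(1.0, np.exp(logP2 - logP1)):
            s = s_flip                       # accept the flip
    return s

# toy usage: a posterior that prefers roughly 5 active markers
rng = np.random.default_rng(0)
log_post = lambda s: -0.5 * (s.sum() - 5) ** 2
s = rng.binomial(1, 0.5, size=100)
for _ in range(20):
    s = update_selection(s, log_post, rng)
print(s.sum())
```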
For the continuous parameters, we have the conditional posterior distribution \(p\left( \theta \mid \mathbf {X},\mathbf {y},\mathbf {s}\right)\), where we write \(\theta\) for the combination of all continuous variables: \(\mathbf {W},\mathbf {w}, \mathbf {b}^h, b^o,\) and \(\eta\). This conditional posterior distribution is the integrand of (11). To sample these parameters, we used the Hamiltonian Monte Carlo sampler (HMC) [25, 26]. HMC uses the same Metropolis-Hastings procedure as for \(\mathbf {s}\), but with a more complicated proposal. To generate a proposal, initialize \(\mathbf {\theta }(0)\leftarrow \mathbf {\theta }\), and for each \(\theta _i\) draw a new variable \(r_i(0)\) from a normal distribution and compute the energy \(E_0=H\left( \mathbf {\theta }(0), \mathbf {r}(0)\right) =\left\| \mathbf {r}(0)\right\| ^2/2-\log p\left( \mathbf {\theta }(0)\mid \text {rest}\right)\). Given this initial state, generate a proposal state from \((\mathbf {\theta }(0), \mathbf {r}(0))\) by numerically evolving it for a time T according to the Hamiltonian dynamics:
This new state \(\left( \mathbf {\theta }(T),\mathbf {r}(T)\right)\) will have energy \(E_T=H\left( \mathbf {\theta }(T),\mathbf {r}(T)\right)\). This proposal is evaluated with a Metropolis-Hastings acceptance criterion: set \(\mathbf {\theta }\leftarrow \mathbf {\theta }(T)\) with probability \(\min \left( 1,\exp \left( E_0-E_T\right) \right)\), otherwise \(\mathbf {\theta }\leftarrow \mathbf {\theta }(0)\). The \(r_i\) are discarded. We note that only the gradient of the posterior distribution is required, but not the matrix of second derivatives.
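For illustration, the proposal is usually generated with a leapfrog integrator. The sketch below (ours; the step size, number of leapfrog steps and toy target are arbitrary choices, and NUTS adds adaptive path lengths on top of this) produces one HMC update from a log posterior and its gradient, using the Metropolis-Hastings acceptance rule stated above:

```python
import numpy as np

def hmc_step(theta, log_post, grad_log_post, eps=0.01, n_leapfrog=20, rng=None):
    """One Hamiltonian Monte Carlo update with a leapfrog integrator."""
    rng = rng or np.random.default_rng()
    r = rng.normal(size=theta.shape)                       # resample momenta
    E0 = 0.5 * r @ r - log_post(theta)                     # initial energy H(theta, r)
    th, r_new = theta.copy(), r.copy()
    r_new += 0.5 * eps * grad_log_post(th)                 # half step for momenta
    for _ in range(n_leapfrog):
        th += eps * r_new                                  # full step for positions
        r_new += eps * grad_log_post(th)                   # full step for momenta
    r_new -= 0.5 * eps * grad_log_post(th)                 # undo the extra half step
    ET = 0.5 * r_new @ r_new - log_post(th)                # proposed energy
    accept = rng.random() < min(1.0, np.exp(E0 - ET))      # Metropolis-Hastings test
    return (th, True) if accept else (theta, False)

# toy usage: standard normal target (only the gradient is needed, not second derivatives)
log_post = lambda t: -0.5 * t @ t
grad_log_post = lambda t: -t
theta = np.zeros(5)
for _ in range(100):
    theta, _ = hmc_step(theta, log_post, grad_log_post)
print(theta)
```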
We used the NUTS variant of HMC [27]. For high-dimensional models with continuous variables, using the gradient of the posterior distribution allows HMC to explore the parameter space faster than either Metropolis-Hastings or Gibbs [28] samplers [29], and therefore requires fewer sampler steps.
Besides evaluating the posterior distribution, simulating the Hamiltonian dynamics also requires the gradient of the posterior distribution. PyMC3 calculates this gradient using the automatic differentiation capabilities of Theano [30].
We drew four independent chains of 1000 samples each; for each chain, the first 500 samples were used to tune the sampler and were discarded, and the last 500 samples were used for predictions. We also ran a few longer chains, but this did not change the results.
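For readers who want to experiment, a compact PyMC3 sketch of a NetSparse-like model is shown below. This is our own reconstruction for illustration only: the prior scales, the symmetry-breaking means of the hidden biases, and the toy data dimensions are assumptions rather than the authors' exact specification. When `pm.sample` is called, PyMC3 automatically assigns a binary Metropolis-type step to the Bernoulli selection variables and NUTS to the continuous parameters, mirroring the composite sampler described above.

```python
import numpy as np
import pymc3 as pm   # assumes PyMC3 3.x

# Toy data: 200 individuals x 300 markers (far smaller than the real analyses).
rng = np.random.default_rng(1)
N, P, H = 200, 300, 20
X = rng.binomial(2, 0.3, size=(N, P)).astype("float64")
y = rng.normal(size=N)

with pm.Model() as netsparse:
    s = pm.Bernoulli("s", p=0.5, shape=P)                      # marker selection
    sigma_h = pm.HalfNormal("sigma_h", sigma=2.0)              # prior scale, hidden weights
    sigma_o = pm.HalfNormal("sigma_o", sigma=2.0)              # prior scale, output weights
    W = pm.Normal("W", mu=0.0, sigma=sigma_h, shape=(P, H))
    w = pm.Normal("w", mu=0.0, sigma=sigma_o, shape=H)
    b_h = pm.Normal("b_h", mu=np.linspace(-2, 2, H), sigma=1.0, shape=H)  # distinct means break label symmetry
    b_o = pm.Normal("b_o", mu=0.0, sigma=1.0)
    sigma_e = pm.HalfNormal("sigma_e", sigma=1.0)

    hidden = pm.math.tanh(pm.math.dot(s * X, W) + b_h)         # tanh hidden layer on selected markers
    mu = pm.math.dot(hidden, w) + b_o
    pm.Normal("y_obs", mu=mu, sigma=sigma_e, observed=y)

    # Four chains, 500 tuning draws discarded, 500 kept, mirroring the text.
    trace = pm.sample(draws=500, tune=500, chains=4, cores=1)
```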
Simulations
To compare the performance of these methods, we evaluated them on populations in which the traits have different phenotypic models (additive, dominance and epistatic).
Population structure
Our aim was to simulate a population with a family structure and linkage disequilibrium pattern that roughly resemble those of livestock populations, using QMSim [31]. The historical population was simulated by mating 250 males with 250 females for 1900 generations to reach mutation-drift equilibrium. To mimic breed formation, a bottleneck was introduced by gradually decreasing the population size to 75 males and 75 females during the next five generations. This population size was maintained for 95 generations, and then the population size was increased to 1050 (50 males and 1000 females) in the last historical generation. From the last historical generation, all males and females were randomly mated for 15 generations to create the current population. Litter size in the current population was 10, and at each generation all sires and dams were replaced to create non-overlapping generations. For all scenarios, the reference population consisted of 500 randomly sampled individuals from generation 14, and the validation population consisted of 2000 randomly sampled individuals from generation 15.
Genome
The genome consisted of 10 chromosomes, of 100 cM each. For each chromosome, 40 000 biallelic loci were simulated. Mutation rate in the historical generations was \(2.5\cdot 10^{-6}\), and there was no mutation in the last 15 generations. From all loci segregating in generation 14, m loci were selected to become QTL, which varied across scenarios, and 5000 loci were selected to become markers. Although this density is lower than a typical commercial livestock SNP chip (60K), we chose this lower density to decrease computational demand. The markers were selected based on their allele frequency; the allele frequency distribution of markers was approximately uniform. The QTL were randomly selected and the allele frequency distribution of QTL was approximately U-shaped.
QTL effects
Additive effects (a) of QTL were sampled from a normal distribution with mean 0 and variance 1. Dominance factors (\(\delta\)) were also sampled from a normal distribution, with varying mean and variance across scenarios. Dominance effects (d) were computed as \(\delta \left| a\right|\) [32, 33]. Similar to dominance effects, we assumed that the magnitude of epistatic effects were proportional to the additive effects of the interacting QTL. For all \(m(m-1)/2\) pairwise combinations of QTL, epistatic factors (\(\gamma\)) were sampled from a normal distribution with mean 0 and variance 1. The epistatic effects (\(\epsilon\)) between QTL k and l were computed as \(\gamma \sqrt{\left| a_k a_l\right| }\).
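The effect simulation described above translates directly into a few lines of NumPy. In this sketch (ours; function and argument names are illustrative), the dominance-factor mean is a parameter because it differs between scenarios:

```python
import numpy as np

def simulate_qtl_effects(m, delta_mean=0.6, delta_sd=0.3, rng=None):
    """Additive, dominance and pairwise epistatic QTL effects as described above."""
    rng = rng or np.random.default_rng()
    a = rng.normal(0.0, 1.0, size=m)                    # additive effects ~ N(0, 1)
    delta = rng.normal(delta_mean, delta_sd, size=m)    # dominance factors
    d = delta * np.abs(a)                               # dominance effects d = delta * |a|
    eps = np.zeros((m, m))                              # pairwise epistatic effects
    for k in range(m):
        for l in range(k + 1, m):
            gamma = rng.normal(0.0, 1.0)                # epistatic factor ~ N(0, 1)
            eps[k, l] = gamma * np.sqrt(np.abs(a[k] * a[l]))
    return a, d, eps

a, d, eps = simulate_qtl_effects(10, rng=np.random.default_rng(42))
print(a.round(2), d.round(2), eps[0, 1].round(2))
```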
Breeding values, dominance deviations, epistatic deviations, and phenotypes
Breeding values (\(\mathbf {A}\)) and dominance deviations (\(\mathbf {D}\)) were simulated with genotype coefficient matrices that followed the natural and orthogonal interactions (NOIA) parameterization, as in [20]. With NOIA, the coefficient matrices are constructed such that the genetic effects (\(\mathbf {A}\) and \(\mathbf {D}\)) are statistically orthogonal, even in the absence of HWE. However, the epistatic values were simulated with epistatic coefficient matrices that followed one of three biological models for epistasis (Fig 2). The resulting epistatic values are not orthogonal to \(\mathbf {A}\) and \(\mathbf {D}\), which means that \(\mathbf {A}\) and \(\mathbf {D}\) change when epistasis is simulated. Thus, we begin by explaining the simulation of epistatic deviations and subsequently discuss how \(\mathbf {A}\) and \(\mathbf {D}\) were computed.
The first step was to compute epistatic values for all nine possible combinations of genotypes at loci k and l as \(\mathbf {c}_{kl}=\mathbf {t}\epsilon _{kl}\), where \(\epsilon _{kl}\) is the epistatic effect between loci k and l, and \(\mathbf {t}\) is a vector containing 9 (\(3\times 3\)) epistatic coefficients, following one of three epistasis models (Fig. 2). The coefficients in \(\mathbf {t}\) were ordered from top-to-bottom and left-to-right (AABB, AaBB, aaBB, ..., aabb). Then, using the NOIA parameterization and the two-locus genotype frequencies, epistatic values were partitioned into nine statistically orthogonal effects following the procedure described in [20]:
This procedure was repeated for all \(m(m-1)/2\) pairwise interactions between QTL.
The epistatic deviation of individual i was computed as:
where \(h^k_{a,i}\) (\(h^l_{a,i}\)) is the additive genotype coefficient of individual i at locus k (l), and \(h^k_{d,i}\) (\(h^l_{d,i}\)) is the dominance genotype coefficient of individual i at locus k (l). Elements of the additive genotype coefficients, \(h^k_{a,i}\), were encoded as in (7), where \(p_{AA}\), \(p_{Aa}\), and \(p_{aa}\) are the genotype frequencies of marker k in the base generation (generation 14). Elements of the dominance genotype coefficients were encoded as in (8). The breeding value of individual i was computed as:
where \(\alpha ^k\) is the average effect of locus k, which was computed as:
where \(p^k\) is the allele frequency of locus k in generation 14. The dominance deviation of individual i was computed as:
where \({d^k}'\) was computed as:
Total genetic values were computed as \(\mathbf {TGV} = \mathbf {BV} + \mathbf {D} + \mathbf {E}\). Phenotypes were computed as \(\mathbf {y}= \mathbf {TGV} + \mathbf {e}\), where \(\mathbf {e}\) is a vector of random residuals, sampled from a normal distribution with mean zero and variance \(\sigma _e^2=\sigma _{TGV}^2\), such that the broad-sense heritability \(H^2\) is equal to \(50\%\).
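The phenotype construction is then a one-liner: the residual variance is set equal to the variance of the total genetic values, so that the broad-sense heritability is 0.5. The sketch below is ours and assumes a vector `tgv` of total genetic values has already been computed:

```python
import numpy as np

def add_phenotypes(tgv, rng=None):
    """y = TGV + e with Var(e) = Var(TGV), so that H^2 = 0.5."""
    rng = rng or np.random.default_rng()
    sigma_e = np.sqrt(np.var(tgv))
    return tgv + rng.normal(0.0, sigma_e, size=len(tgv))

tgv = np.random.default_rng(0).normal(size=2500)   # stand-in for BV + D + E
y = add_phenotypes(tgv)
print(round(np.var(tgv) / np.var(y), 2))           # roughly 0.5
```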
Scenarios
As a base scenario, a purely additive trait with 300 QTL was simulated (Base). We varied the number of QTL to be 1000 (\(S_{1000}\)), 100 (\(S_{100}\)), or 10 (\(S_{10}\)). Hereafter, we will call this characteristic of the trait “Sparsity”. Dominance was varied by sampling dominance factors \(\delta\) from \(\mathcal {N}\left( 0.6,0.3^2\right)\) in the \(D_{\text {medium}}\) scenario, or from \(\mathcal {N}(1.2,0.3^2)\) in the \(D_{\text {extreme}}\) scenario, which corresponds to extreme overdominance.
Following [1, 34], epistasis was varied by applying the additive \(\times\) additive model (\(E_{A}\)), complementary model (\(E_{C}\)), or interaction model (\(E_{I}\)). The relative variance components in the simulated scenarios are listed in Table 1. The location and additive effects of QTL in each scenario were not resampled for the dominance and epistasis scenarios, so they were the same as in the base scenario.
Comparison of methods
To evaluate the performance of the different methods, each one was trained on the 500 animals in the training population, and the accuracy was obtained by taking the Pearson correlation coefficient between predictions and the total genotypic values of the 2000 animals in the validation population.
Direct comparison of the average accuracies per scenario (Table 2) required many replicates, because the accuracies fluctuated considerably between replicates. Therefore, instead of comparing the average accuracies of the methods, we used the mean and standard error of the difference in accuracy between methods, \(\rho _{\text {NetSparse}}-\rho _{\text {Method}}\), which fluctuated much less (Table 3). In addition, we calculated the p-values corresponding to the one-sided paired t-test for the null hypotheses \({\mathcal {H}}_0:\mathbb {E}\left( \rho _{\text {NetSparse}}-\rho _{\text {Method}}\right) =0\) for each reference method. Significance of p-values with respect to the threshold of 0.05 was assessed after correction for multiple testing via the Benjamini-Hochberg procedure (Table 4).
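The evaluation can be sketched as follows (our code; names and the toy accuracies are illustrative). Accuracy is the Pearson correlation with the true total genotypic values, the one-sided p-value is derived from the paired t-test on per-replicate accuracy differences, and the Benjamini-Hochberg correction is applied across reference methods:

```python
import numpy as np
from scipy.stats import pearsonr, ttest_rel
from statsmodels.stats.multitest import multipletests

def accuracy(pred, tgv):
    """Pearson correlation between predictions and true total genotypic values."""
    return pearsonr(pred, tgv)[0]

def one_sided_paired_p(acc_netsparse, acc_reference):
    """p-value for H0: E[rho_NetSparse - rho_Method] = 0 vs. the one-sided alternative."""
    t, p_two_sided = ttest_rel(acc_netsparse, acc_reference)
    return p_two_sided / 2 if t > 0 else 1 - p_two_sided / 2

# toy example: accuracies over 10 replicates for NetSparse and three reference methods
rng = np.random.default_rng(3)
acc_net = 0.60 + rng.normal(0, 0.03, size=10)
acc_refs = {m: 0.60 - d + rng.normal(0, 0.03, size=10)
            for m, d in [("GBLUP", 0.05), ("BayesB", 0.02), ("GBLUP-AD", 0.04)]}

pvals = [one_sided_paired_p(acc_net, a) for a in acc_refs.values()]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(dict(zip(acc_refs, zip(p_adj.round(3), reject))))
```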
Results
First, we considered the effect of sparsity on the prediction of genotypic values in the additive scenarios for all methods (Fig. 3). In the sparse scenario with 10 QTL (\(S_{10}\)), the accuracy with NetSparse was about 0.28 higher than with GBLUP(-AD,-ADAA), and about 0.08 higher than with BayesB(-AD). In the scenario with 100 QTL (\(S_{100}\)), NetSparse had an increase in accuracy of 0.06 over the GBLUP(-AD,-ADAA) methods, of 0.02 over BayesB and 0.05 over BayesB-AD. In the “Base” scenario with 300 QTL, NetSparse was better than the methods that fit dominance, but not significantly better than the additive methods. In the 1000 QTL scenario, NetSparse was significantly better than BayesB and BayesB-AD, but not significantly better than the methods based on GBLUP.
Fig. 3 Sparsity: The accuracy of NetSparse versus other methods in scenarios with 10, 100, 300 and 1000 QTL. Each row corresponds to a different amount of sparsity and the columns correspond to different methods: GBLUP and BayesB are additive methods, GBLUP-AD and BayesB-AD are methods with additive and dominance features, and GBLUP-ADAA has additive, dominance and additive\(\times\)additive features. The line \(x=y\) is added in red for reference. A marker above the line means a replicate with higher accuracy for NetSparse than the method it is compared to, and a marker below the line means a replicate with lower accuracy for NetSparse than the other method.
Now, we consider the simplest possible phenotypic model after the additive one, the dominance model. In the medium dominance scenario (\(D_{\text {medium}}\)), all methods performed roughly the same (Fig. 4). Hence, methods that tried to fit dominance did not result in higher accuracies than methods that did not. In the extreme dominance scenario (\(D_{\text {extreme}}\)), GBLUP-AD, BayesB-AD and GBLUP-ADAA methods had better performance than the other methods, which matched our prior expectation.
Fig. 4 Dominance: Accuracy of NetSparse versus other methods for the base scenario, and the two (Medium and Extreme) dominance scenarios. The line \(x=y\) is added in red for reference. A marker above the line means a replicate with higher accuracy for NetSparse than the method it is compared to, a marker below the line means a lower accuracy of NetSparse than the other method.
The epistatic scenarios (Fig. 5) contain components that can be fitted only by NetSparse and GBLUP-ADAA; thus, we expected that GBLUP-ADAA would fit best in the additive \(\times\) additive scenario and that NetSparse would fit best in the other two scenarios. In the additive \(\times\) additive scenario (\(E_A\)), NetSparse had a significantly higher accuracy than the other methods except BayesB. Surprisingly, GBLUP-ADAA did not fit this scenario better than the other methods. In the complementary (\(E_C\)) scenario, NetSparse had 0.6 to 1.5 percentage points higher accuracy on average than the other methods, but these results were not consistent across replicates. The accuracy of NetSparse in the interaction scenario (\(E_I\)) was on average three or more standard errors above that of the other methods.
Fig. 5 Epistasis: Accuracy of NetSparse versus other methods for the three epistatic scenarios: Additive \(\times\) Additive, Complementary and Interaction. The line \(x=y\) is added in red for reference. A marker above the line means a replicate with higher accuracy for NetSparse than the method it is compared to, a marker below the line means a lower accuracy of NetSparse than the other method.
Discussion
In this study, we compared methods that differed in flexibility. For example, the GBLUP-AD method is more flexible than the GBLUP method, because it allows for dominance effects to be fitted. In theory, the more flexible method should be able to give the same predictions as simpler methods by setting the additional hyperparameters to zero. In reality, however, these additional hyperparameters also have to be estimated from the data because the genetic architecture of the trait is unknown. In this study, we used the default prior distributions from the BGLR package and estimated hyperparameters from the training dataset. We chose not to fine-tune the prior distributions on test performance, because in reality this is not possible. As a result, when the actual genetic architecture of a trait is simple (e.g. additive and not sparse), a more flexible method will perform worse than a simpler method. Our results indeed showed that sometimes more flexible methods performed worse than simpler methods. For example, if we consider the scenario with complementary epistatic effects and consider the GBLUP and BayesB methods, BayesB with hyperparameter \(\pi\) set to zero is equivalent to GBLUP, but when fitting the value of \(\pi\) in BayesB to the data, a non-zero value of \(\pi\) is estimated, which in this scenario gives worse test performance than \(\pi =0\). In [5], it also was seen that, in certain cases, GBLUP can have higher accuracy than BayesC, which is a sparse method similar to BayesB.
The particular observation that NetSparse has higher accuracy than BayesB for the \(S_{10}\) and \(S_{100}\) scenarios was unexpected because BayesB is a sparse additive method, while NetSparse is a sparse non-additive method. Since the underlying data generating process is sparse additive, the expectation is that BayesB matches the simulated data better than NetSparse. The difference in method between NetSparse and BayesB is that NetSparse includes non-additivity and that NetSparse and BayesB use different priors for the variances. Therefore, we also made a comparison with LinSparse (Fig. 6), which is NetSparse without non-additive effects. The accuracy obtained with LinSparse for these scenarios was higher than for BayesB, which strongly suggests that the difference in accuracy between them originated from the different prior distributions for the variances.
Fig. 6 Sparsity: The accuracy of NetSparse versus BayesB and LinSparse on sparse scenarios with 10 or 100 QTL. The line \(x=y\) is added in red for reference. A marker above the line means a replicate with higher accuracy for NetSparse than the method it is compared to, a marker below the line means a lower accuracy of NetSparse than the other model.
In BayesB, the prior distributions for the variances are scaled inverse chi-squared distributions, which are conjugate priors for the likelihood function, which makes Gibbs sampling possible. The NUTS sampler in PyMC3 does not require conjugate priors and, following the suggestions of [35], we chose half-normal distributions for the standard deviations. The main difference between the scaled inverse chi-squared and half-normal distributions is that the half-normal distribution decays faster than exponentially for large values, which gives much lighter tails than the scaled inverse chi-squared distribution, which decays only polynomially.
The epistatic method GBLUP-ADAA did not seem to give better fits than methods that did not fit epistasis. We think this is due to a lack of data for estimating epistatic effects accurately. Inaccurate estimates of these effects will not improve predictive ability.
Given that neural networks are able to fit more than additive relations, we expected that NetSparse would also be able to fit dominance and epistatic effects. This expectation was confirmed in scenarios with strong dominance effects and in scenarios with epistatic effects, because the accuracy of NetSparse was higher than that of the additive models. However, NetSparse may be at a disadvantage for traits that have negligible non-additive effects. Nevertheless, these results indicate the potential of sparse Bayesian neural networks for improving phenotype prediction.
The main limitation of NetSparse is running time. On our hardware, training NetSparse took around 4 h per scenario with 500 animals and 5000 SNPs. The running time of NetSparse scales approximately linearly with both the number of animals and the number of SNPs, and can therefore become prohibitive when applied to larger datasets. The other methods had running times of less than 2 min on these datasets, making them much more feasible for use on larger datasets. Considering the promising results of NetSparse, further studies could try to increase the computational performance so that larger datasets can be analyzed. As sampling of independent MCMC chains can be done in parallel on different machines, additional computational resources can reduce the wall-clock time of sampling by a factor equal to the number of chains used. The MCMC sampling could also be replaced by variational inference, where the real posterior is approximated by a simpler variational posterior from which samples can be drawn directly, which would help NetSparse scale to larger datasets. The discrete variable \(\mathbf {s}\) could be handled, for instance, using the Concrete distribution [36].
Conclusions
This study shows that in nearly all scenarios the accuracy of NetSparse is not significantly lower than that of any of the other methods investigated. In particular, the NetSparse method performed as well as or better than GBLUP and BayesB for all scenarios evaluated. On data generated from a sparse QTL simulation model, accuracies obtained with NetSparse were significantly higher than accuracies obtained with all the other methods investigated. In the medium dominance scenarios, accuracy obtained with NetSparse was 0.0 to 0.8 percentage points higher than that with the other methods investigated. In the extreme dominance scenario, accuracy obtained with NetSparse was 0.6 percentage points higher than that with other methods that did not explicitly model dominance. Compared with methods that did explicitly model dominance, the accuracy of NetSparse was 5.8 to 6.3 percentage points lower. In the epistatic scenarios, accuracy obtained with NetSparse was 0.6 to 3.9 percentage points higher than that with the other methods. However, running time can be limiting, as NetSparse inference took about 200 times as long as the other methods.
Availability of data and materials
Not applicable.
Notes
- 1.
Strictly speaking, in GBLUP the allele effects of SNP-BLUP are integrated out, so the variance components in GBLUP should be called “parameters”. But for consistency with the other methods, we refer to the variance components as hyperparameters.
- 2.
Because \(\mathbf {s}\) is discrete it is summed instead of integrated over.
- 3.
Because activation functions are applied in two layers, one in the hidden layer and one in the output layer, this architecture is also called a two-layer neural network.
- 4.
A complete list of prior distributions is given in appendix A.
References
- 1.
Hill WG, Goddard ME, Visscher PM. Data and theory point to mainly additive genetic variance for complex traits. PLoS Genet. 2008;4:e1000008.
- 2.
Meuwissen THE, Hayes BJ, Goddard ME. Prediction of total genetic value using genome-wide dense marker maps. Genetics. 2001;157:1819–29.
- 3.
Habier D, Fernando RL, Kizilkaya K, Garrick DJ. Extension of the Bayesian alphabet for genomic selection. BMC Bioinformatics. 2011;12:186.
- 4.
Wolc A, Arango J, Settar P, Fulton JE, O’Sullivan NP, Dekkers JCM, et al. Mixture models detect large effect QTL better than GBLUP and result in more accurate and persistent predictions. J Anim Sci Biotechnol. 2016;7:7.
- 5.
- 6.
Daetwyler HD, Pong-Wong R, Villanueva B, Woolliams JA. The impact of genetic architecture on genome-wide evaluation methods. Genetics. 2010;185:1021–31.
- 7.
Cockerham CC. An extension of the concept of partitioning hereditary variance for analysis of covariances among relatives when epistasis is present. Genetics. 1954;39:859–82.
- 8.
Gianola D, Fernando RL, Stella A. Genomic-assisted prediction of genetic value with semiparametric procedures. Genetics. 2006;173:1761–76.
- 9.
Gianola D, Okut H, Weigel KA, Rosa GJ. Predicting complex quantitative traits with Bayesian neural networks: a case study with Jersey cows and wheat. BMC Genet. 2011;12:87.
- 10.
Okut H, Wu XL, Rosa GJ, Bauck S, Woodward BW, Schnabel RD, et al. Predicting expected progeny difference for marbling score in Angus cattle using artificial neural networks and Bayesian regression models. Genet Sel Evol. 2013;45:34.
- 11.
Waldmann P. Approximate Bayesian neural networks in genomic prediction. Genet Sel Evol. 2018;50:70.
- 12.
Okut H, Gianola D, Rosa GJM, Weigel KA. Prediction of body mass index in mice using dense molecular markers and a regularized neural network. Genet Res. 2011;93:189–201.
- 13.
Wang Y, Mi X, Rosa G, Chen Z, Lin P, Wang S, et al. Technical note: an R package for fitting sparse neural networks with application in animal breeding. J Anim Sci. 2018;96:2016–26.
- 14.
Scardapane S, Comminiello D, Hussain A, Uncini A. Group sparse regularization for deep neural networks. Neurocomputing. 2017;241:81–9.
- 15.
Louizos C, Welling M, Kingma DP. Learning sparse neural networks through \(L_0\) regularization; 2018. arXiv:1712.01312.
- 16.
Patterson HD, Thompson R. Recovery of inter-block information when block sizes are unequal. Biometrika. 1971;58:545–54.
- 17.
VanRaden PM. Efficient methods to compute genomic predictions. J Dairy Sci. 2008;91:4414–23.
- 18.
Bishop CM. Pattern recognition and machine learning (Information Science and Statistics). Berlin: Springer-Verlag; 2006.
- 19.
Lee SH, van der Werf JHJ. MTG2: an efficient algorithm for multivariate linear mixed model analysis based on genomic information. Bioinformatics. 2016;32:1420–2.
- 20.
Vitezica ZG, Legarra A, Toro MA, Varona L. Orthogonal estimates of variances for additive, dominance, and epistatic effects in populations. Genetics. 2017;206:1297–307.
- 21.
Wittenburg D, Melzer N, Reinsch N. Including non-additive genetic effects in Bayesian methods for the prediction of genetic values based on genome-wide markers. BMC Genet. 2011;12:74.
- 22.
Technow F, Riedelsheimer C, Schrag TA, Melchinger AE. Genomic prediction of hybrid performance in maize with models incorporating dominance and population specific marker effects. Theor Appl Genet. 2012;125:1181–94.
- 23.
Pérez P, de los Campos G. Genome-wide regression and prediction with the BGLR statistical package. Genetics. 2014;198:483–95.
- 24.
Salvatier J, Wiecki TV, Fonnesbeck C. Probabilistic programming in Python using PyMC3. PeerJ Comput Sci. 2016;2:e55.
- 25.
Duane S, Kennedy AD, Pendleton BJ, Roweth D. Hybrid Monte Carlo. Phys Lett B. 1987;195:216–22.
- 26.
Neal RM. MCMC using Hamiltonian dynamics. In: Brooks S, Gelman A, Jones GL, Meng XL, editors. Handbook of Markov Chain Monte Carlo, vol. 54. Boca Raton: Chapman & Hall/CRC; 2010. p. 113–62.
- 27.
Hoffman MD, Gelman A. The No-U-turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo. J Mach Learn Res. 2014;15:1593–623.
- 28.
Geman S, Geman D. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE T Pattern Anal. 1984;PAMI6:721–41.
- 29.
Beskos A, Pillai N, Roberts G, Sanz-Serna JM, Stuart A. Optimal tuning of the hybrid Monte Carlo algorithm. Bernoulli. 2013;19:1501–34.
- 30.
Theano Development Team. Theano: a Python framework for fast computation of mathematical expressions. 2016; arXiv:1605.02688.
- 31.
Sargolzaei M, Schenkel FS. QMSim: a large-scale genome simulator for livestock. Bioinformatics. 2009;25:680–1.
- 32.
Wellmann R, Bennewitz J. The contribution of dominance to the understanding of quantitative genetic variation. Genet Res. 2011;93:139–54.
- 33.
Wellmann R, Bennewitz J. Bayesian models with dominance effects for genomic evaluation of quantitative traits. Genet Res. 2012;94:21–37.
- 34.
Fuerst C, James JW, Sölkner J, Essl A. Impact of dominance and epistasis on the genetic make-up of simulated populations under selection: a model development. J Anim Breed Genet. 1997;114:163–75.
- 35.
Stan Development Team. Stan modeling language user’s guide and reference manual. Version 2.18.0; 2018.
- 36.
Maddison CJ, Mnih A, Teh YW. The concrete distribution: a continuous relaxation of discrete random variables; 2016. arXiv:1611.00712.
Acknowledgements
Not applicable.
Funding
This research is supported by the Netherlands Organisation for Scientific Research (NWO) and the Breed4Food consortium partners Cobb Europe, CRV, Hendrix Genetics, and Topigs Norsvin.
Appendix A: Prior distributions
The prior distributions for NetSparse are:
The prior distributions for LinSparse (\(f(\mathbf {x};\mathbf {u})=\mathbf {w}^{\intercal }\left( \mathbf {x}\odot \mathbf {s}\right) + b\)) are:
The prior distributions of \(b^h_i\) each have a different mean because equal priors would make the model invariant under a relabeling of the hidden units and result in a degenerate geometry of the sampling space where each \(\mathbf {u}\) is equivalent to at least \(H!-1\) other configurations. For 20 hidden units, there are over \(10^{18}\) configurations, which makes it completely infeasible to explore the entire parameter space.
We could analytically marginalize out \(\sigma _o\) and \(\sigma _h\), resulting in a probability density over \(\mathbf {w}\) in terms of the modified Bessel function of the second kind:
and similarly for \(\mathbf {W}\). For simplicity, we kept the model parameterization in terms of \(\sigma _h\) and \(\sigma _o\).
---
abstract: |
Analysis of the nonlinear Schr$\ddot{\hbox{o}}$dinger vortex reconnection is given in terms of coordinate-time power series. The lowest order terms in these series correspond to a solution of the linear Schr$\ddot{\hbox{o}}$dinger equation and provide several interesting properties of the reconnection process, in particular the non-singular character of reconnections, the anti-parallel configuration of vortex filaments and a square-root law of approach just before/after reconnections. The complete infinite power series represents a fully nonlinear analytic solution in a finite volume which includes the reconnection point, and is valid for finite time provided the initial condition is an analytic function. These series solutions are free from the periodicity artifacts and discretization error of the direct computational approaches and they are easy to analyze using a computer algebra program.
PACS number: 67.40.Vs
address: 'Mathematics Institute, University of Warwick, Coventry, CV4 7AL, UK'
author:
- Sergey Nazarenko and Robert West
title: 'Analytical solution for nonlinear Schr$\ddot{\hbox{o}}$dinger vortex reconnection'
---
INTRODUCTION
============
Vortex solutions of the nonlinear Schr$\ddot {\hbox{o}}$dinger (NLS) equation are of interest in nonlinear optics and in the theory of Bose-Einstein condensates (BEC). The NLS equation is also often used to describe turbulence in superfluid helium. [@gp] NLS is a nice model in this case because vortex quantization appears naturally in it and because its large-scale limit is the compressible Euler equation describing classical inviscid fluids. At short scales, the NLS equation allows for a “quantum uncertainty principle” which allows vortex reconnections without the need for a finite viscosity or other dissipation. Numerically, NLS vortex reconnection was studied by Koplik and Levine[@Koplik] and, more recently, by Leadbeater [*et al.*]{}[@Leadbeater] and, for a non-local version of the NLS equation, by Berloff [*et al.*]{}[@Berloff] In applications to superfluid turbulence, the NLS equation was directly computed by Nore [*et al.*]{}[@Nore] Such cryogenic turbulence consists of repeatedly reconnecting vortex tangles, with each reconnection event resulting in the generation of Kelvin waves on the vortex cores [@Svistunov] and a sound emission. [@Leadbeater] These two small-scale processes are very hard to compute correctly in direct simulations of 3D NLS turbulence due to numerical resolution problems. A popular way to avoid this problem is to compute vortex tangles by a Biot-Savart method (derived from the Euler equation) and use a simple rule to reconnect vortex filaments that are closer than some critical (“quantum”) distance. This approach was pioneered by Schwarz [@Schwarz] and has been further developed by Samuels [*et al.*]{}[@Barenghi] In this case, it is important to prescribe realistic vortex reconnection rules; therefore, elementary vortex reconnection events have to be carefully studied and parameterized. Numerically, such a study was performed by Leadbeater [*et al.*]{}; [@Leadbeater] the present paper is devoted to the analytical study of these NLS vortex reconnection events.
The analytical approach of this paper is based on expanding a solution in powers of small distance from the reconnection point, and small time measured from the reconnection moment. The idea is to exploit the fact that when vortex filaments are near reconnection, the nonlinearity in the NLS equation is small. This smallness of the nonlinearity just stems from the definition of vortices in NLS (curves where $\Psi=0$) and the continuity of $\Psi$. Their core size is of the order of the distance over which $\Psi\rightarrow 1$ (where $\Psi=1$ represents the background condensate). Therefore, for vortices near reconnection, separated by a distance much smaller than their core size, $\Psi$ is small provided it is continuous. Thus, to the first approximation the solution near the reconnection point can be described by a linear solution which, already at this level, contains some very important information about the reconnection process: (1) that the reconnection proceeds smoothly without any singularity formation, (2) that in the immediate vicinity of the reconnection the vortices are strictly anti-parallel and (3) just before the reconnection event the distance between the vortices decreases as $|t|^{1/2}$, where $t$ is the time measured from the reconnection moment. Note that result (1) could surprise those who draw their intuition from vortex collapsing events in the Euler equation (which are believed to be singular). On the other hand, results (2) and (3) are remarkably similar to the numerical and theoretical results found for the Euler equation.
In section II of this paper we examine the local analysis of the reconnection process by deriving a linear solution, and in section III we consider its properties. The linear solution describes many, but not all, of the important properties of vortex reconnection. In particular, it cannot describe solutions outside the vortex cores and, therefore, it cannot describe the far-field sound radiation produced by the reconnection. On the other hand, one can substitute the linear solution back into the NLS equation and find the first nonlinear correction to this solution. Recursively repeating this procedure, one can recover the fully nonlinear solution in terms of infinite coordinate and time series. This derivation is discussed in detail in section IV. The series produced are a general solution to a Cauchy initial value problem. Thus, by the Cauchy-Kowalevski theorem, [@Cauchy] these series define an analytic function (with a finite convergence radius) provided the initial conditions are analytic. The generation of such a suitable initial condition is addressed in section V. Our series representation of the solution to the NLS equation is exact, and therefore will include such properties as sound emission. However, due to the finite radius of convergence of the analytic solution, one is unable to observe the far-field sound emission directly. In this paper, we use [*Mathematica*]{} to compute some examples of the fully nonlinear solutions for the vortex reconnection; the results are presented in section VI.
Let us summarise the advantages and disadvantages that our analytical solution has with respect to those being computed via direct numerical simulations (DNS). Firstly, our analytical solutions are obtained as a general formula, simultaneously applicable for a broad class of initial vortex positions and orientations. Secondly, our analytical solutions are not affected by any periodicity artifacts (which are typical in DNS using spectral methods) or by discretization errors. On the other hand, our analytical solutions are only available for a finite distance from the vortex lines (of the order of the vortex core size) because their defining power series have a finite radius of convergence.
LOCAL ANALYSIS OF THE RECONNECTION
==================================
Let us start with the defocusing NLS equation written in the non-dimensional form, $$\label{NLS}
i\Psi_t+\Delta \Psi +(1-|\Psi|^2) \Psi =0.$$ Suppose that in vicinity of the point ${\bf r}=(x,y,z)=(0,0,0)$ at $t=t_0$ we have $\Psi = \Psi_0$ such that $Re \Psi_0 = z$, and $Im \Psi_0 = az+bx^2-cy^2$, where $a,b$ and $c$ are some positive constants. For such initial conditions the geometrical location of the vortex filaments, $\Psi=0$, is given by two intersecting straight lines, $z=0$ and $y=\pm\sqrt{b/c}\, x$. In the small vicinity of the point ${\bf r} =0$, deep inside the vortex core (where $\Psi_0 \approx 0$), we can ignore the nonlinear term found in equation (\[NLS\]). Further, by a simple transformation $\Psi = \Phi e^{it}$ we can eliminate the third term $\Psi$ and obtain $i\Phi_t+\Delta \Phi = 0$. (This just corresponds to multiplying our solution by a phase, it does not alter its properties, but does make the following analysis simpler). It is easy to see that the initial condition has not changed under this transformation, $\Psi_{0} = \Phi_{0}$. Advancing our system a small distance in time $t-t_0$, we find $Re\,\Phi = Re\,\Psi_0 - (t-t_0) \,\Delta Im\,\Psi_0$ and $Im\,\Phi = Im\,\Psi_0 + (t-t_0) \,\Delta Re\,\Psi_0$, or $$\begin{aligned}
\label{linear}
Re\,\Phi &= z-2(b-c)\, (t-t_0),\\
Im\,\Phi &= az+bx^2-cy^2. \nonumber\end{aligned}$$ For both $t-t_0<0$ and $t-t_0>0$ the set of vortex lines, $\Phi=0$, is given by two hyperbolas. A bifurcation happens at $t=t_0$ where these hyperbolas degenerate into the two intersecting lines (see Fig. \[contour\]). This bifurcation corresponds to the reconnection of the vortex filaments. Thus, we have constructed a local (in space and time) NLS solution corresponding to vortex reconnection. Obviously, this solution corresponds to a smooth function $\Phi$ at the reconnection point. It should be stressed that this is not an assumption, but just the way in which we have chosen to construct our solution. However, we do believe that this observed smoothness is a common feature of NLS vortex reconnection events. If this is true, then all such reconnecting vortices could locally be described by the presented solution, as the intersection of a hyperbola with a moving plane provides a generic local bifurcation describing a reconnection in the case of smooth fields.
![Linear solution Eq. (\[linear\]) of the nonlinear Schr$\ddot{\hbox{o}}$dinger equation for $a=1$, $b=3$, $c=2$ and $t_0=0$. Sub-figures (a), (b) and (c) show the intersection of the real (plane) and imaginary (hyperbolic paraboloid) parts of Eq. (\[linear\]) at three successive times. Sub-figures (d), (e) and (f) show the corresponding lines of intersection, $\Phi=0$; reconnection occurs at $t=t_0$.[]{data-label="contour"}](fig1c.eps){width="4.5in"}
PROPERTIES OF THE VORTEX RECONNECTION
=====================================
The local linear solution we have constructed (\[linear\]) reveals several important properties of the reconnection of NLS vortices.

1. Whatever the initial orientation of the vortex filaments, the reconnecting parts of these filaments align so that they approach each other in an anti-parallel configuration. Indeed, according to (\[linear\]), the fluid velocity field is $\vec{v} = \nabla \arctan ( [Im\,\Phi]/[Re\,\Phi] ) = \nabla \arctan ( [az + bx^2 -cy^2]/[z-2(b-c)(t-t_0)] )$. At the mid-point between the two vortices one finds a velocity field consistent with an anti-parallel pair, $\vec{v} = 1/[2a^2(c-b)(t-t_0)] \,\vec{e}_z \neq 0$. (For a parallel configuration one would find $\vec{v} = 0$.) Remarkably similar anti-parallel configurations have been observed in the numerical Biot-Savart simulations of thin vortex filaments in inviscid incompressible fluids.

2. The reconnecting parts of the vortex filaments approach each other as $\sqrt{t-t_0}$. Indeed, setting $Re\,\Phi = Im\,\Phi = 0$ and $y=0$ in (\[linear\]) one obtains $x = \pm \sqrt{2a[(c/b)-1](t-t_0)}$ (the intermediate step is spelled out after this list). Exactly the same scaling behaviour, for approaching thin filaments in incompressible fluids, has been given by the theory of Siggia and Pumir and has been observed numerically in Biot-Savart computations.

3. The nonlinearity plays a minor role in the late stages of vortex reconnection in NLS. This is a simple manifestation of the fact that in the close spatio-temporal vicinity of the reconnection point $\Psi \approx 0$, so that the dynamics are almost linear. This last property can also be reformulated as follows: no singularity is observed in the process of reconnection according to the solution (\[linear\]); both the real and imaginary parts of $\Psi$ behave continuously in space and time. This is in stark contrast to the singularity formation found in vortex collapsing events described by the Euler equation. Indeed, distinct from incompressible fluids, no viscous dissipation is needed for the NLS vortices to reconnect. Here, dispersion does the same job of breaking the topological constraints (related to Kelvin’s circulation theorem) as viscosity does in a normal fluid.
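As a brief check of property 2, the intermediate step follows directly from Eq. (\[linear\]): $$Re\,\Phi = 0 \;\Rightarrow\; z = 2(b-c)(t-t_0), \qquad Im\,\Phi\,\big|_{y=0} = 0 \;\Rightarrow\; b\,x^2 = -a z = 2a(c-b)(t-t_0),$$ so that $x = \pm \sqrt{2a[(c/b)-1]\,(t-t_0)}$, which is the $\sqrt{t-t_0}$ scaling quoted above.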
NONLINEAR SOLUTION
==================
We will now move on to consider the full NLS equation. We will use a recursion relationship to compute the solution, assuming that $x, y, z \sim \epsilon$ and $t \sim \epsilon^{3}$ (for simplicity, we take $t_0=0$). The solution we obtain will therefore be of the form $\Psi = \sum_{n}\Psi^{(n)}$, where $\Psi^{(n)} \sim \epsilon^{n}$. The above $\epsilon$ scaling of $x$, $y$, $z$ and $t$ has been chosen to generate a recursion relationship when substituted in the NLS equation (\[NLS\]). Of course we could have chosen a different $\epsilon$ dependence; however, as the final series representation of our solution contains an infinite number of terms, this would just correspond to the same solution but with a suitable re-ordering.
Consider the NLS equation (\[NLS\]). Firstly, we note that $\partial_t \sim \epsilon^{-3}$ and $\triangle \sim \epsilon^{-2}$ and therefore $i\Psi_{t}^{(m)} \sim\epsilon^{m-3}$, $\triangle\Psi^{(n)} \sim\epsilon^{n-2}$, $\left[ |\Psi|^{2}\Psi \right]^{(p)} \sim\epsilon^{p}$ and $\Psi^{(q)} \sim\epsilon^{q}$, where $
[|\Psi|^{2}\Psi]^{(p)}
= \sum_{i,j=1}^{p}\Psi^{*(i)} \Psi^{(j)} \Psi^{(p-i-j)}
$. Matching the terms, by setting $m=n+1$ and $p=q=n-2$, and integrating we find $$\label{t-rec}
\Psi^{(n+1)}= \Psi_{0}^{(n+1)}
+ i\int_{0}^{t} \left[\triangle\Psi^{(n)}
+\Psi^{(n-2)}-[|\Psi|^{2}\Psi]^{(n-2)}\right] dt,$$ where $\Psi_{0}^{(n)}$ are arbitrary $n^{th}$ order functions of the coordinates which appear as constants of integration with respect to time. The full nonlinear solution of the Cauchy initial value problem can now be obtained by matching $\Psi_{0}^{(n)}$ to the $n^{th}$ order components of the initial condition at $t=0$ obtained via a Taylor expansion in the coordinates. Let us assume that the initial condition is an analytic function so that it can be represented by power series in coordinates with a non-zero volume of convergence. Then, by the Cauchy-Kowalevski theorem, the function $\Psi$ will remain analytic for non-zero time. In other words, the solution can also be represented as a power series with a non-zero domain of convergence in space and time. Remarkably, the recursion relation Eq. (\[t-rec\]) is precisely the means by which one can write down the terms of the power-series representation of the fully nonlinear solution to the NLS equation, with an arbitrary analytical initial condition $\Psi_{0}$.
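To make the recursion concrete, the following is a minimal Python/SymPy sketch of Eq. (\[t-rec\]) (our own illustrative code, not the [*Mathematica*]{} implementation used for the results in section VI; the function and variable names are ours). Each order $\Psi^{(n)}$ is a polynomial in the coordinates and time, and each pass of the loop integrates the lower-order terms once in time:

```python
import sympy as sp

x, y, z, t, tau = sp.symbols('x y z t tau', real=True)

def laplacian(f):
    return sp.diff(f, x, 2) + sp.diff(f, y, 2) + sp.diff(f, z, 2)

def cubic_term(psi, p):
    """[|Psi|^2 Psi]^(p): sum of conj(Psi^(i)) * Psi^(j) * Psi^(k) over i + j + k = p."""
    total = sp.Integer(0)
    for i in range(1, p + 1):
        for j in range(1, p - i + 1):
            k = p - i - j
            if k >= 1:
                total += sp.conjugate(psi[i]) * psi[j] * psi[k]
    return total

def nls_series(psi0, max_order):
    """Build Psi^(1)..Psi^(max_order) from the Taylor orders psi0[n] of the
    initial condition, using the recursion Eq. (t-rec)."""
    psi = {n: psi0.get(n, sp.Integer(0)) for n in range(1, max_order + 1)}
    for n in range(1, max_order):
        rhs = laplacian(psi[n]) + psi.get(n - 2, 0) - cubic_term(psi, n - 2)
        correction = sp.I * sp.integrate(rhs.subs(t, tau), (tau, 0, t))
        psi[n + 1] = sp.expand(psi[n + 1] + correction)
    return psi

# Example: the local data of section II, Psi_0 = z + i(a z + b x^2 - c y^2),
# split into its first- and second-order parts, with a = 1, b = 3, c = 2.
a, b, c = 1, 3, 2
initial = {1: z + sp.I * a * z, 2: sp.I * (b * x**2 - c * y**2)}
series = nls_series(initial, max_order=6)
```

For these data the third-order term comes out as $-2(b-c)\,t$, reproducing the time-dependent shift of Eq. (\[linear\]) with $t_0=0$, while the higher orders supply the nonlinear corrections.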
INITIAL CONDITION
=================
Our next step is to construct a suitable initial condition for our study of reconnecting vortices. This initial condition will have to be formulated in terms of a power series. We start by formulating the famous line vortex solution to the steady state NLS equation [@gp] in terms of a power series. Substituting $\Psi = A(r)e^{i\theta}$ into the steady-state form of Eq. (\[NLS\]), we find $ \triangle A - A|\nabla\theta|^{2} + A - A^{3}=0\nonumber$, where we have used the fact that $\nabla A \cdot \nabla\theta = 0$ and $\triangle\theta = 0$. We can simplify this equation, since $A=A(r)$ and therefore, $\triangle A = \frac{1}{r}\partial_{r}(r\partial_{r}A)$. However, we also note that $|\nabla\theta|^{2} = 1/r^{2}$, since $\theta$ is the polar angle in the $(x,y)$ plane. Therefore, we have $\frac{1}{r}\partial_{r}(r\partial_r A)-\frac{A}{r^{2}}-A^{3}+A=0$. We will solve this equation using another recursive method. We would like to get a solution of the form $A= a_{0} + a_{1}r + a_{2}r^{2} + a_{3}r^{3} + \cdots = \sum_{n} A^{(n)}$. (However, we can set $a_{0}$ to zero on physical grounds, since we require $\Psi=0$ at $r=0$). As before $\frac{1}{r}\partial_{r}(rA_{r}^{(m)}) = m^{2}a_{m}r^{m-2}
\sim \epsilon^{m-2}$, $\frac{A^{(n)}}{r^{2}} = a_{n}r^{n-2}\sim \epsilon^{n-2}$, $[A^{3}]^{(p)} \sim \epsilon^{p}$ and $A^{(q)} = a_{q}r^{q}\sim \epsilon^{q}$, where $
[A^{3}]^{(p)}=\sum_{i,j=1}^{p} A^{(i)}A^{(j)}A^{(p-i-j)}.
$ Again, by matching powers of $r$ we can derive a recursion relationship for $a_{n}$. Setting $m=n$ and $p=q=n-2$ we obtain $$\label{hmm2}
a_{n} = (f_{n-2}-a_{n-2})/(n^{2}-1)
\nonumber,$$ where $f_{n-2} = [A^{3}]^{(n-2)}$.
We should note that $a_{2n}=0$ for all $n$. Therefore, taking a power of r out of our expansion for $A(r)$ we find, $$\label{squidge}
\Psi = A(r)e^{i\theta} = rg(r^{2})e^{i\theta}\nonumber,$$ where $g(r^{2}) = a_{1} + a_{3}r^{2} + a_{5}r^{4} + \cdots$. Further, $re^{i\theta} = x+iy$, so our prototype solution, for a vortex pointing along the $z$-axis, is $\Psi = (x+iy)g(x^{2}+y^{2})$. We can manipulate this prototype solution to get an initial condition for our vortex reconnection problem. Our initial condition $\Psi_0$ will be made up of two vortices, $\Psi_1$ and $\Psi_2$, a distance $2d$ and angle $2\alpha$ apart. Following the example of others, \[Koplik [*et al.*]{}, Ref. \] and \[Leadbeater [*et al.*]{}, Ref. \], we take the initial condition to be the product of $\Psi_1$ and $\Psi_2$, that is $\Psi_0=\Psi_1 \Psi_2\nonumber$. One could argue that such an initial condition is rather special, as two vortices found in close proximity would typically have already distorted one another in their initial approach. Nevertheless, such a configuration provides us with a valuable insight into the dynamics of NLS vortex reconnections.
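As an aside, the coefficient recursion Eq. (\[hmm2\]) is simple to evaluate numerically. Below is a minimal Python sketch (our own illustrative code and naming, not part of the original computation) that generates the truncated profile $A(r)$ and confirms that the even coefficients vanish:

```python
def vortex_profile_coeffs(a1=0.6, order=15):
    """Coefficients a_0..a_order of A(r) = sum_n a_n r^n from Eq. (hmm2):
    a_n = (f_{n-2} - a_{n-2}) / (n^2 - 1), with f_p = [A^3]^(p)."""
    a = [0.0] * (order + 1)          # a_0 = 0 on physical grounds (Psi = 0 at r = 0)
    a[1] = a1                        # a_1 is the free parameter of the profile
    for n in range(2, order + 1):
        p = n - 2
        # f_p = [A^3]^(p) = sum over i + j + k = p (i, j, k >= 1) of a_i a_j a_k
        f_p = sum(a[i] * a[j] * a[p - i - j]
                  for i in range(1, p + 1)
                  for j in range(1, p - i + 1)
                  if p - i - j >= 1)
        a[n] = (f_p - a[n - 2]) / (n ** 2 - 1)
    return a

coeffs = vortex_profile_coeffs()
# Even coefficients vanish, as noted in the text:
assert all(coeffs[n] == 0.0 for n in range(0, len(coeffs), 2))

def A(r, a=coeffs):
    """Truncated power-series vortex amplitude A(r)."""
    return sum(a_n * r ** n for n, a_n in enumerate(a))
```

For $a_{1}=0.6$, the value used in section VI, the first two non-zero terms are $A(r) \approx 0.6\,r - 0.075\,r^{3}$, since the recursion gives $a_{3} = -a_{1}/8$.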
Firstly, we would like the vortices to lie in the $(x,y)$ plane. We can do this by transforming our coordinates $x\rightarrow y$, $y\rightarrow z$ and $z\rightarrow x$. This will give us a vortex pointing along the $x$-axis, $\Psi = (y+iz)g(y^{2}+z^{2})\nonumber$. The vortex can now be rotated by angle $\alpha$ to the $x$-axis in the $(x,y)$ plane via $x \rightarrow x\cos\alpha -y\sin\alpha$ and $y \rightarrow y\cos\alpha +x\sin\alpha$. Finally, shifting the whole vortex in the $z$ direction by a distance $d$ via $z \rightarrow z-d$, we obtain $\Psi_{1} = [y\cos\alpha+x\sin\alpha+i(z-d)]
g((y\cos\alpha+x\sin\alpha)^{2}+(z-d)^{2})$. In a similar manner, $\Psi_2$ is a vortex at angle $-\alpha$ and shifted by $-d$ in the $z$ direction, $\Psi_{2} = [y\cos\alpha-x\sin\alpha+i(z+d)]
g((y\cos\alpha-x\sin\alpha)^{2}+(z+d)^{2})$.
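The two-vortex initial condition $\Psi_0 = \Psi_1\Psi_2$ can then be assembled directly from these coordinate transformations. The sketch below (again our own naming, reusing `vortex_profile_coeffs` from the previous sketch, and written with SymPy so that the result can be Taylor-expanded and fed to the recursion Eq. (\[t-rec\])) builds $\Psi_1$, $\Psi_2$ and their product for the parameters used later, $d=0.6$ and $\alpha=\pi/4$:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
d, alpha = sp.Rational(3, 5), sp.pi / 4       # d = 0.6 and alpha = pi/4

def g(rho2, coeffs):
    """g(r^2) = a_1 + a_3 r^2 + a_5 r^4 + ..., from the odd coefficients a_n."""
    return sum(coeffs[n] * rho2 ** ((n - 1) // 2) for n in range(1, len(coeffs), 2))

def vortex(sign, coeffs):
    """One vortex in the (x, y) plane, rotated by sign*alpha and shifted by sign*d in z."""
    u = y * sp.cos(alpha) + sign * x * sp.sin(alpha)   # rotated in-plane coordinate
    w = z - sign * d                                   # shifted axial coordinate
    return (u + sp.I * w) * g(u**2 + w**2, coeffs)

coeffs = vortex_profile_coeffs(a1=0.6, order=9)        # from the previous sketch
psi1, psi2 = vortex(+1, coeffs), vortex(-1, coeffs)
psi0 = sp.expand(psi1 * psi2)                          # initial condition Psi_0 = Psi_1 Psi_2
```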
RESULTS
=======
![The initial condition for the prototype vortex solution is constructed via an appropriate expansion for $A(r)$, Eq. (\[hmm2\]). Here we can see the expansion for $A(r)$ truncated at three different orders (dash-dot, dashed and solid lines) with $a_{1}=0.6$. At higher orders one would see the existence of a finite radius of convergence at $r=r_c$.[]{data-label="Ar"}](fig2c.eps){height="2in"}
![Sub-figures (a) to (f) show the evolution of two initially separated vortices in time. This realization is for $a_{1}=0.6$, $d=0.6$ and $\alpha=\pi/4$. The reconnection and separation events are clearly evident.[]{data-label="pics"}](fig3c.eps){width="4.5in"}
It would be time-consuming to expand the analytical solution, derived in the previous section, by hand. Thankfully, we can use a computer to perform the necessary algebra and to derive the huge number of terms the recursive formulae generate. What follows is an example solution of the reconnection of two initially separated vortices.
Firstly, we need to consider the validity and accuracy of our initial condition. Fig. \[Ar\] shows the prototype solution $A=A(r)$ for a single vortex, at various different orders. Increasing the order will obviously improve accuracy. However, one should note that at higher orders there is evidence of a finite radius of convergence $r_c$. This will restrict the spatial region of validity for our full $t$-dependent solution. Our prototype solution also has a dependence on $a_1$. In the following simulation we have chosen $a_1=0.6$ numerically so that the properties of $A(r)$ match those of an NLS vortex. It is evident that we cannot satisfy these properties completely (namely $\Psi\rightarrow 1$ as $r \rightarrow \infty$) as our power series diverges near $r_c$. Nevertheless, this does not present us with a problem if we restrict ourselves to considering the evolution of contours of $\Psi$, such as $|\Psi|< 1$, where $A(r)$ is realistically represented. Further, it should be noted that sound radiation could in principle be visualized in our solution by drawing contours of $|\Psi|$ close to unity. However, to have an accurate representation, we would need to take a very large number of terms in the series expansion; therefore, the study of sound in our model is somewhat harder than the analysis of the vortices themselves. Of course the validity of the full $t$-dependent solution will be restricted, in the spatial sense, by the initial condition’s region of convergence. The region of convergence will evolve, remaining finite during a finite interval of time (by the Cauchy-Kowalevski theorem), but then may shrink to zero afterwards.
We will now discuss an example solution. As we only wish to demonstrate this method, we will not consider a high order solution in this paper. In our example simulation below, we used Mathematica to perform the necessary algebra in generating a nonlinear solution up to a relatively low order in $\epsilon$. (One should note that although the prototype solution (\[squidge\]) for a single vortex has $a_{2n}=0$ for all $n$, our initial condition is made up of two vortices, i.e. two series multiplied together. Therefore, there will be cross terms of order $O(\epsilon^{6})$ in our initial condition.)
Our choice of parameters will be $d=0.6$ and $\alpha=\pi/4$. This corresponds to two vortices, initially separated by a distance 1.2, at right angles to each other. Fig. \[pics\] shows the evolution of the iso-surface $|\Psi|=0.1$ in time, demonstrating reconnection and then separation. Examining this solution in detail we can clearly see evidence of some of the properties mentioned earlier - that of a smooth reconnection (the absence of singularity) and the anti-parallel alignment of vortices prior to reconnection.
CONCLUSION
==========
In this paper we presented a local analysis of the NLS reconnection processes. We showed that many interesting properties of the reconnection can already be seen at the linear level of the solution. We derived a recursion formula Eq. (\[t-rec\]) that gives the fully nonlinear solution of the initial value problem in a finite volume around the reconnection point for a finite period of time. In fact, formula (\[t-rec\]) can describe a much wider class of problems. Of interest, for example, are solutions describing the creation or annihilation of NLS vortex rings. This process is easily described by considering vortex rings, at their creation/annihilation moment, as the intersection of a plane with the minimum of a paraboloid. Further, this method of expansion around a reconnection point can be used for other evolution equations, e.g. the Ginzburg-Landau equation. These applications will be considered in future work. We wish to thank Robert Indik, Nicholas Ercolani and Yuri Lvov for their many fruitful discussions.
[9]{}
N.N. Akhmediev, Opt. Quan. Elec. **30**, 535 (1998).
A.W. Snyder, L. Poladian and D.J. Mitchell, Opt. Lett. **17** (11) 789 (1992).
G.A. Swartzlander and C.T. Law, Phys. Rev. Lett. **69** (17), 2503 (1992).
N.G. Berloff and P.H. Roberts, J. Phys. A **32** (30), 5611 (1999).
V.L. Ginzburg and L.P. Pitaevskii, Sov. Phys. JETP **7**, 858 (1958).
K.W. Schwarz, Phys. Rev. B. **38** (4), 2398 (1988).
N. Ercolani and R. Montgomery, Phys. Lett. A. **180**, 402 (1993).
J. Koplik and H. Levine, Phys. Rev. Lett. **71** (9), 1375 (1993).
M. Leadbeater, T. Winiecki, D.C. Samuels, C.F. Barenghi and C.S. Adams, Phys. Rev. Lett. **86** (8), 1410 (2001).
C. Nore, M. Abid and M.E. Brachet, Phys. Rev. Lett. **78** (20), 3896 (1997).
B.V. Svistunov, Phys. Rev. B **52** (5), 3647 (1995).
C.F. Barenghi, D.C. Samuels, G.H. Bauer and R.J. Donnelly, Phys. Fluids **9** (9), 2631 (1997).
A. Pumir and E. Siggia, Phys. Fluids A **2** (2), 220 (1990).
A. Pumir and E. Siggia, Physica D **37**, 539 (1989).
A. Pumir and E. Siggia, Phys. Rev. Lett. **55** (17), 1749 (1985).
R. Courant and D. Hilbert, *Methods of mathematical physics: partial differential equations 2*, (Interscience, London, 1965).
|
Q:
Xamarin forms: Get the next and previous dates with a selected date
I am using the following codes for getting the next and previous day details with a selected day. I have 2 buttons named next and previous for getting the next previous dates.
//Saving the current date
string selectedDate = DateTime.Now.ToString("dd-MM-yyyy");
//Previous day
public void PrevButtonClicked(object sender, EventArgs args)
{
DateTimeOffset dtOffset;
if (DateTimeOffset.TryParse(selectedDate, null, DateTimeStyles.None, out dtOffset))
{
DateTime myDate = dtOffset.DateTime;
selectedDate = myDate.AddDays(-1).ToString("dd-MM-yyyy");
}
}
//Next day
public void NextButtonClicked(object sender, EventArgs args)
{
DateTimeOffset dtOffset;
if (DateTimeOffset.TryParse(selectedDate, null, DateTimeStyles.None, out dtOffset))
{
DateTime myDate = dtOffset.DateTime;
selectedDate = myDate.AddDays(+1).ToString("dd-MM-yyyy");
}
}
If I click the previous button I get 03-04-2019 as the result. If I press the previous button again I get 02-10-2019. The same happens with the next button. Based on the selected date, it should return the next or previous date.
This feature works perfectly on Android and Windows, but on iOS I get the wrong result with this code. Is this the correct way of achieving this feature?
A:
You can improve your code. The problem is likely that DateTimeOffset.TryParse with a null format provider parses the string using the device's current culture, so a date stored as "dd-MM-yyyy" text can be read back differently on iOS. If you keep the date as numeric year/month/day values you never need to re-parse the string. I created a sample with a Label to display the current date.
in xaml
<StackLayout VerticalOptions="CenterAndExpand" HorizontalOptions="CenterAndExpand" Orientation="Horizontal">
<Button Text="Preview" Clicked="PrevButtonClicked"/>
<Label x:Name="dateLabel" TextColor="Red" WidthRequest="100"/>
<Button Text="Next" Clicked="NextButtonClicked"/>
</StackLayout>
in code Behind
public partial class MainPage : ContentPage
{
int year, month, day;
public MainPage()
{
InitializeComponent();
dateLabel.Text = DateTime.Now.ToString("dd-MM-yyyy");
year = DateTime.Now.Year;
month = DateTime.Now.Month;
day= DateTime.Now.Day;
}
    private void PrevButtonClicked(object sender, EventArgs e)
{
DateTime nowDate = new DateTime(year, month, day);
var previewDate = nowDate.AddDays(-1);
dateLabel.Text = previewDate.ToString("dd-MM-yyyy");
year = previewDate.Year;
month = previewDate.Month;
day = previewDate.Day;
}
    private void NextButtonClicked(object sender, EventArgs e)
{
DateTime nowDate = new DateTime(year, month, day);
var nextDate = nowDate.AddDays(+1);
dateLabel.Text = nextDate.ToString("dd-MM-yyyy");
year = nextDate.Year;
month = nextDate.Month;
day = nextDate.Day;
}
}
|
Prediction of Terpenoid Toxicity Based on a Quantitative Structure-Activity Relationship Model.
Terpenoids, including monoterpenoids (C10), norisoprenoids (C13), and sesquiterpenoids (C15), constitute a large group of plant-derived naturally occurring secondary metabolites with highly diverse chemical structures. A quantitative structure-activity relationship (QSAR) model to predict terpenoid toxicity and to evaluate the influence of their chemical structures was developed in this study by assessing in real time the toxicity of 27 terpenoid standards using the Gram-negative bioluminescent bacterium Vibrio fischeri. Under the test conditions, at a concentration of 1 µM, the terpenoids showed a toxicity level lower than 5%, with the exception of geraniol, citral, (S)-citronellal, geranic acid, (±)-α-terpinyl acetate, and geranyl acetone. Moreover, the standards tested displayed a toxicity level higher than 30% at concentrations of 50-100 µM, with the exception of (+)-valencene, eucalyptol, (+)-borneol, guaiazulene, β-caryophyllene, and linalool oxide. Regarding the functional group, terpenoid toxicity was observed in the following order: alcohol > aldehyde ~ ketone > ester > hydrocarbons. The CODESSA software was employed to develop QSAR models based on the correlation of terpenoid toxicity and a pool of descriptors related to each chemical structure. The QSAR models, based on t-test values, showed that terpenoid toxicity was mainly attributed to geometric (e.g., asphericity) and electronic (e.g., maximum partial charge for a carbon (C) atom, i.e., Zefirov's partial charge (PC)) descriptors. Statistically, the most significant overall correlation was the four-parameter equation, with training and test correlation coefficients higher than 0.810 and 0.535, respectively, and a squared cross-validation coefficient (Q2) higher than 0.689. According to the obtained data, the QSAR models are suitable and rapid tools to predict terpenoid toxicity in a diversity of food products.
The court said the four did not receive a fair hearing at their original trial in 1994 when they faced charges of collaborating with Kurdish rebels.
It also overturned the convictions of the four - who include award-winning rights activist Leyla Zana.
The release of the four last month was welcomed by the European Union and human rights groups.
Ms Zana, Orhan Dogan, Hatip Dicle and Selim Sadak were jailed for their alleged ties to the now-defunct Kurdistan Workers Party (PKK), which advocated a violent campaign for Kurdish self-rule.
Defence lawyer Hamit Geylani welcomed the appeal court ruling saying: "The political, anti-democratic verdict which fell foul of justice and law has been overturned."
No date has yet been set for the new trial.
Earlier this week, police pressed for new charges to be brought against the four for making separatist speeches at rallies in south-eastern Turkey last month.
They were also accused of speaking Kurdish at the rally, in violation of Turkish law.
Most restrictions on the use of the Kurdish language in Turkey have been lifted in recent years, but speeches in Kurdish are still forbidden under Turkish laws governing elections and political parties.
Symbolic
Ms Zana has become a symbol of both Kurdish resistance and Turkey's flawed judicial process since she was sent to jail.
She and her colleagues were retried in 2003, but observers said the retrial also suffered major procedural flaws.
The BBC's Jonny Dymond, in Istanbul, says there are hopes that the third trial the four now face will be different.
They will not be kept in prison during the trial and the strong presumption is that given the reforms to Turkey's courts over the past few years, they will have a better chance of presenting their case, he says.
The retrial has an important symbolic value as Turkey is keen to show the European Union that it is fit to start membership negotiations.
Our correspondent says an open trial which is demonstrably fair would go a long way towards assuaging European concerns about Turkey's treatment of both its Kurdish minority and also those who face its judicial system. |
Author Archive for J. Angelo Racoma
J. Angelo Racoma is the Editor in Chief of Blog Tutorials. As CMOTC of Splashpress Media (that means Chief Mover of the Cheese to the uninitiated), he acts in various capacities managing the creative, technical and administrative aspects of the network. He also serves as Assistant Editor of the Blog Herald, and blogs about technology and other stuff on his personal blog, the J Spot.
Single Big Blog vs. Numerous Niche Blogs? Or Both?
Posted by J. Angelo as Blog Tutorial, Tips, Community Building, SEO Features, Editorials
Over at Performancing, Ahmed Bilal posts about the argument between running one big blog about a wide array of topics, or numerous blogs that focus on specific niches. He presents the pros and cons to both. ...
Illacrimo Theme for WordPress Released
Posted by J. Angelo as Design and Themes, Design
Bloggy Network has recently announced via LifeSpy the release of the Illacrimo theme for WordPress.
Something Big is Coming Soon!
Posted by J. Angelo as Blog Tutorials News
There are going to be some shake ups here on Blog Tutorials these next few days. We're making some adjustments. Something big is coming soon, and I'm personally quite excited about it. We're refocusing our energies into converting most of our blogs to become more community-oriented. Somewhere along the way we got tired of niche blogging, and we thought of throwing the concept of the long tail out the window. Well, we're not exactly getting rid of the concept of niche blogging, but we discovered that if we spread ourselves too thin, then we run the risk of losing out
Introduction {#Sec1}
============
Muscular dystrophies are a heterogeneous group of genetic disorders that cause progressive muscle weakness and wasting. New therapies for muscular dystrophies, and better assays to find these therapies, are critical unmet needs. An obstacle to developing new treatments for muscular dystrophies has been the lack of appropriate animal models that enable assays for efficient screening of new candidate drugs. Current methods of measuring drug activity involve biochemical analysis of muscle tissue, which is expensive, time consuming, and may require large numbers of animals to achieve the necessary statistical power. Development of new model systems that enable rapid in vivo detection of pharmacodynamic activity is essential to speed drug discovery.
Myotonic dystrophy (dystrophia myotonica; DM) is the most common muscular dystrophy in adults, affecting \~1 in 7500^[@CR1]^. The genetic cause of DM type 1 (DM1) is a CTG repeat expansion (CTG^exp^) in the 3′ untranslated region of the DM protein kinase (*DMPK*) gene. Expression of mutant *DMPK*-CUG^exp^ mRNA in muscle results in myotonia (delayed relaxation of muscle fiber contraction), histopathologic myopathy, and progressive muscle wasting. No current treatment alters the disease course. Symptoms in DM1 arise from a novel RNA-mediated disease mechanism that involves the inhibition of alternative splicing regulator proteins by mutant *DMPK*-CUG^exp^ transcripts, resulting in inappropriate expression of developmental splice isoforms in adult tissues^[@CR2]--[@CR4]^. In DM1 patients, RNA splice variants serve as biomarkers of disease activity^[@CR5]^, while in DM1 mice they also serve as sensitive indicators of therapeutic drug response^[@CR6]--[@CR8]^, which can be translated to clinical care.
The discovery and development of genetically encoded fluorescent proteins has enabled multicolor imaging of biological processes such as differential gene expression and protein localization in living cells^[@CR9],[@CR10]^. Genetic modifications of green fluorescent protein (GFP), derived from the jellyfish *Aequorea victoria*, and red fluorescent protein DsRed, derived from the coral *Discosoma species*, have improved the brightness and photostability of these proteins. A previous study capitalized on an unusual feature of the DsRed gene, which is that it has two open reading frames, to demonstrate that GFP and DsRed can be used together in a single bi-chromatic construct to quantify alternative splicing events within individual cells or mixed cell populations using flow cytometry or fluorescence microscopy^[@CR11]^.
Here we modify and optimize this bi-chromatic reporter for in vivo use, generate a novel therapy reporter (TR) transgenic mouse model of DM1 that expresses the optimized construct, design and custom-build an in vivo fluorescence spectroscopy system for rapid measurements of splicing outcomes in live TR mice, and test the pharmacodynamic activity of a novel ligand conjugated antisense (LICA) oligonucleotide that is designed to enhance drug uptake into target tissues as compared to the unconjugated parent ASO.
Results {#Sec2}
=======
Bi-chromatic alternative splicing therapy reporter {#Sec3}
--------------------------------------------------
In DM1, a hallmark of cellular dysfunction is inappropriate expression of developmental splice isoforms in adult tissues. The Human Skeletal Actin - Long Repeat (*HSA*^LR^) transgenic mouse model^[@CR3]^ was designed to test the hypothesis that expanded CUG repeat mRNA is toxic for muscle cells. In this model, human skeletal actin (*ACTA1*) transcripts that contain \~220 CUG repeats are expressed in muscle tissue, resulting in myotonia (delayed muscle relaxation due to repetitive action potentials) and histopathologic signs of myopathy^[@CR3]^. Muscle tissue of HSA^LR^ mice features mis-regulated alternative splicing that is highly concordant with human DM1 muscle tissue, including preferential exclusion of *Atp2a1* exon 22^[@CR2],[@CR5]^.
A previously published fluorescence bi-chromatic reporter construct^[@CR11]^ used a ubiquitous CMV promoter to mediate activity of the construct, and splicing of a chicken cardiac troponin T (*Tnn2*) exon 5 minigene to determine the reading frame. Only one of these reading frames encodes DsRed. By placing the DsRed cDNA adjacent to the GFP cDNA, two mutually exclusive reading frames resulted in production of either DsRed when exon 5 was included, or GFP when exon 5 was excluded. The GFP reading frame has a long *N*-terminal peptide encoded by the alternate open reading frame of DsRed, but fluorescence of GFP remained bright. To determine whether this construct is viable as an in vivo reporter of alternative splicing derangements in DM1 muscle tissue, we injected and electroporated plasmid DNA^[@CR12]^ that contains the construct into tibialis anterior (TA) muscles in HSA^LR^ mice and WT controls. In vivo DsRed/GFP fluorescence of injected muscles appeared similar in HSA^LR^ and WT mice, with exon 5 exclusion predominating in both (Supplementary Fig. [1](#MOESM1){ref-type="media"}).
To optimize the bi-chromatic construct for in vivo expression in skeletal muscle and enable detection of differential splicing in DM1 muscle, we modified the construct in two ways. First, we replaced the chicken *Tnnt2* minigene with a human *ATP2A1* exon 22 minigene. Of the dozens of transcripts mis-spliced in DM1 muscle, *ATP2A1* exon 22 was chosen for this reporter because it has the largest change in DM1 mouse muscle that we have observed, and is highly responsive to therapeutic ASOs, as measured by traditional RT-PCR analysis of muscle tissue samples^[@CR7],[@CR8]^. To bypass a stop codon and enable shift of the reading frame^[@CR13]^, exon 22 of the *ATP2A1* minigene was modified by site-directed mutagenesis to induce a single base pair deletion so that it contains 41 base pairs instead of 42. Second, to restrict expression to skeletal muscle, we replaced the CMV promoter with a human skeletal actin (HSA) promoter^[@CR14]^. In this system, inclusion of *APT2A1* minigene exon 22 (high in WT and ASO-treated DM1 mouse muscle) gives rise to a transcript that produces a red fluorescent protein (DsRed), while exclusion of exon 22 (high during muscle regeneration and in adult DM1 muscle) shifts the reading frame from DsRed to GFP, resulting in a transcript that produces green fluorescent protein (Fig. [1a](#Fig1){ref-type="fig"}). By design, measurement of the DsRed/GFP ratio by quantitative imaging enables determination of therapeutic response in DM1 mice. Intramuscular injection of HSA-therapy reporter (TR) plasmid DNA was associated with a significantly higher DsRed/GFP ratio by in vivo imaging in WT mice than in HSA^LR^ transgenic mice (Supplementary Fig. [2](#MOESM1){ref-type="media"}). RT-PCR analysis of skeletal muscle tissue RNA also identified a higher percent inclusion of exon 22 inclusion in WT muscle than *HSA*^LR^ muscle, confirming that the TR construct splice switch is active in vivo (Supplementary Fig. [2](#MOESM1){ref-type="media"}). In cryosections of injected muscles, DsRed expression was higher in WT than in HSA^LR^ muscle and localized to the cytoplasm, while GFP expression was higher in HSA^LR^ than in WT and, unexpectedly, localized to myonuclei instead of cytoplasm (Supplementary Fig. [2](#MOESM1){ref-type="media"}).Fig. 1Design and validation of the therapy reporter (TR) splicing construct. **a** TR construct design. **b** To test the splice switch under conditions of muscle regeneration, we induced acute injury by injecting 1.2% barium chloride^[@CR17]^ into the right gastrocnemius muscles of TR transgenic mice (*N* = 8) and monitored quantitative DsRed/GFP ratios by serial fluorescence microscopy of live mice. The left gastrocnemius was untreated. Shown are representative images of DsRed and GFP fluorescence on Days 7 and 21 after acute injury. Mice are prone. Intensity range = 0--4095 grayscale units. Bars = 4 mm. **c** Quantitative DsRed/GFP fluorescence ratios in untreated (black circles) and injured (blue triangles) gastrocnemius muscles by serial in vivo imaging after acute injury. Error bars indicate mean ± s.e.m. Non-linear regression. The imaging results are representative of four independent experiments. **d** Alternative splicing analysis by RT-PCR of transgene (Tg) and endogenous (Endo) *Atp2a1* exon 22 on 0, 5, 7, 12, and 21 days after injury (*N* = 2 mice each time point). WT wild-type control, L DNA ladder, bp base pairs. **e** Quantitation of splicing shown in **d**, displayed as % exon 22 inclusion. Error bars indicate mean ± s.e.m. 
Non-linear regression. **f** DsRed and GFP fluorescence in gastrocnemius muscle tissue sections 7 and 21 days after acute injury. Fluorescence images are the extended focus of deconvolved Z series. Fluorescence intensity range = 0--12,000 grayscale units. Alexa 647-wheat germ agglutinin (WGA; pseudocolored yellow) and DAPI (blue) highlight muscle fibers and nuclei, respectively. Merge DsRed (red) + GFP (green) + WGA + DAPI, H&E hematoxylin and eosin. Bars = 50 μm
Novel TR transgenic mouse model {#Sec4}
-------------------------------
Transgenic mice were generated by microinjection of a linear HSA-TR construct DNA (see Methods). Expression of the TR transgene was high by RT-PCR in all skeletal muscles examined, and low or absent in heart and liver, while the exon 22 splicing pattern of the TR-*ATP2A1* transgene (Tg) appeared similar to the exon 22 splicing pattern of endogenous mouse *Atp2a1* in all muscles examined except the flexor digitorum brevis (Supplementary Fig. [3](#MOESM1){ref-type="media"}). Wild-type TA muscle injected with TR plasmid (TR-P) served as a control.
Fluorescence imaging of muscle regeneration {#Sec5}
-------------------------------------------
After muscle injury, muscle stem cells, termed satellite cells, proliferate and form muscle precursor cells that differentiate and eventually form new mature muscle fibers. During the differentiation and maturation process, splicing of several mRNAs transitions from developmental to fully mature adult isoforms^[@CR15]^. Alternative splicing of *ATP2A1* exon 22 is developmentally regulated, being preferentially excluded during muscle development or regeneration of adult muscle, and preferentially included in normal adult muscle^[@CR2],[@CR16]^. To determine whether exon 22 in the TR transgene *ATP2A1* also is developmentally regulated, and to monitor muscle regeneration, we injected the right gastrocnemius (gastroc) of homozygous TR transgenic mice (Supplementary Fig. [3](#MOESM1){ref-type="media"}; Supplementary Table [1](#MOESM1){ref-type="media"}) with barium chloride (BaCl~2~) to induce muscle injury^[@CR17]^, and measured fluorescence by serial in vivo imaging under general anesthesia. Quantitative DsRed/GFP measurements in treated muscles were lowest at Day 7 and returned to baseline by Day 21 (Fig. [1b, c](#Fig1){ref-type="fig"}; Supplementary Fig. [4](#MOESM1){ref-type="media"}; Supplementary Table [2](#MOESM1){ref-type="media"}). Splicing by RT-PCR revealed that the peak exon 22 exclusion of both transgene *ATP2A1* and mouse *Atp2a1* was Day 5 after injury, with a return to the baseline splicing pattern by Day 12, similar to a previous report of mouse *Atp2a1* splicing after muscle injury by cardiotoxin^[@CR16]^, and indicating that the splicing recovery, as measured by DsRed/GFP imaging, is delayed over actual splicing outcomes (Fig. [1c--e](#Fig1){ref-type="fig"}). Quantitative imaging of gastrocnemius muscle cryosections revealed high GFP expression at Day 7, during active muscle regeneration, and low GFP expression by Day 21, when formation of new muscle fibers is largely complete (Fig. [1f](#Fig1){ref-type="fig"}). DsRed fluorescence was concentrated in muscle fiber cytoplasm, while GFP fluorescence was confined to myonuclei, similar to the localization after intramuscular injections of plasmid DNA (Fig. [1f](#Fig1){ref-type="fig"}).
Bi-transgenic mouse model of DM1 {#Sec6}
--------------------------------
To test the TR construct as a splicing biomarker for DM1, we crossed TR transgenic mice with HSA^LR^ transgenic mice (both on the FVB background) to produce homozygous TR^+/+^;HSA^LR+/+^ bi-transgenic mice (Supplementary Fig. [3](#MOESM1){ref-type="media"}; Supplementary Table [1](#MOESM1){ref-type="media"}). Quantitative in vivo fluorescence imaging of gastrocnemius and lumbar paraspinal muscles of TR;HSA^LR^ bi-transgenic mice revealed a low DsRed/GFP ratio, as compared to TR single transgenic mice (Fig. [2a, b](#Fig2){ref-type="fig"}). RT-PCR analysis of muscle tissue confirmed the splice switch in TR;HSA^LR^ mice, while higher GFP expression also was evident in bi-transgenic mice by Western blot (Fig. [2c--e](#Fig2){ref-type="fig"}). Quantitative fluorescence microscopy of muscle cryosections demonstrated bright DsRed and low GFP expression in TR single transgenic, and low DsRed and high GFP expression in TR;HSA^LR^ bi-transgenic mice (Fig. [2f](#Fig2){ref-type="fig"}). Muscle histology in TR transgenic mice was similar to wild-type, and TR;HSA^LR^ bi-transgenic mice appeared similar to *HSA*^LR^ single transgenic mice (Fig. [2f](#Fig2){ref-type="fig"}).Fig. 2TR;HSA^LR^ bi-transgenic mice as a therapy reporter model for DM1. In the HSA^LR^ transgenic mouse model of DM1, exclusion of *Atp2a1* exon 22 is upregulated. To validate the TR-*ATP2A1* splice switch as a reporter for DM1, we crossed the TR transgenic with the *HSA*^LR^ transgenic model to create TR;*HSA*^LR^ bi-transgenic mice. **a** In vivo imaging of DsRed and GFP fluorescence in TR transgenic and TR;*HSA*^LR^ bi-transgenic mice (IVIS Spectrum) under general anesthesia in the prone position. The scale shows radiant efficiency (photons/second/cm2). Bars = 5 mm. **b** DsRed and GFP fluorescence measured in regions of interest in bilateral gastrocnemius (Gast) and paraspinal (P-sp) muscles of TR (*N* = 4) and TR;HSA^LR^ mice (*N* = 3). Error bars indicate mean ± s.e.m. \*\*\*\**P* \< 0.0001 TR vs. TR;*HSA*^LR^ (one-way ANOVA). **c** Splicing analysis in gastrocnemius muscle by RT-PCR of the TR transgene (Tg) exon 22 (ex 22) and endogenous *Atp2a1* ex 22. *N* = 3 each group. WT FVB wild-type. **d** Quantitation of splicing results in **c**, displayed as the percentage of exon 22 inclusion. Error bars indicate mean ± s.e.m. \*\*\*\**P* \< 0.0001 (two-way ANOVA). **e** Western blot of GFP protein expression in gastrocnemius and paraspinal muscles of TR and TR;HSA^LR^ mice (*N* = 3 each group), with GAPDH as loading control. kD kilodaltons. **f** Representative DsRed and GFP expression in TR (upper row) and TR;HSA^LR^ (lower row) gastrocnemius muscle cryosections. Fluorescence images are the extended focus of deconvolved Z-series. Fluorescence intensity range = 0--5080 grayscale units. Laminin (yellow) and DAPI (blue) highlight muscle fibers and nuclei, respectively. Merge DsRed (red) + GFP (green) + laminin + DAPI, H&E hemotoxylin and eosin. Bars = 50 μm
TR construct as a pharmacodynamic biomarker in DM1 mice {#Sec7}
-------------------------------------------------------
Antisense oligonucleotides (ASOs) are modified nucleic acid molecules that bind to RNA through Watson--Crick bonds. The chemistry and binding target of each ASO determine therapeutic effects^[@CR18]^. In DM1 transgenic mice, ASOs that target the RNA-mediated disease process can reverse RNA mis-splicing, eliminate myotonia, and slow myopathy progression by reducing the pathogenic effects of expanded CUG repeat (CUG^exp^) RNA^[@CR7],[@CR8]^. One approach for treatment of DM1 involves ASOs or small molecules that are designed to bind directly to the CUG^exp^ RNA and reduce pathogenic interactions with MBNL proteins by steric inhibition^[@CR7],[@CR19]^. To validate the TR construct as an indicator of therapeutic ASO activity, we injected gastrocnemius muscles of TR;HSA^LR^ bi-transgenic mice with saline or 20 μg of a dendrimer-modified CAG repeat morpholino ASO (CAG25)^[@CR7]^ (Fig. [3](#Fig3){ref-type="fig"}). Serial in vivo imaging demonstrated an increase of the DsRed/GFP fluorescence ratio in ASO-treated muscles by Day 3 after injection, which persisted for at least several weeks after injection (Fig. [3a, b](#Fig3){ref-type="fig"}). Splicing analysis by RT-PCR of muscle tissue harvested at Day 49 confirmed correction of the splicing in muscles treated with ASOs (Fig. [3c, d](#Fig3){ref-type="fig"}).Fig. 3Imaging therapeutic antisense oligonucleotide drug activity in vivo. We administered a CAG repeat morpholino (CAG) ASO designed to bind and neutralize the pathogenic effects of expanded CUG repeat RNA^[@CR7]^ by intramuscular injection of the right gastrocnemius in TR;HSA^LR^ mice (*N* = 3). The left gastrocnemius was injected with saline or was untreated. **a** Representative images of quantitative in vivo DsRed and GFP fluorescence under general anesthesia 28 days after injection of the CAG ASO. Fluorescence intensity range = 0--4095 grayscale units. Bars = 4 mm. **b** Quantitative DsRed/GFP fluorescence ratios in muscles of untreated (black circles) and CAG ASO-treated (blue triangles) mice by serial in vivo imaging. Error bars indicate mean ± s.e.m. Non-linear regression. **c** RT-PCR analysis of transgene and endogenous *Atp2a1* exon 22 in untreated (−) or CAG ASO-treated ( + ) muscles 49 days after injection. E empty lane, L DNA ladder, bp base pairs. **d** Quantitation of splicing results in **c**. Error bars indicate mean ± s.e.m. \*\*\**P* = 0.0002; \**P* = mean difference 14.2, 95% CI of difference 1.0--27.5 (two-way ANOVA). These results are representative of four independent experiments
In vivo spectroscopy to monitor splicing correction {#Sec8}
---------------------------------------------------
In vivo molecular imaging using a fluorescence microscope or an IVIS system allows rapid visualization of heterogeneous gene expression in muscle tissue. Laser-based spectroscopy using dedicated optical fibers for excitation and emission is an alternative approach for in vivo fluorescence measurements that may offer a more sensitive ratiometric analysis than imaging techniques, due in part to more accurate separation of overlapping emission spectra, such as GFP and DsRed, in the presence of an autofluorescence background that is typical of biological tissues. The use of a dedicated spectroscopy system with laser sources for excitation also allows for the detection of relatively small quantities of the fluorophores being expressed and more accurate tracking of disease progression or regression, even at very early and late stages.
A laser-excitation-based fluorescence spectroscopy system was constructed that is similar to the general design of previously reported spectroscopy instruments^[@CR20]--[@CR22]^. For in vivo detection of emission spectra, the system includes a 488 nm laser (GFP), a 532 nm laser (DsRed), separate optical fibers for excitation and emission detection, and a portable spectrometer (Fig. [4a, b](#Fig4){ref-type="fig"}; Supplementary Table [3](#MOESM1){ref-type="media"}). A broadband white light source enables white light spectroscopy for correction of detected fluorescence spectra for any effects of varying tissue optical properties.^[@CR23]^ This correction is vital, as it ensures that changes in detected fluorescence are due only to changes in fluorophores rather than the effects of changes in the intervening muscle tissue due to ASO or other drug treatments. Spectral analysis using singular value decomposition (SVD) isolates the GFP and DsRed spectra and removes autofluorescence^[@CR24]--[@CR26]^, thereby enabling a rigorous estimate of the relative amplitudes of GFP and DsRed. The entire spectroscopy system is enclosed and controlled via LabVIEW software.Fig. 4Calibration of in vivo fluorescence spectroscopy measurements. To calibrate our custom built spectroscopy system, we injected DsRed plasmid DNA into the left TA and GFP plasmid DNA into the right TA of FVB WT mice (*N* = 2), and measured fluorescence by microscopy and spectroscopy. **a** Diagram of the spectroscopy system. Three light sources are used: a 488 nm laser for excitation of GFP, a 532 nm laser for excitation of DsRed, and white light for measurement of reflectance. Separate optical fibers are used for detection of fluorescence excitation and emission. **b** Photograph of a mouse under general anesthesia (isoflurane) in the prone position for spectral measurements of the paraspinal or gastrocnemius muscles. **c** Brightfield and fluorescence microscopy images of the TA muscles in an anesthetized mouse 7 days after injection of each plasmid. The mouse is supine. Bars = 4 mm. **d** Representative uncorrected fluorescence spectra using the 488 nm laser (upper left) and the 532 nm laser (upper middle), and white light reflectance (upper right) in TA muscles injected with GFP (black circles) and DsRed (blue triangles) plasmids, or received no plasmid (yellow diamonds). Corrected fluorescence spectra (lower left and lower middle) are calculated by subtraction of background values, division by reflectance values, normalization of laser power, and the use of singular value decomposition (SVD) to separate the GFP and DsRed spectra and remove autofluorescence (Methods). DsRed/GFP ratios (lower right) are calculated automatically by LabView software using peak SVD corrected DsRed emission with the 532 nm laser and GFP emission with the 488 nm laser. \*\*\**P* = 0.0002; one-way ANOVA. **e** We used spectroscopy to measure in vivo fluorescence in gastrocnemius muscles of TR;HSA^LR^ bi-transgenic (black circles) (*N* = 4 mice; 8 muscles total), TR transgenic (blue triangles) (*N* = 5 mice; 10 muscles total), and WT control (yellow diamonds) (*N* = 1 mouse; 2 muscles total). Shown are representative uncorrected fluorescence, reflectance, and corrected fluorescence spectra of each, and DsRed/GFP ratios for all muscles examined. For comparison, in vivo fluorescence microscopy data are shown in Supplementary Figure [5](#MOESM1){ref-type="media"}. \*\*\*\**P* \< 0.0001; one-way ANOVA
To test and calibrate the spectroscopy system for in vivo measurements, we injected TA muscles of wild-type mice with GFP or DsRed plasmid DNA driven by the same muscle-specific HSA promoter^[@CR14]^ that was used for expression of the TR transgene (Fig. [4c](#Fig4){ref-type="fig"}; Supplementary Fig. [5](#MOESM1){ref-type="media"}). Spectral scans of each injected TA muscle using 488 nm laser, 532 nm laser, white light, and background measurements, without laser power correction, were obtained sequentially using custom LabVIEW software. Automated removal of background and autofluorescence from the raw data, and correction for reflectance and laser power, enabled precise determination of GFP and DsRed spectra (Fig. [4d](#Fig4){ref-type="fig"}). DsRed/GFP values were determined automatically by dividing the fitted spectral amplitude of corrected DsRed fluorescence using the 532 nm laser by the corrected GFP fluorescence using the 488 nm laser (Fig. [4d](#Fig4){ref-type="fig"}). Using the pure DsRed and GFP spectra obtained using the plasmid DNA injections, we measured spectra in gastrocnemius and lumbar paraspinal muscles of TR transgenic, TR;HSA^LR^ bi-transgenic, and wild-type mice (Fig. [4e](#Fig4){ref-type="fig"}; Supplementary Fig. [5](#MOESM1){ref-type="media"}). DsRed/GFP measurements by spectroscopy appeared similar to values obtained by fluorescence microscopy and showed strong correlation (Fig. [4d, e](#Fig4){ref-type="fig"}; Supplementary Fig. [5](#MOESM1){ref-type="media"}).
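The correction and unmixing steps described above can be summarized compactly. The following is a minimal NumPy sketch of the idea (our own illustrative reconstruction, not the LabVIEW implementation; the array names, the ordering of the basis columns, and the normalization choices are assumptions). Each corrected emission spectrum is fit as a linear combination of pure DsRed, pure GFP, and autofluorescence basis spectra by least squares, which `numpy.linalg.lstsq` solves internally via an SVD:

```python
import numpy as np

def correct_spectrum(raw, background, reflectance, laser_power):
    """Background-subtract, divide by the white-light reflectance, and normalize laser power."""
    return (raw - background) / (reflectance * laser_power)

def fit_amplitudes(spectrum, basis):
    """Least-squares fit of a corrected spectrum to pure-component basis spectra.

    basis: 2-D array with one column per component, assumed here to be ordered
    [pure DsRed, pure GFP, autofluorescence]. Returns the fitted amplitudes.
    """
    amplitudes, *_ = np.linalg.lstsq(basis, spectrum, rcond=None)
    return amplitudes

def dsred_gfp_ratio(corrected_532, corrected_488, basis_532, basis_488):
    """Ratio of the fitted DsRed amplitude under 532 nm excitation to the
    fitted GFP amplitude under 488 nm excitation."""
    dsred_amp = fit_amplitudes(corrected_532, basis_532)[0]   # column 0 = DsRed
    gfp_amp = fit_amplitudes(corrected_488, basis_488)[1]     # column 1 = GFP
    return dsred_amp / gfp_amp
```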
Comparison of in vivo fluorescence measurements {#Sec9}
-----------------------------------------------
Systemic delivery of an ASO designed to induce RNase H cleavage of pathogenic transcripts can correct splicing defects and reverse the phenotype in HSA^LR^ mice by targeting *ACTA1*-CUG^exp^ transcripts^[@CR8]^. To compare in vivo fluorescence microscopy and spectroscopy measurements of ASO activity, we treated TR;HSA^LR^ mice with the ASO from the previous study^[@CR8]^, 445236, by subcutaneous injection and performed weekly fluorescence measurements using each method. Based on DsRed/GFP values, a therapeutic effect was evident in gastrocnemius and lumbar paraspinal muscles as early as Day 14 (four total doses) using each method, and became progressively more pronounced, continuing even after the eighth and final dose on Day 25 (Fig. [5a, b](#Fig5){ref-type="fig"}; Supplementary Fig. [6](#MOESM1){ref-type="media"}). Although the precise DsRed/GFP values obtained by spectroscopy and microscopy were different, the measurements showed a strong correlation throughout the treatment course (Fig. [5c](#Fig5){ref-type="fig"}). RT-PCR analysis of muscles harvested at Day 42 confirmed exon 22 splicing correction of both the transgene *ATP2A1* and endogenous *Atp2a1* (Fig. [5d, e](#Fig5){ref-type="fig"}). Automated spectroscopy measurements of DsRed/GFP ratios saved considerable analysis time over manual drawing of ROIs, background subtraction, and calculation of ratios that are required with imaging (Table [1](#Tab1){ref-type="table"}).

Fig. 5 Comparison of in vivo fluorescence microscopy and spectroscopy. We treated TR;HSA^LR^ bi-transgenic mice with saline or ASO 445236 (*N* = 2 each) by subcutaneous injection (25 mg/kg twice weekly for 4 weeks)^[@CR8]^ and monitored DsRed and GFP fluorescence using in vivo spectroscopy and microscopy for 6 weeks. **a** Representative corrected fluorescence spectra, excited at 488 nm and 532 nm in gastrocnemius (left) and lumbar paraspinal (right) muscle 42 days after beginning treatment with saline (black circles) or ASO (blue triangles). **b** Serial in vivo fluorescence spectroscopy and microscopy measurements through 42 days showing ratio of fitted DsRed fluorescence magnitude to fitted GFP fluorescence magnitude in gastrocnemius and lumbar paraspinal muscles of TR;HSA^LR^ bi-transgenic mice treated with saline (black circles) or ASO (blue triangles). Error bars indicate mean ± s.e.m. Non-linear regression. **c** Correlation of DsRed/GFP values as measured by spectroscopy (*x*-axis) and microscopy (*y*-axis) at each time point. The correlation coefficient *r* and *P* value are shown. **d** Exon 22 splicing analysis by RT-PCR of transgene (Tg) and endogenous mouse *Atp2a1* in gastrocnemius and lumbar paraspinal muscles of saline- and ASO-treated mice at Day 42. L DNA ladder, bp base pairs. **e** Quantitation of the splicing results in **d**. Error bars indicate mean ± s.e.m. \*\*\*\**P* \< 0.0001, saline vs. ASO; two-way ANOVA

Table 1 The time required, in minutes (min), to quantitate the DsRed/GFP ratio using spectroscopy or microscopy in left and right gastrocnemius and left and right lumbar paraspinal muscles of *N* = 4 mice (see Fig. [5](#Fig5){ref-type="fig"})

| Imaging day | Spectroscopy (min) | Microscopy: user 1 (min) | Microscopy: user 2 (min) | Microscopy: mean (min) |
|---|---|---|---|---|
| 28 | 0 | 34 | 27 | 30.5 |
| 35 | 0 | 29 | 26 | 27.5 |
| 42 | 0 | 26 | 21 | 23.5 |

All spectroscopy values are zero because the data acquisition software (LabVIEW) corrects for laser power and acquisition time, subtracts background fluorescence, and calculates each ratio automatically at the end of each scan.
After acquisition of microscopy images, quantitation of fluorescence requires manual drawing of regions of interest (ROIs), export of the data from each image into a comma separated values (CSV) file, correction for exposure time, subtraction of background fluorescence, and calculation of DsRed/GFP fluorescence ratio (see Supplementary Fig. [3](#MOESM1){ref-type="media"}). Values in minutes for two separate users, and the mean of the two, are shown for the images obtained on Days 28, 35, and 42.
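For comparison, once the ROI pixel values have been exported, the manual microscopy workflow summarized in Table 1 reduces to a few array operations; a minimal sketch (our own, with assumed variable names) is:

```python
import numpy as np

def roi_ratio(dsred_roi, gfp_roi, dsred_bg, gfp_bg, dsred_exposure_s, gfp_exposure_s):
    """DsRed/GFP ratio from ROI pixel arrays: each channel is corrected for its
    exposure time and background before taking the ratio of the mean intensities."""
    dsred = (np.mean(dsred_roi) - np.mean(dsred_bg)) / dsred_exposure_s
    gfp = (np.mean(gfp_roi) - np.mean(gfp_bg)) / gfp_exposure_s
    return dsred / gfp
```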
Novel ligand-conjugated antisense oligonucleotide {#Sec10}
-------------------------------------------------
Although ASOs are effective in skeletal muscle, drug activity is less robust than in other tissues such as liver, which may be due to a combination of poor tissue bioavailability and insufficient potency^[@CR27]^. Ligand-conjugated antisense (LICA) chemistry adds specific conjugates to ASOs that are designed to increase drug uptake. In a recent clinical trial, a LICA oligo targeting apolipoprotein(a) transcripts in the liver was several-fold more potent than the unconjugated parent ASO, enabling a \>10-fold lower dose and improving tolerability^[@CR28]^. To determine whether this approach could be adapted for use in muscle tissue, we tested a LICA modified ASO, 992948, that contains a C16 fatty acid conjugate and targets the identical sequence in *ACTA1*-CUG^exp^ transcripts in the HSA^LR^ mouse model as the unconjugated parent ASO 445236 described in the previous study^[@CR8]^. Using serial in vivo spectroscopy measurements, the LICA oligo (25 mg/kg twice weekly for 4 weeks; eight total doses = 200 mg/kg total ASO) demonstrated an increase in DsRed/GFP ratios in gastrocnemius and lumbar paraspinal muscles as early as Day 7, after only two doses (Fig. [6a](#Fig6){ref-type="fig"}; Supplementary Fig. [7](#MOESM1){ref-type="media"}). Activity of the LICA oligo was dose-dependent in both muscles, although was evident earlier in lumbar paraspinal muscles than in gastrocnemius. In the gastrocnemius, peak DsRed/GFP values were obtained by Day 21 (six doses), but continued to increase in lumbar paraspinal muscles until maximum effect was observed at Day 35, or 10 days after the eighth and final dose. Subcutaneous injection of saline had no effect. Due to an absence of drug activity by Day 28 in gastrocnemius muscles of mice treated with the 2.5 mg/kg and 8.3 mg/kg doses, we continued treatment of these mice by administering an additional 4 doses over the next 2 weeks, for a total of 12 doses (8.3 mg/kg×12 doses = 99.6 mg/kg total; 2.5 mg/kg×12 doses = 30 mg/kg total). By Day 42, DsRed/GFP values in gastrocnemius muscles of mice treated with the 8.3 mg/kg dose were increased as compared to saline-treated mice. Droplet digital PCR (ddPCR) quantitation of *ACTA1*-CUG^exp^ transcript levels in muscles collected at Day 42 demonstrated a dose-dependent reduction of the ASO target (Fig. [6b](#Fig6){ref-type="fig"}). Inclusion of transgene (Tg) and endogenous mouse *Atp2a1* exon 22 by RT-PCR also showed dose-dependent correction (Fig. [6c, d](#Fig6){ref-type="fig"}), similar to the correction of DsRed/GFP values evident by spectroscopy. Splicing of several additional transcripts also showed dose-dependent correction in treated mice (Supplementary Fig. [8](#MOESM1){ref-type="media"}).Fig. 6In vivo activity of a novel ligand-conjugated antisense (LICA) oligonucleotide. We treated TR;*HSA*^LR^ bi-transgenic mice with saline or a novel LICA oligonucleotide, ASO 992948, targeting *ACTA1*-CUG^exp^ transcripts and measured DsRed and GFP fluorescence by serial in vivo spectroscopy. LICA-oligo doses were 2.5, 8.3, or 25 mg/kg twice weekly for 4 weeks (8 total doses) by subcutaneous injection (*N* = 2 each group). However, due to low DsRed/GFP fluorescence measurements in gastrocnemius muscles, mice receiving the 2.5 mg/kg and 8.3 mg/kg doses received four additional injections over the next 2 weeks, for a total of 12 doses. **a** Serial DsRed/GFP measurements in gastrocnemius and lumbar paraspinal muscles through Day 42. 
The final dose in the saline (black circles) and 25 mg/kg groups (blue diamonds) was Day 24, while the final dose in the 2.5 (yellow triangles) and 8.3 mg/kg (gray triangles) groups was Day 37. Error bars indicate mean ± s.e.m. **b** Quantitation of *ACTA1*-CUG^exp^ transcript levels (copies per microliter cDNA) by droplet digital PCR in muscles collected at Day 42. Error bars indicate mean ± s.e.m. \*\*\*\**P* \< 0.0001; two-way ANOVA. **c** Splicing analysis by RT-PCR in gastrocnemius and lumbar paraspinal muscles collected at Day 42. L DNA ladder, bp base pairs. **d** Quantitation of splicing results in **c**. Error bars indicate mean ± s.e.m. \*\*\*\**P* \< 0.0001, \*\*\**P* \< 0.001; two-way ANOVA
To compare drug target engagement in muscle tissue of an ASO with and without a LICA-modification, we treated TR;HSA^LR^ mice with 12.5 mg/kg twice weekly for 4 weeks of either LICA oligonucleotide 992948 or the unconjugated parent ASO 445236 vs. saline, and monitored fluorescence by serial in vivo spectroscopy. In mice treated with the LICA oligo, an increase of DsRed/GFP measurements was evident in gastrocnemius muscles by Day 14 and in lumbar paraspinal muscles by Day 7, while in mice treated with the unconjugated ASO, the increase of DsRed/GFP values was delayed to Day 28 in gastrocnemius and Day 14 in lumbar paraspinal muscles (Fig. [7a, b](#Fig7){ref-type="fig"}). Using ddPCR, knockdown of *ACTA1* transcript levels measured at Day 28 was significantly greater in gastrocnemius, lumbar paraspinal, and quadriceps muscles of mice treated with the LICA oligo than in mice treated with the unconjugated ASO (Fig. [7c](#Fig7){ref-type="fig"}). Similarly, RT-PCR analysis demonstrated that exon 22 inclusion of the transgene *ATP2A1* and endogenous mouse *Atp2a1* was significantly higher in gastrocnemius and lumbar paraspinal muscles examined of mice treated with the LICA oligo as compared to the unconjugated ASO (Fig. [7d, e](#Fig7){ref-type="fig"}). Differential activity between the LICA and unconjugated ASO was dose-dependent and maintained through Day 42 of treatment (Supplementary Fig. [9](#MOESM1){ref-type="media"}). In the lumbar paraspinal muscles, the 12.5 mg/kg dose of the LICA oligo demonstrated pharmacological activity that was equal to or better than the 25 mg/kg dose of the unconjugated ASO throughout the course of treatment, suggesting it is at least two-fold more potent (Supplementary Fig. [9](#MOESM1){ref-type="media"}). Fluorescence spectroscopy measurements of DsRed/GFP ratios in gastrocnemius and lumbar paraspinal muscles showed good correlation with RT-PCR analysis exon 22 inclusion of both the transgene *ATP2A1* and endogenous mouse *Atp2a1* in these muscles (Fig. [8a, b](#Fig8){ref-type="fig"}).Fig. 7In vivo comparison of LICA and the unconjugated parent ASO. We treated TR;*HSA*^LR^ mice with LICA oligo 992948 (LICA) or ASO 445236 (ASO), which is the unconjugated parent of LICA oligo 992948 that targets the identical *ACTA1* sequence^[@CR8]^. ASOs were administered by subcutaneous injection of 12.5 mg/kg twice weekly for 4 weeks (*N* = 3 each) (eight total doses). Treatment with saline (*N* = 1) served as a control. **a** DsRed/GFP quantitative fluorescence in gastrocnemius (left) and lumbar paraspinal muscles (right) by serial in vivo spectroscopy in mice treated with saline (black circles), ASO (yellow triangles), or LICA (blue diamonds). Error bars indicate mean ± s.e.m. **b** Due to the higher baseline DsRed/GFP values in the gastrocnemius of mice randomized to receive the unconjugated ASO, we normalized each measurement of quantitative fluorescence in the gastrocnemius (left) and lumbar paraspinal muscles (right) to the Day 0 value, so that the Day 0 value for each muscle = 1. Error bars indicate mean ± s.e.m. **c** ddPCR quantitation of *ACTA1* transcripts (copies per microliter cDNA) in muscles collected at imaging Day 28. \*\*\*\**P* \< 0.0001; \*\**P* \< 0.01; two-way ANOVA. **d** RT-PCR analysis of exon 22 alternative splicing of the transgene (Tg) and mouse *Atp2a1* in gastrocnemius, lumbar paraspinal, quadriceps, and tibialis anterior (TA) muscles collected on imaging Day 28. FVB WT gastrocnemius served as a control. L DNA ladder, bp base pairs. 
**e** Quantitation of splicing results in **d**. Error bars indicate mean ± s.e.m. \*\*\*\**P* \< 0.0001; \*\**P* \< 0.01; \**P* \< 0.05; two-way ANOVA
Fig. 8 Correlation of DsRed/GFP measurements with splicing in muscle tissue. We used fluorescence spectroscopy to determine the DsRed/GFP ratio in gastrocnemius and lumbar paraspinal muscles of TR;HSA^LR^ bi-transgenic mice treated by subcutaneous injection of saline, unconjugated ASO, or LICA oligo (*N* = 28) (see Figs. [6](#Fig6){ref-type="fig"} and [7](#Fig7){ref-type="fig"}, and Supplementary Figs. 7[--9](#MOESM1){ref-type="media"}). After the final spectroscopy measurements at either Day 28 or 42, we dissected the gastrocnemius and lumbar paraspinal muscle tissues. To validate the spectroscopy measurements as estimations of alternative splicing in the entire muscle, we correlated final DsRed/GFP measurements in gastrocnemius and lumbar paraspinal muscles with RT-PCR determination of exon 22 inclusion of **a** the *ATP2A1* transgene (Tg) and **b** endogenous mouse *Atp2a1* in these muscles. The correlation coefficient *r* and *P* value for each are shown
ASO drug concentrations in muscle and liver correlated with total dose administered, were significantly higher in paraspinal muscles than in gastrocnemius muscles, and were higher in liver than in either gastrocnemius or paraspinal muscles (Supplementary Fig. [9](#MOESM1){ref-type="media"}). To determine whether ASO drug activity and concentration in muscles may be related to vascular supply, we examined capillary density and found that paraspinal muscles have a higher number of capillaries per muscle fiber than gastrocnemius muscles (Fig. [9a, b](#Fig9){ref-type="fig"}). Unexpectedly, we also found that capillary density was greater in saline-treated TR;HSA^LR^ mice than in WT controls, and that ASO treatment for 42 days was associated with a reduction of capillary density toward WT values (Fig. [9a, b](#Fig9){ref-type="fig"}). Histologic analysis of ASO-treated muscles showed a decrease in the percentage of muscle fibers containing internal nuclei, an improvement of muscle fiber diameter measurements toward the WT pattern, and no evidence of toxicity (Fig. [9c, d](#Fig9){ref-type="fig"}).
Fig. 9 LICA oligonucleotide treatment effects. **a** We examined capillary density in TR;HSA^LR^ mice treated with saline (*N* = 5) or LICA oligonucleotide 992948 (25 mg/kg twice weekly for 4 weeks; *N* = 4) by immunolabeling using an anti-CD31 antibody. Untreated FVB wild-type (WT) mice (*N* = 6) served as controls. Shown are representative images of gastrocnemius muscle cryosections. Muscle fibers are highlighted with Alexa 647-wheat germ agglutinin (WGA; pseudocolored yellow) and nuclei with DAPI (blue). Merge DsRed (red) + GFP (green) + WGA + DAPI. Fluorescence images are the extended focus of deconvolved Z-series. Intensity scale = 0--15,000 grayscale units (CD31). Bars = 50 μm. **b** Quantitation of capillary density, as measured by the number of capillaries per 100 gastrocnemius or paraspinal muscle fibers, in TR;HSA^LR^ mice treated with saline or LICA oligo, and untreated WT controls. Error bars indicate mean ± s.e.m. \*\*\*\**P* \< 0.0001, overall difference between saline-treated, LICA-treated, and WT groups; \*\*\**P* \< 0.001 (gastrocnemius, saline-treated vs. WT); \*\**P* \< 0.01 (paraspinal, saline-treated vs. WT), and *P* = 0.0012 (overall difference between gastrocnemius and paraspinal); \**P* \< 0.05 (gastrocnemius, saline-treated vs. LICA-treated); two-way ANOVA. **c** Quantitation of internal nuclei frequency in the saline-treated, LICA-treated, and WT groups. Error bars indicate mean ± s.e.m. \*\**P* = 0.0015 saline vs. treated group; two-way ANOVA. **d** Minimum Feret's diameter, defined as the minimum distance between parallel tangents^[@CR62]^, of gastrocnemius muscle fibers of TR mice treated with saline (black circles) or the LICA oligo (blue triangles), and untreated wild-type (WT) controls (yellow diamonds). Error bars indicate mean ± s.e.m
Discussion {#Sec11}
==========
We describe a novel TR;HSA^LR^ bi-transgenic mouse model that expresses an alternative splicing reporter of DM1 disease activity and enables a convenient non-invasive estimation of in vivo pharmacodynamic properties. This model will be useful for rapid identification of candidate therapeutics for reducing pathogenicity of CUG^exp^ transcripts in DM1, including new ASO chemistries and conjugates, small molecules, short interfering RNAs (siRNAs), gene therapy vectors for the production of antisense RNAs, protein-based therapies that rescue aberrant splicing, and gene editing approaches that reduce the genomic CTG repeat length or inhibit transcription of CUG^exp^ repeats^[@CR29]--[@CR39]^. This model also is well suited to meet an equally important goal of drug development tools: rejection of failed drugs and therapeutic strategies at an early stage, thereby saving valuable resources before proceeding to costly clinical trials. We propose that the TR;HSA^LR^ bi-transgenic model replace the HSA^LR^ single transgenic model for in vivo screening of new candidate therapeutics because it has all the advantages of that model, while also reducing the time needed to identify candidate therapeutics with the most promise to proceed to clinical trials, and meeting an Institutional Animal Care and Use Committee (IACUC) ethical goal of limiting the number of mice needed.
This binary splicing switch determines the expression of either DsRed or GFP, and provides a highly sensitive and quantitative measure of splicing outcomes by the ratio of these fluorescent proteins, independent of overall reporter gene expression level. Regarding precision, the fluorescence readout is a ratio of two signals (DsRed vs. GFP) both generated from the same construct. This design eliminates an important source of biological variation (differences in level of reporter gene expression) and reduces measurement error (a ratio of two fluorescence signals is less dependent on background and detection than absolute levels of a single signal). Regarding validity, the fluorescence readout directly reports developmental alternative splicing, the normal outcome in muscle repair^[@CR16]^ and the fundamental biochemical derangement in DM1, which is clearly linked to production of symptoms^[@CR5],[@CR6],[@CR40],[@CR41]^. Regarding throughput, the fluorescence measurements enable a short and simple assay. The restriction of DsRed protein expression to the cytoplasm and GFP expression to myonuclei suggests the possibility that the *N*-terminal peptide sequence upstream of GFP may act as a nuclear localizing signal.
In this study, we also demonstrate that the TR-*ATP2A1* construct enables non-invasive tracking of muscle regeneration in live mice, suggesting it also can serve as a more general biomarker of muscle regeneration, and potentially for muscle repair in other muscular dystrophies beyond DM1. The delayed upregulation of the DsRed/GFP ratio following muscle injury in TR mice may be related to the transition from immature DsRed monomers that fluoresce mostly green to mature tetramers that fluoresce red, a process that takes several days^[@CR42]^. The similarity of splicing changes in DM1 and DM2^[@CR5]^ suggests that the TR model could be used as a drug development tool for DM2 as soon as a robust mouse model of DM2 becomes available.
Results obtained using in vivo fluorescence microscopy and spectroscopy showed strong correlation, suggesting that either method can be used for non-invasive detection of therapeutic response. A major advantage of our fluorescence spectroscopy system over conventional fluorescence microscopy is the speed of analysis: the quantitative DsRed/GFP ratios, with all corrections for background, autofluorescence, and SVD fitting, are calculated automatically at the end of each spectroscopy scan. By contrast, quantitation of fluorescence in microscopy images is calculated by hand, which involves the drawing of individual ROIs, subtraction of background from each channel, correction for image exposure time, and calculation of DsRed/GFP ratios, which typically takes several minutes per mouse. As a result, throughput is significantly higher with spectroscopy.
Advantages of this in vivo spectroscopy system over similar earlier instruments^[@CR24]--[@CR26]^ include greater sensitivity due to dedicated GFP and DsRed lasers, and faster data acquisition and analysis due to automated control of filter and fiber switching. Fluorescence spectroscopy also offers a more sensitive ratiometric analysis than imaging approaches, due in part to more accurate separation of overlapping emission spectra (such as GFP and DsRed) in the presence of an autofluorescence background. An additional advantage of spectroscopy includes improved signal-to-noise ratio for detection of low DsRed and GFP levels at very early and late stages of the treatment. Consequently, we expect this spectroscopy system may enable more accurate tracking of disease progression or regression by detection of relatively low levels of fluorophores that may be beyond detection by fluorescence imaging. The greater sensitivity of spectroscopy also may broaden the potential use of TR constructs as biomarkers for other applications. For example, by modifying the minigene, the TR construct may be useful as a drug development tool for other disorders that are candidates for RNA modulation therapies beyond DM1, including Duchenne muscular dystrophy, Hutchinson-Gilford progeria syndrome, and amyotrophic lateral sclerosis^[@CR43]--[@CR45]^.
In this study, we also show that a gapmer C16 fatty acid LICA oligonucleotide achieves efficient dose-dependent target knockdown in skeletal muscle and that concentrations of the LICA oligo in muscle tissue were approximately two-fold higher than the unconjugated parent ASO after a 4-week course of treatment. The more rapid increases in DsRed/GFP values and the more robust splicing correction in mice treated with the LICA oligo suggest that the C16 ligand enables greater target engagement in muscle tissue than the unconjugated ASO. Non-invasive measurements also revealed a more rapid ASO target engagement in paraspinal muscles as compared to gastrocnemius muscles, which may be related to the higher ASO concentrations and the greater capillary density that we observed in paraspinal vs. gastrocnemius muscles. The greater capillary density in TR;HSA^LR^ than WT mice may be related to dysregulation of genes involved in angiogenesis and blood vessel maturation^[@CR46]^, while the reduction of capillary density in ASO-treated mice may enable a novel measure of therapeutic response in muscle tissue. Our data support further development of LICA technology for treatment of DM1 and other ASO applications targeting skeletal muscle.
Methods {#Sec12}
=======
Alternative splicing therapy reporter (TR) construct {#Sec13}
----------------------------------------------------
We modified the bi-chromatic RG6 construct^[@CR11]^ (a gift of Dr. T. Cooper) by replacing the chicken *TNNT2* exon 5 minigene with a human *ATP2A1* exon 22 minigene into the *Xba* I/*Age* I site to create the fluorescence bi-chromatic FBR-*ATP2A1* reporter construct. To bypass a stop codon and enable shift of the reading frame^[@CR13]^, exon 22 of the *ATP2A1* minigene was modified by site-directed mutagenesis to induce a single base pair deletion so that it contains 41 base pairs instead of 42 (a gift of Dr. C. Thornton). In designing the construct for in vivo use, we were concerned with several previous reports of dose-dependent in vivo toxicity of GFP expression in skeletal and cardiac muscle, whether by transgenic expression or viral vector-mediated delivery^[@CR47]--[@CR53]^, as well as unpublished observations (T.M.W.). To reduce the possibility that therapeutic ASOs, or other treatments inhibiting CUG^exp^ RNA pathogenicity, could exacerbate the myopathy by upregulating GFP expression, the reading frame was designed so that, in contrast to the chicken *TNNT2* exon 5 minigene in the RG6 construct, inclusion of the *ATP2A1* minigene exon 22, which is high in normal mature muscle, would result in the DsRed reading frame instead of GFP (Fig. [1a](#Fig1){ref-type="fig"}).
Next, we removed the CMV promoter, FLAG and NLS regions using a *Bgl* II/*Xba* I double digest and replaced it with a custom linker containing *Cla* I, *Not* I, Kozak, ATG, and FLAG sequences. To restrict expression of the construct to skeletal muscle, we removed the human skeletal actin (HSA) promoter/enhancer/vp1 splice acceptor region^[@CR14]^ from the pBSx-HSAvpA plasmid (a gift of Dr. J. Chamberlain via Dr. A. Burghes) and cloned it into the *Cla* I/*Not* I site to create the new HSA-FLAG-ATP2A1ex22-DsRed-GFP TR reporter construct. The sequence of the TR construct is shown in Supplementary Note [1](#MOESM1){ref-type="media"}.
Experimental mice {#Sec14}
-----------------
Institutional Animal Care and Use Committees (IACUCs) at Massachusetts General Hospital and University of Rochester approved all studies in mice described here. The Human Skeletal Actin - Long Repeat (*HSA*^LR^) mouse model of DM1 expresses a human *ACTA1* transgene that contains \~220 CTG repeats in the 3′ untranslated region^[@CR3]^. TR reporter transgenic mice were bred with *HSA*^LR^ transgenic mice, both on the FVB background, to create TR;*HSA*^LR^ bi-transgenic mice. FVB wild-type mice served as controls.
Intramuscular injection of plasmid DNA {#Sec15}
--------------------------------------
At each stage of cloning, we tested constructs for in vivo expression by intramuscular injection and electroporation^[@CR6]^ of plasmid DNA (20 or 25 μg) in the tibialis anterior (TA) muscle of *HSA*^LR^ and wild-type mice, and determined gene expression by non-invasive in vivo fluorescence microscopy, splicing of TR-*ATP2A1* and endogenous *Atp2a1* mouse transcripts by RT-PCR, and examination of muscle cryosections by fluorescence microscopy.
Generation of novel TR transgenic mice {#Sec16}
--------------------------------------
We digested the TR plasmid with *Bgl* II/*Sph* I restriction enzymes, gel extracted, and purified the 5.316 kb linear DNA fragment containing the HSA-TR construct. A commercial vendor (Cyagen Biosciences) microinjected HSA-TR linear DNA and generated three transgenic founders on an FVB background, two of which produced offspring. One of these lines expressed the transgene at a level sufficient to visualize DsRED and GFP fluorescence by multiple non-invasive means.
Genotyping offspring {#Sec17}
--------------------
We identified mice by toe clipping^[@CR54],[@CR55]^ on postnatal day 7 and used each toe biopsy to isolate DNA using a tissue/blood DNA isolation kit (Qiagen). To detect the TR transgene, we used PCR analysis of DNA with two sets of genotyping primers, one specific for DsRed, the other for GFP, as follows:
DsRED left primer: 5′-GGCCACAACACCGTGAAGC-3′
DsRED right primer: 5′-CGCCGTCCTCGAAGTTCATC-3′
GFP left primer: 5′-TGCAGTGCTTCAGCCGCTAC-3′
GFP right primer: 5′-CTGCCGTCCTCGATGTTGTG-3′
Breeding of TR hemizygotes (TR^+/−^) with WT mice (TR^−/−^) produced TR^+/−^ offspring at the expected 50% frequency. The absence of detectable in vivo fluorescence in TR^+/−^ hemizygotes required generation of TR homozygotes (TR^+/+^), which were obtained at the expected 25% frequency by breeding TR^+/−^ hemizygotes. To generate TR;*HSA*^LR^ bi-transgenic mice, we crossed TR^ + /−^ mice with homozygous HSA^LR^ mice (HSA^LR+/+^), generating TR^+/−^;HSA^LR+/−^ double hemizygotes, then crossed TR^+/−^ mice with TR^+/−^;HSA^LR+/−^ double hemizygotes, generating TR^+/+^;HSA^LR+/−^, and finally crossed TR^+/+^;HSA^LR+/−^ with TR^+/+^;HSA^LR+/−^ to generate TR^+/+^;HSA^LR+/+^ double homozygous bi-transgenic mice that were used for all imaging experiments. We determined zygosity of the TR transgene using the ratio of DsRed/*Acta1* transcripts, and zygosity of the *ACTA1* transgene using the ratio of *ACTA1*/*Acta1* transcripts, as measured by RT-PCR and quantitative band densitometry (Supplementary Fig. [3](#MOESM1){ref-type="media"}; Supplementary Table [1](#MOESM1){ref-type="media"}).
In vivo fluorescence imaging {#Sec18}
----------------------------
We removed hair (Nair) and performed all in vivo imaging under general anesthesia consisting of either inhalation isoflurane 1--3% to effect or a cocktail of ketamine 100 mg/kg, xylazine 10 mg/kg, and acepromazine 3 mg/kg by intraperitoneal injection^[@CR6].^ We imaged mice using an AxioZoom fluorescence microscope (Zeiss), ×0.5 and ×1.0 objectives, separate filters for GFP (excitation/emission 470/525; Zeiss filter set 38 HE) and DsRed (excitation/emission 550/605; Zeiss filter set 43 HE), an ORCA R2 CCD camera (Hamamatsu), and Volocity image acquisition software (Perkin Elmer). To quantitate fluorescence, we measured DsRed and GFP fluorescence in regions of interest (ROIs) using the Volocity quantitation software module, subtracted background fluorescence, corrected for exposure time, and calculated DsRed/GFP fluorescence ratios. Alternatively, mice were imaged using the IVIS Spectrum (Perkin Elmer) with automatic exposure sequences for excitation/emission 465/520 nm (GFP) and 535/600 nm (DsRed), and quantitated fluorescence in ROIs using Living Image software (Perkin Elmer).
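For illustration, the per-ROI ratio calculation described above amounts to only a few lines of code; the Python sketch below uses made-up intensity values, background levels, and exposure times, not data or software from this study.

def dsred_gfp_ratio(dsred_mean, gfp_mean, dsred_bg, gfp_bg, dsred_exp_s, gfp_exp_s):
    # subtract channel-specific background, normalize to exposure time, then form the ratio
    dsred = (dsred_mean - dsred_bg) / dsred_exp_s
    gfp = (gfp_mean - gfp_bg) / gfp_exp_s
    return dsred / gfp

# one hypothetical gastrocnemius ROI
print(dsred_gfp_ratio(1200.0, 300.0, 50.0, 40.0, dsred_exp_s=0.5, gfp_exp_s=0.2))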
Acute muscle injury {#Sec19}
-------------------
Using a standard muscle injury paradigm^[@CR17]^, we injected 30 μl of 1.2% barium chloride (BaCl~2~) into gastrocnemius muscles of TR transgenic mice and determined DsRED/GFP quantitative fluorescence by serial in vivo imaging. FVB wild-type mice served as non-fluorescent controls. Contralateral gastrocnemius or TA muscles either were untreated or injected with saline.
Antisense oligonucleotides {#Sec20}
--------------------------
We purchased 25-mer CAG repeat morpholino^[@CR7]^ (CAG25; 5′-AGCAGCAGCAGCAGCAGCAGCAGCA-3′) antisense oligonucleotides (ASO) containing phosphorodiamidate internucleotide linkages and an octa-guanidine dendrimer conjugated to the 3′ end of the oligo^[@CR56]^ (Gene Tools, LLC). ASO 445236 and ASO 992948 are 20-mer RNase H-active gapmer ASOs, wherein the central gap segment consists of ten 2′-deoxyribonucleotides that are flanked on the 5′ and 3′ wings by five 2′-*O*-methoxyethyl-modified nucleotides. The sequence of ASO 992948 is identical to ASO 445236, but includes the addition of a C16 lipophilic group conjugated to the 5′ terminus via a hexylamino (HA) linker that has a phosphodiester (P~o~) at the 3′ end. The internucleotide linkages are phosphorothioate, and all cytosine residues are 5′-methylcytosines.
Gapmer ASO sequences {#Sec21}
--------------------
445236: 5′-CCATTTTCTTCCACAGGGCT-3′ (published previously^[@CR8]^)
992948: 5′-C16-HA-P~o~-CCATTTTCTTCCACAGGGCT-3′
We administered ASOs locally into the gastrocnemius and/or lumbar paraspinal muscles of TR;HSA^LR^ bi-transgenic mice by intramuscular injection under general anesthesia without electroporation, or systemically by subcutaneous injection under light restraint^[@CR8]^, and followed treatment effects by serial in vivo fluorescence microscopy and/or spectroscopy under general anesthesia.
Quantification of ASO tissue concentration {#Sec22}
------------------------------------------
ASO was extracted from muscle tissue by phenol-chloroform followed by a solid-phase extraction, and quantified by high performance liquid chromatography coupled with tandem mass spectrometry detection^[@CR57]^.
RNA isolation {#Sec23}
-------------
We homogenized muscles in Trizol (Life Technologies), removed DNA and protein using bromochloropropane, precipitated RNA with isopropanol, washed pellets in 75% ethanol, and dissolved pellets in molecular grade water according to manufacturer recommendations. To determine RNA concentration and quality, we measured A260 and A280 values (Nanodrop) and examined 18 S and 28 S ribosomal RNA bands by agarose gel electrophoresis.
RT-PCR analysis of alternative splicing {#Sec24}
---------------------------------------
We made cDNA using Superscript II reverse transcriptase (Life Technologies) and oligo dT, and performed PCR using Amplitaq Gold (Life Technologies) and gene-specific primers. We separated PCR products using agarose gels, labeled DNA with 1x SYBR I green nucleic acid stain (Life Technologies), and quantitated band intensities using a transilluminator, CCD camera, XcitaBlue^TM^ conversion screen, and Image Lab image acquisition and analysis software (Bio-Rad). We designed primers for TR-*ATP2A1*, *Mbnl2*, and *Vps39* using Primer3 software^[@CR58],[@CR59]^. Primers for *Clcn1*, *m-Titin*, *Clasp1*, *Map3k4*, *Mbnl1*, *Ncor2*, *Nfix*, and endogenous mouse *Atp2a1* were published previously^[@CR2],[@CR6],[@CR60]^. A complete list of primer sequences is shown in Supplementary Table [4](#MOESM1){ref-type="media"}. Uncropped gels are shown in Supplementary Figs. [10--13.](#MOESM1){ref-type="media"}
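As a worked illustration of the band quantitation, percent exon inclusion is conventionally computed as the intensity of the exon-included product divided by the summed intensities of the included and skipped products; the Python sketch below uses invented intensities rather than values from this study.

def percent_exon_inclusion(included_band, skipped_band):
    # percent inclusion = included / (included + skipped) * 100
    return 100.0 * included_band / (included_band + skipped_band)

print(percent_exon_inclusion(850.0, 150.0))  # 85.0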
Microscopy of muscle tissue sections {#Sec25}
------------------------------------
Muscles were frozen in isopentane cooled in liquid nitrogen. We stained 8 μm cryosections with hematoxylin and eosin. To visualize native DsRED and GFP proteins in muscle tissue, we fixed 8 μm cryosections in 3% paraformaldehyde, washed in 1× PBS, counterstained nuclei with DAPI, and mounted with anti-fade medium (Prolong Gold, Invitrogen product \# P36930). To highlight muscle fibers, we counterstained with Alexa 647 wheat germ agglutinin (WGA; 10 μg/ml; Invitrogen product \# W32466) or used an anti-laminin antibody (0.5 μg/ml; Abcam product \# 14055) and a goat anti-chicken Alexa 647 secondary antibody (1 μg/ml; Invitrogen product \# A21449). To identify capillaries, we used an anti-CD31 rabbit monoclonal primary antibody (5.5 μg/ml; Abcam product \# EPR17260--263), a goat anti-rabbit Alexa 546 secondary antibody (1 μg/ml; Invitrogen product \# A11018), counterstained nuclei with DAPI and Alexa 647 WGA (20 μg/ml; Invitrogen product \# W32466), and mounted with anti-fade medium (Prolong Gold). To capture single images or z-series stacks, we used an AxioImager microscope (Zeiss), ×10, ×20, ×40, or ×63 objectives, filters for DAPI (excitation/emission 365/445; Zeiss filter set 49), GFP (excitation/emission 470/525; Zeiss filter set 38), Cy3 (excitation/emission 550/605; Zeiss filter set 43 HE), and Cy5 (excitation/emission640/690; Zeiss filter set 50), a Flash 4.0 LT sCMOS camera (Hamamatsu), a MicroPublisher 3.3 RTV color CCD camera (Q-Imaging), and Volocity image acquisition software. To quantitate fluorescence, we used Volocity quantitation and restoration software modules (Perkin Elmer).
Immunoblotting {#Sec26}
--------------
We isolated protein from 15--20 cryosections of muscle, 30 μm thick, added RIPA detergent buffer supplemented with 1x protease inhibitor cocktail (Sigma), vortexed, centrifuged, removed solubilized protein to a fresh microfuge tube, and determined protein concentration using the Bradford assay. To separate proteins, we used pre-cast any kD polyacrylamide gels (Biorad), transferred to 0.45 μm nitrocellulose membranes, and blocked with 5% milk. Primary antibodies were anti-GFP rabbit monoclonal (1:1000; Cell Signaling product \# 2956) and anti-GAPDH mouse monoclonal (0.1 μg/ml; AbD Serotec product \# MCA4739). Secondary antibodies were goat anti-rabbit IRDye 800CW (0.05 μg/ml; Licor product \# 926--32211) and goat anti-mouse IRDye 680RD (0.05 μg/ml; Licor product \# 926--68070). To determine protein expression, we quantitated band intensities using a laser scanner (Licor Odyssey Cxl) and ImageStudio software (Licor). Uncropped blots are shown in Supplementary Fig. [10](#MOESM1){ref-type="media"}.
In vivo fluorescence spectroscopy {#Sec27}
---------------------------------
In order to measure the GFP and DsRed fluorescence, we constructed a dedicated spectroscopy system. The system consists of sources for fluorescence excitation, a custom optical probe, and a portable spectrometer with large dynamic range and low noise for detection of emission spectra. For excitation, we used diode lasers at 488 nm to excite GFP (OBIS 488 nm LX 50 mW, Coherent Inc., Santa Clara, CA), and at 532 nm to excite DsRed (LMX-532S-50-COL-PP, Oxxius S.A., Lannion, France). The output from each laser is filtered by appropriate laser line filters (LL01--488--12.5 and LL01--532--12.5, Semrock Inc., Rochester, NY) and focused into a fiber switch (F-163--10IR, Piezosystem Jena, Inc., Hopedale, MA) via a fiber coupler (Oz Optics, Ottawa, Ontario). The fiber switch allows for the excitation wavelength to be toggled in order to match the desired fluorophore. The output of the switch is connected to an optical fiber for excitation (M28L01, Thorlabs, Inc., Newton, NJ), with a separate, identical fiber used for collection of emission. The excitation fiber is secured vertically in a custom-machined probe, while the emission fiber is secured to this mount at a 45° angle at a distance of 9.6 mm from the excitation fiber. The emission fiber is connected to a free space laser coupler (Oz Optics) that directs the collected light through a motor-driven filter wheel (FW102C, Thorlabs Inc., Newton, NJ) equipped with long-pass filters tuned to GFP (LP02--488RU-25, Semrock) and DsRed (BLP01--532R-25, Semrock). After filtering, the collected light is focused again into a fiber with a fiber coupler (Oz Optics) and detected by a compact, TE-cooled spectrometer (QE65PRO-FL, Ocean Optics). In addition to the excitation lasers, a broadband white light source (HL-2000, Ocean Optics) is connected to the fiber switch. This allows for performance of white light spectroscopy, which is used to correct detected fluorescence spectra for the effects of varying tissue optical properties^[@CR23]^. This correction is vital, as it ensures that changes in detected fluorescence are due only to changes in fluorophores, rather than the effects of changes in the intervening tissue. A complete list of components is shown in Supplementary Table [3](#MOESM1){ref-type="media"}.
The entire spectroscopy system is enclosed and controlled via a LabVIEW software interface (National Instruments, Austin, TX). At each measurement point, the excitation source is switched through the two laser sources and the white light source, while the filter wheel is simultaneously changed to the appropriate position. After detection, each spectrum is corrected for dark background and wavelength-dependent system response. Fluorescence spectra then are divided by the white light spectrum in order to correct for the effects of optical properties^[@CR24],[@CR61]^. The corrected fluorescence spectra then are fit using singular value decomposition (SVD) in order to separate the individual fluorophores, as well as remove the effects of any native fluorophores that are present in the tissue, similar to prior use for separation of multiple fluorophores from detected fluorescence spectra^[@CR24]--[@CR26]^. Prior to scanning, the fiber optic excitation and emission probe is placed \~1.5 cm above the skin overlying the muscle to be analyzed (Fig. [4b](#Fig4){ref-type="fig"}).
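A minimal sketch of the unmixing step is shown below, assuming reference emission spectra for GFP, DsRed, and tissue autofluorescence sampled on the same wavelength grid as the corrected in vivo spectrum; the spectra here are synthetic Gaussians and the function is illustrative, not the instrument's actual software.

import numpy as np

def unmix(corrected_spectrum, basis):
    # least-squares fit of the spectrum as a linear combination of basis spectra;
    # numpy solves this via singular value decomposition internally
    amplitudes, *_ = np.linalg.lstsq(basis, corrected_spectrum, rcond=None)
    return amplitudes

wavelengths = np.linspace(500, 650, 151)
gfp = np.exp(-0.5 * ((wavelengths - 510) / 15) ** 2)
dsred = np.exp(-0.5 * ((wavelengths - 585) / 20) ** 2)
autofl = np.full_like(wavelengths, 0.1)
basis = np.column_stack([gfp, dsred, autofl])

measured = 2.0 * gfp + 5.0 * dsred + 0.3 * autofl  # stand-in for a corrected in vivo spectrum
a_gfp, a_dsred, a_autofl = unmix(measured, basis)
print("DsRed/GFP ratio:", a_dsred / a_gfp)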
Sample size {#Sec28}
-----------
The response to ASO treatment in the TR;HSA^LR^ mice as measured by in vivo imaging or spectroscopy was unknown. Therefore, we were unable to choose a sample size ahead of time to ensure adequate power to measure pharmacodynamic activity. Instead, we estimated sample sizes based on RT-PCR splicing patterns of TR-ATP2A1 exon 22 and in vivo spectroscopy measurements of DsRed/GFP values in gastrocnemius muscles of TR single transgenic and TR;HSA^LR^ bi-transgenic mice. Using RT-PCR, the difference between means of these two groups was 31 and the standard deviations were 1.6 and 1.7, respectively, meaning that a sample size of 1 from each group would provide 98% power to detect a difference with *P* \< 0.05. Using spectroscopy measurements, the difference between means of these two groups was 4.2 and the standard deviations 0.02 and 1.0, respectively, indicating that a sample size of 2 from each group (*N* = 4 total) would provide 98% power to detect a difference with *P* \< 0.05. Mice ranged from 4 weeks to 4 months of age and were chosen randomly by genotype and stratified for sex to allow an approximately equal number of females and males. Although one or two examiners were blinded to treatment assignments for the subcutaneous ASO studies, the imaging and/or spectroscopy data obtained during the several-week course of each experiment were so robust that they essentially distinguished the saline and low-dose treatment groups from the higher-dose treatment groups prior to termination of the experiments.
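Estimates of this kind can be checked by simulation; the Python sketch below uses the spectroscopy figures quoted above (difference in means 4.2, standard deviations 0.02 and 1.0, two mice per group) with an unpaired t-test, although the exact power value obtained will depend on the test and assumptions used.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulated_power(mean_diff, sd1, sd2, n_per_group, alpha=0.05, n_sim=20000):
    # fraction of simulated experiments in which an unpaired t-test reaches P < alpha
    hits = 0
    for _ in range(n_sim):
        a = rng.normal(0.0, sd1, n_per_group)
        b = rng.normal(mean_diff, sd2, n_per_group)
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / n_sim

print(simulated_power(4.2, 0.02, 1.0, n_per_group=2))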
Statistical analysis {#Sec29}
--------------------
For two-group and multi-group comparisons, we used unpaired two-tailed *t*-test or analysis of variance (ANOVA), respectively (Prism software, GraphPad, Inc.). To determine associations of DsRed/GFP values measured by microscopy with those measured by spectroscopy, we used Pearson correlation coefficients. A *P* value \<0.05 was considered significant.
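For reference, the correlation step corresponds to a call such as the following; the arrays are placeholders, not the study's measurements.

import numpy as np
from scipy import stats

dsred_gfp = np.array([0.2, 0.5, 1.1, 1.8, 2.4])              # spectroscopy ratios
exon22_inclusion = np.array([12.0, 30.0, 55.0, 78.0, 90.0])  # % inclusion by RT-PCR

r, p = stats.pearsonr(dsred_gfp, exon22_inclusion)
print(f"r = {r:.2f}, P = {p:.3g}")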
Electronic supplementary material
=================================
**Supplementary Information** accompanies this paper at 10.1038/s41467-018-07517-y. A Peer Review File is also available.
**Publisher's note:** Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
We thank the MGH Martinos Center for Biomedical Imaging for providing access to the IVIS Spectrum, and Drs. T. Cooper, C. Thornton, and A. Burghes for providing DNA constructs. NIH R01NS088202 (T.M.W.) supported this work.
N.H., L.A., and T.M.W. performed experiments and analyzed data. S.M., T.H.F., and T.M.W. designed the study. T.M.B. designed and custom-built the in vivo fluorescence spectroscopy system. T.M.W. wrote the paper. N.H., L.A., T.M.B., S.M., C.F.B., F.R., T.H.F., and T.M.W. analyzed the data, discussed the results, and commented on the manuscript.
All relevant data are available from the authors.
Competing interests {#FPar1}
===================
C.F.B. and F.R. are employees of Ionis Pharmaceuticals. The remaining authors declare no competing interests.
|
What is Traditional Chinese Medicine (TCM)? How is it different from Western Medicine?
TCM has been used for thousands of years, with tens of millions of successful cases documented in history. It is one of the very few traditional medicine systems that continue to hold their own in the face of the western medical model. The power and effectiveness of Traditional Chinese Medicine are evidenced by its long history of continued success, as well as by modern research studies.
It is fundamental to the nature of TCM to see the patient as a whole, not just his or her condition. Unlike conventional medicine, which relies on surgery or drugs to cure, TCM believes in the innate ability of the body to protect, regulate and heal itself. Using Herbs (Herbal Medicine), Acupuncture, Moxibustion (Moxa), Cupping, TuiNa Massage and so on, TCM aims at self-healing and prevention rather than reacting to and suppressing symptoms as they arise. Patients having TCM treatments are often amazed to find that not only do their primary complaints improve, but many secondary conditions do too. By contrast, patients treated with modern western medicine, especially over an extended period of time, often develop complications due to side effects from the medicine.
How does a Chinese medicine doctor make a diagnosis?
In addition to western medicine diagnosis, a Chinese doctor makes a Chinese medicine diagnosis in terms of Yin, Yang, Qi, Blood, and Organ Imbalance. The diagnosis is based on the Eight Principles that are composed of four pairs of opposite categories: Yin and Yang, Exterior and Interior, Cold and Heat, and Deficiency and Excess. The Eight Principles serve as a framework for the information gathered through inspection, inquiring, listening, physical examination, tongue examination and pulse diagnosis.
Tongue and pulse diagnosis are two of the most important tools used in traditional Chinese medicine diagnosis. The tongue is considered a sensitive indicator of a person’s health. Subtle changes in its color, texture, shape, movement and coating indicate specific body imbalances and tell the progress of the illness. The character of a person’s pulses, like the tongue, also indicates the type of energetic disharmony or disorder. By feeling pulses on each wrist along the radial artery, a well-trained Chinese Medicine doctor can detect underlying imbalances of internal organs and the body as a whole. Once the pattern of disharmony/imbalance is confirmed and the Chinese diagnosis is made, the doctor makes a treatment plan for the particular patient to help him or her to restore good health.
Can the treatment be used in conjunction with conventional western medicine?
Conventional Western Medicine and traditional Chinese Medicine can be combined and they often enhance each other’s effects. Acupuncture, Moxibustion, Cupping and TuiNa are compatible with virtually all modern medical techniques. Chinese herbs (herbal medicine) can greatly enhance the benefits of conventional western medications, may help people live longer and reduce side effects caused by western medicine. At times, it may also be possible to eliminate the need for some of the western medications. It is important to be seen by a fully qualified Traditional Chinese Medicine practitioner for safe and effective treatments. |
A display connected to a digital computer gives us a chance to gain familiarity with concepts not realizable in the physical world
Ivan Sutherland in Digital Visual Effects in Cinema by Stephen Prince
I remember looking at science fiction (sci-fi) films and finding that not many of them were particularly riveting or enveloping. The ones that did seem to grip me, I’d say, were Neill Blomkamp’s “District 9” and James Cameron’s “Avatar”, both released in 2009. The former did a brilliant job with its social commentary, finding a very interesting way to provide an informative yet completely stunning allegory for South Africa’s apartheid era. The latter takes the human mind on a mystical excursion to places far beyond both its reach and its understanding. It is the execution displayed by both directors that truly solidifies what I believe to be a riveting sci-fi film. I had almost thought I would not be moved by impressive computer-generated imagery (CGI) and a carefully crafted narrative again… and then I saw “Interstellar”.
With such a gripping plot and hypnotizing visual effects, “Interstellar” (2014) had me convinced of every one of its intricate details. I will admit my lack of knowledge about scientific exploration, transcending time and space, space travel, and almost anything that requires complex mathematical strategy to figure out. Still, director Christopher Nolan somehow made it all work. Space and time are both concepts yet to be fully understood by humankind. The beautiful thing about “Interstellar” is its exhibition of the black hole, a phenomenon described as a region of space whose gravitational field is so intense that no matter or radiation can escape. Nolan took the film and pumped all of his faith in the expansion of the human mind and human capabilities into it.
I theorized about why “Interstellar” was so wildly amazing and came up with what I call “The Black Hole Effect”: the film pushes its surrealism, its uncanny and dreamlike story, so skillfully out toward its viewers that they become absorbed in trying to answer all of the questions the film raises. How is it that a human being can maneuver between dimensions and save the world? What mistakes and successes led him back home? Can love truly transcend space and time? Prince writes, “digital images take viewers through the looking glass into new landscapes of vision unavailable to ordinary sense, and enable them to peer into domains of the imagination. In the process, they have given filmmakers new methods for extending the aesthetics of cinema” (Prince, 55).
It is pure genius to toy with imagination and create an alternate reality that leaves your audience wondering about their own realities. The stunning visual effects of “Interstellar” dramatize the dangers of space travel, but we have no real-life event to compare it to. The film pushes man over the edge and shows how many ways he can be hit on the way down and still survive the fall. |
Many tasks expose human workers to adverse, or hostile, environmental conditions. Fighting fires, repairing underwater structures, reconnoitering an area, exploring planets, rescuing hostages and stranded people, and attacking enemy positions expose the personnel involved to a variety of risks to life and limb. For some time now robots have been employed to accomplish portions of these tasks. Some robots are simply equipped with wheels or tracks and can only operate on flat, unobstructed surfaces. Even the presence of small obstacles, ledges, steps, ravines and the like disables these simple devices. More complex wheeled robots, like the Mars rovers of recent years, have their wheels mounted on arms that allow the wheels some freedom to move vertically. Yet these robots may still become hung up on larger obstacles. Biologically inspired bipedal robots have also been introduced in an effort to overcome obstacles. These bipedal robots include complex leg mechanisms that mimic the walking motion of a human being. The software required to operate these mechanisms is quite complex and can do nothing to save the robot should it be toppled.
Also, because many robots use air-breathing engines for power, they cannot be used in space or underwater. The use of environmental air for combustion also poses problems if the robot should enter an environment contaminated by airborne chemicals (e.g. pollution, poison gas, smoke, etc.). Many of these substances can clog air filters or attack the hot interior surfaces of the engine and thereby precipitate failure of the engine. Further, many currently available robots use their onboard power supplies inefficiently, thereby precluding long missions. The inefficiency is in part due to the need for the robot to run the engine even while loitering, so that power is available in case a disturbance attempts to topple the robot.
Despite the problems with currently available robots, the need for robots has often been cited by many private and public organizations. Therefore a need exists for efficient robots capable of navigating around obstacles and escaping from traps. |
HPC Wales Now Able to Provide Businesses with Advanced Analytics Tool
June 16 — The UK’s largest distributed supercomputing network has launched a new advanced analytics solution to help businesses capitalise on the growth of Big Data.
With the installation of Apache Hadoop, a framework for storage and large scale processing of data sets on clusters, High Performance Computing (HPC) Wales is now able to provide businesses of all sizes with a powerful analytics tool to help them interpret large and complex datasets, rapidly turning large volumes of structured and unstructured data into meaningful information.
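For readers unfamiliar with the programming model, a Hadoop Streaming job can be written as two small scripts that the framework runs in parallel across the cluster. The word-count example below, in Python, is purely illustrative and is not HPC Wales' actual configuration; the two files are passed to the streaming jar as the -mapper and -reducer arguments together with -input and -output paths.

# mapper.py - emit one "word<TAB>1" line for every word read from standard input
import sys
for line in sys.stdin:
    for word in line.split():
        print(f"{word}\t1")

# reducer.py - sum the counts for each word (Hadoop delivers mapper output sorted by key)
import sys
current, total = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t")
    if word != current:
        if current is not None:
            print(f"{current}\t{total}")
        current, total = word, 0
    total += int(count)
if current is not None:
    print(f"{current}\t{total}")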
It was estimated by the International Data Corporation (IDC) that 2.7 billion terabytes of data were created worldwide in 2012, a figure that is growing rapidly year on year.
With Big Data ranging across sectors from a few dozen terabytes to multiple petabytes – which are thousands of terabytes – and beyond, businesses and academics can now take advantage of this wealth of data with HPC Wales’ powerful analytics tool.
The Big Data solution can be used for anything from predictive analytics, social media analytics and text analytics to disease detection, prevention and treatment; financial modeling and smart energy metering.
With 17,000 computer cores and a peak processing performance of almost 320 TFlops, HPC Wales’ supercomputing network is capable of running 320 trillion floating-point operations per second. This processing power, combined with access to sophisticated software packages like Hadoop and R analytics, means businesses in Wales can now harness the potential of supercomputing for the analysis of Big Data.
Part-funded by the European Regional Development Fund through the Welsh Government, HPC Wales is committed to boosting the Welsh economy by providing businesses and academic researchers with some of the most advanced computing technology in the world often at fully discounted prices.
Professor Sian Hope, Interim CEO of HPC Wales, said, “In just one minute of online activity, there are two million Google searches, 685,000 Facebook updates and over 200 million sent emails.”
“Companies such as Amazon and Tesco have been harnessing Big Data for commercial gain for some time now. These firms gather tonnes of data on customers, from what they’ve purchased to what websites they visit, where they live, when they’ve contacted customer services, and if they interact with their brands on social media.”
“With the launch of our new Big Data solution, businesses now have access to state-of-the-art technology to help them interpret this sort of data, providing valuable insights, helping them to boost their competitiveness in global markets.”
A company in Wales already benefiting from HPC Wales’ Big Data solution is Caerphilly-based Butterfly Projects, which has previously provided Big Data and predictive analytics services to the likes of Lloyds Banking Group and Zurich Insurance.
Sara Boltman, Founding Director at Butterfly Projects, said, “We are using Hadoop on HPC Wales for big data analysis and R for predictive analytics. Before accessing HPC Wales, we were restricted to processing up to 120 million records and our statistical model building machines would have to run for at least 12 hours overnight to complete this processing. Working with big data on HPC Wales now means that we can process terabyte scale data and gain a massive time saving against what could be achieved locally!”
“With access to the HPC Wales system, we no longer need to limit ourselves by size of data and can increase the complexity and volume of our workload, competing for larger and more competitive contracts.” |
<?php
/**
* Creates an account and grants it rights.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License along
* with this program; if not, write to the Free Software Foundation, Inc.,
* 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
* http://www.gnu.org/copyleft/gpl.html
*
* @file
* @ingroup Maintenance
* @author Rob Church <robchur@gmail.com>
* @author Pablo Castellano <pablo@anche.no>
*/
require_once __DIR__ . '/Maintenance.php';
use MediaWiki\MediaWikiServices;
/**
* Maintenance script to create an account and grant it rights.
*
* @ingroup Maintenance
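 *
 * Example invocations (illustrative; the option names come from the options registered below):
 *   php maintenance/createAndPromote.php --sysop ExampleUser ExamplePassword
 *   php maintenance/createAndPromote.php --force --custom-groups=group1,group2 ExampleUser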
*/
class CreateAndPromote extends Maintenance {
private static $permitRoles = [ 'sysop', 'bureaucrat', 'interface-admin', 'bot' ];
public function __construct() {
parent::__construct();
$this->addDescription( 'Create a new user account and/or grant it additional rights' );
$this->addOption(
'force',
'If account exists already, just grant it rights or change password.'
);
foreach ( self::$permitRoles as $role ) {
$this->addOption( $role, "Add the account to the {$role} group" );
}
$this->addOption(
'custom-groups',
'Comma-separated list of groups to add the user to',
false,
true
);
$this->addArg( "username", "Username of new user" );
$this->addArg( "password", "Password to set", false );
}
public function execute() {
$username = $this->getArg( 0 );
$password = $this->getArg( 1 );
$force = $this->hasOption( 'force' );
$inGroups = [];
$user = User::newFromName( $username );
if ( !is_object( $user ) ) {
$this->fatalError( "invalid username." );
}
$exists = ( $user->idForName() !== 0 );
if ( $exists && !$force ) {
$this->fatalError( "Account exists. Perhaps you want the --force option?" );
} elseif ( !$exists && !$password ) {
$this->error( "Argument <password> required!" );
$this->maybeHelp( true );
} elseif ( $exists ) {
$inGroups = $user->getGroups();
}
$groups = array_filter( self::$permitRoles, [ $this, 'hasOption' ] );
if ( $this->hasOption( 'custom-groups' ) ) {
$allGroups = array_flip( User::getAllGroups() );
$customGroupsText = $this->getOption( 'custom-groups' );
if ( $customGroupsText !== '' ) {
$customGroups = explode( ',', $customGroupsText );
foreach ( $customGroups as $customGroup ) {
if ( isset( $allGroups[$customGroup] ) ) {
$groups[] = trim( $customGroup );
} else {
$this->output( "$customGroup is not a valid group, ignoring!\n" );
}
}
}
}
$promotions = array_diff(
$groups,
$inGroups
);
if ( $exists && !$password && count( $promotions ) === 0 ) {
$this->output( "Account exists and nothing to do.\n" );
return;
} elseif ( count( $promotions ) !== 0 ) {
$dbDomain = WikiMap::getCurrentWikiDbDomain()->getId();
$promoText = "User:{$username} into " . implode( ', ', $promotions ) . "...\n";
if ( $exists ) {
$this->output( "$dbDomain: Promoting $promoText" );
} else {
$this->output( "$dbDomain: Creating and promoting $promoText" );
}
}
if ( !$exists ) {
// Create the user via AuthManager as there may be various side
// effects that are performed by the configured AuthManager chain.
$status = MediaWikiServices::getInstance()->getAuthManager()->autoCreateUser(
$user,
MediaWiki\Auth\AuthManager::AUTOCREATE_SOURCE_MAINT,
false
);
if ( !$status->isGood() ) {
$this->fatalError( $status->getMessage( false, false, 'en' )->text() );
}
}
if ( $password ) {
# Try to set the password
try {
$status = $user->changeAuthenticationData( [
'username' => $user->getName(),
'password' => $password,
'retype' => $password,
] );
if ( !$status->isGood() ) {
throw new PasswordError( $status->getMessage( false, false, 'en' )->text() );
}
if ( $exists ) {
$this->output( "Password set.\n" );
$user->saveSettings();
}
} catch ( PasswordError $pwe ) {
$this->fatalError( $pwe->getText() );
}
}
# Promote user
array_map( [ $user, 'addGroup' ], $promotions );
if ( !$exists ) {
# Increment site_stats.ss_users
$ssu = SiteStatsUpdate::factory( [ 'users' => 1 ] );
$ssu->doUpdate();
}
$this->output( "done.\n" );
}
}
$maintClass = CreateAndPromote::class;
require_once RUN_MAINTENANCE_IF_MAIN;
|
Product details
Leading Home Appliance
Emphasizing a globally advanced approach in creating affordable products for home use, Scanfrost products are designed to serve you better.
Cold As Ice in No Time
The Scanfrost Refrigerator SFR50 has a streamlined design, a zinc-coated and anti-rust body. It also has low noise operation and a quick cooling system that keeps food preserved for a long period of time.
Specifications:
- Total Volume: 50
- Energy Efficiency Rating: 3-STAR
- Number of doors: 1
- Refrigerator Capacity (Ltr): 35
- Refrigeration Type: Static / No Frost / Dynamic No Frost
- Number of shelves: 1
- Shelves material: PVC coated iron
- Freezer Capacity(Ltr): 15
- Frost free(Y/N): NO
- Freezer autonomy: 8
- Number of shelves: NA
- Shelves material: PVC coated iron
Buy the Scanfrost Refrigerator SFRDC050 on Jumia at the best price in Nigeria.
About Scanfrost
Scanfrost products have been a welcome presence in Nigerian homes for over 30 years. Known for superior quality, affordable prices and great after-sales services, the brand previously identified with freezers and display coolers, extended its portfolio in 2007 to a range of household appliances like cookers, microwave ovens, refrigerators, air conditioners, washing machines etc. Today the Scanfrost brand is recognised as a premium home making solutions leader.
Get Scanfrost appliances at the best prices on Jumia Nigeria and enjoy secure and convenient online shopping, nationwide delivery and guaranteed quality.
Specifications
Key Features
- 65 Litre capacity single door table top refrigerator
- Adjustable legs
- Separate chiller compartment
- Energy saving
- Single door
What’s in the box
Specifications
- SKU: SC034HL13XS0ONAFAMZ
- Area of Use: Dining Room|Bedroom|Kitchen|Home Office|Kids Room|Nursery
- Style: Modern
- Color: Silver
- Main Material: steel
- Model: SFRDC050
- Production Country: China
- Product Line: Seven Seven and Eight Resources
- Size (L x W x H cm): 52 x 46 x 50
- Weight (kg): 10
RICHMOND, Va. (WTVR) – The singer for Lamb of God, the world-famous, Richmond-based heavy metal band, was arrested this week in Prague in the Czech Republic in connection with the death of a fan two years ago.
Randy Blythe, 41, was detained when Lamb of God went to Prague for their show Thursday night at the Rock Café. He is being investigated for manslaughter over the fatal injury that occurred at their show on May 24, 2010, at another venue in Prague, according to Czech news reports.
Reportedly, Blythe struck or fought with the fan who had come on stage, and that person later died. [Read the band's response]
The four-time Grammy-nominated band cancelled Thursday’s show in Prague. There’s no mention on the band’s social sites about the status of Friday’s show in Leipzig, Germany.
Lamb Of God’s publicist, Adrenaline PR, stated to a music publication that Randy was “wrongly accused, lawyers are dealing with it, and we expect him to be fully exonerated.”
Randy’s brother, Mark Blythe, told CBS-6 that he’s awaiting further details and said the charge is “bogus and outrageous and will be dropped immediately.”
Lamb of God has been a top metal band for well over a decade, frequently appearing on the covers of heavy music magazines.
L.O.G. is one of Richmond’s best-known musical exports.
Sarah Nurse, right, of Canada, is congratulated by teammate Laura Stacey after scoring a goal against the United States during the second period of a preliminary round during a women's hockey game at the 2018 Winter Olympics in Gangneung, South Korea, Thursday, Feb. 15, 2018. Canada won 2-1. (AP Photo/Julio Cortez)
GANGNEUNG, South Korea (AP) Meghan Agosta and Sarah Nurse each scored in the second period and defending Olympic champion Canada clinched the top spot in pool play by edging the United States 2-1 on Thursday in an early showdown between the dominant powers in women’s hockey.
Genevieve Lacasse made 44 saves, including stopping Hilary Knight at the post inside the final 90 seconds. Brianna Decker hit two posts, the second time coming in the final seconds, before the two rivals ended up in a scrum. Officials reviewed the final play and ruled no goal. The Canadians also had two goals disallowed.
Kendall Coyne scored the lone goal for the Americans.
Canada and the United States are the only countries to ever win gold at the Olympics. The Americans won gold in 1998 when women’s hockey joined the Olympics, while Canada is here looking for a fifth straight gold medal for the country that created the sport.
They played eight times last fall through a pre-Olympic exhibition tour and the Four Nations Cup. The United States won two of the first three, but Canada now has won five straight against their biggest – and only – rival in the sport.
The United States certainly had plenty of chances, including Knight being stopped on a breakaway.
After missing on a penalty shot and hitting a post late in the second, the Americans got on the board when Coyne raced through four Canadians and scored 23 seconds into the third period.
Canada thought briefly it had the first goal of the game with 3:15 left in the first period, but Melodie Daoust and captain Marie-Philip Poulin were in the crease with the play blown dead. The official immediately signaled no goal.
Agosta put Canada up 1-0 at 7:18 of the second on the power play. With Megan Keller in the box for interfering with Poulin, Natalie Spooner in her 100th international game spun and hit Agosta in the slot with a backhanded pass. Agosta’s shot went off Rooney’s glove and in for the goal.
Nurse scored at 14:56 with a shot from the left circle that went off Rooney’s elbow. Laura Stacey appeared to be offside as Canada brought the puck into the zone, but the United States did not challenge.
Officials awarded Jocelyne Lamoureux-Davidson a penalty shot at 16:08 of the second after Canadian forward Haley Irwin placed a glove on top of the puck amid a pile of bodies in the crease. Lamoureux-Davidson, who scored the fastest back-to-back goals in Olympic history in the U.S. win over Russia, moved in too slowly and got the puck caught near her right foot before attempting a backhand that Lacasse easily deflected.
U.S. coach Robb Stauber started Maddie Rooney, his youngest goalie with all three of the U.S. wins against Canada. |
Richard K. Herz is the author of the Lab. He is a professor in the Department of NanoEngineering at the University of California, San Diego - UCSD. His students used the first module in 1993, and he has been developing the Lab ever since. -- email: herz@ucsd.edu -- home page at UCSD
Ivan Gagné is a junior chemical engineering major at UCSD. He joined the project in May 2002 and his assignment is to bring some graphic excitement to the project through 3D reactor models and animation. He is also developing a simulation of a fuel cell.
Diego Trujillo graduated from UCSD in December 2003. He joined the project in August 2002 and helped develop a simulation of an adiabatic accelerating rate calorimeter.
We also have volunteers who have supplied translations of the text into other languages and have helped in the development of lab modules. UCSD chemical engineering students have used the Lab in their courses and have provided valuable feedback.
YOU can help in many ways by sending us: comments via email, photos and images of reactors, suggestions for new labs (variable definitions and model equations even), ...
The National Science Foundation is helping to support development of the Lab through award DUE-0125076, starting in March 2002 through February 2005. Our sponsor is NSF's Division of Undergraduate Education - Course, Curriculum and Laboratory Improvement Program DUE - CCLI.
Q:
UITableViewCell button change image
In my table cell I have a button, set up like this:
cell.btnAddFriend.tag = indexPath.row;
[cell.btnAddFriend addTarget:self action:@selector(addFollow:) forControlEvents:UIControlEventTouchUpInside];
When I tap the button it calls the addFollow: method; this is working fine.
In the addFollow: method, based on a condition, I have to change the image of cell.btnAddFriend.
For that I am using this code
[cell.btnAddFriend setImage:[UIImage imageNamed:@"icon_friend.png"]forState:UIControlStateNormal];
If there is only one row in the table this works fine, but with more than one row it does not, because I am not passing which index path's button image has to be changed.
Can anyone help me set the cell's button image from this function?
A:
You can use the following - you actually get your button in addFollow
- (void) addFollow:(UIButton *)sender //this is, actually, the button you tapped, so you don't need to worry which cell it is
{
[sender setImage:[UIImage imageNamed:@"icon_friend.png"]forState:UIControlStateNormal];
}
However, you should consider the following: if you reuse cells in tableView:cellForRowAtIndexPath:, then you may see the image set on buttons in cells you never actually tapped. This can happen if not all your cells fit on the screen (you can scroll them out of visibility). In that case, you should keep state for whether each button was tapped or not.
One way to do it: declare a property and initialize it before use (e.g. self.buttonTagsTapped = [NSMutableSet set]; in viewDidLoad; otherwise addObject: messages go to nil and are silently ignored)
@property (nonatomic, strong) NSMutableSet *buttonTagsTapped;
addFollow will change
- (void) addFollow:(UIButton *)sender //this is, actually, button you tapped, so you don't worry which cell it is
{
[sender setImage:[UIImage imageNamed:@"icon_friend.png"]forState:UIControlStateNormal];
[self.buttonTagsTapped addObject:[NSNumber numberWithInt:sender.tag]];
}
And where you configure the cell in tableView:cellForRowAtIndexPath:
cell.btnAddFriend.tag = indexPath.row;
if ([self.buttonTagsTapped containsObject:[NSNumber numberWithInteger:indexPath.row]]) {
[cell.btnAddFriend setImage:[UIImage imageNamed:@"icon_friend.png"] forState:UIControlStateNormal];
}
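For completeness, here is a minimal sketch of the whole tableView:cellForRowAtIndexPath: method. It assumes a custom cell class named FriendCell registered under the identifier @"FriendCell" and a hypothetical default image icon_add_friend.png; those names are illustrative, not from the original question. Because reused cells keep whatever image they last showed, the image is set explicitly on every pass for both states, and the tracking set is created lazily:
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
    // FriendCell and the reuse identifier are assumptions for this sketch.
    FriendCell *cell = [tableView dequeueReusableCellWithIdentifier:@"FriendCell" forIndexPath:indexPath];

    // Create the tracking set lazily the first time it is needed.
    if (self.buttonTagsTapped == nil) {
        self.buttonTagsTapped = [NSMutableSet set];
    }

    cell.btnAddFriend.tag = indexPath.row;
    [cell.btnAddFriend addTarget:self action:@selector(addFollow:) forControlEvents:UIControlEventTouchUpInside];

    // Reused cells keep their old image, so set it explicitly for both states.
    if ([self.buttonTagsTapped containsObject:[NSNumber numberWithInteger:indexPath.row]]) {
        [cell.btnAddFriend setImage:[UIImage imageNamed:@"icon_friend.png"] forState:UIControlStateNormal];
    } else {
        [cell.btnAddFriend setImage:[UIImage imageNamed:@"icon_add_friend.png"] forState:UIControlStateNormal];
    }

    return cell;
}
An alternative to tracking row indexes in a set is to store the followed/not-followed flag on your data model objects; that approach survives row insertion and reordering better than raw row indexes do.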
|
The Continued Flow of Babies is the Nature in Motion.
Without nature we could not live and remain on this earth; everything must stay in balance to give us its best.
Many ignore nature and find other things more important, destroying it without a thought that what they destroy is their own existence.
Mother Nature, with all her trees and plants, keeps fresh oxygen in the air, fills it with scents and colors, and gives us ground on which to grow food.
Some people forget just how important the outdoors is, and that its purpose is to keep everything in balance; dealing with it recklessly can have harmful effects.
In the long run it seems almost inevitable that things will go wrong, given the pace at which nature is traded away for construction projects and highways.
Forests and jungles are cleared for profitable projects with no plan for where the animals that live there are to go, or any chance for them at a further life.
Companies grow bigger, with ever more elaborate developments, until there is no land left to us to supply food for the population.
So our descendants will soon get not natural but imitation food, something that seems to taste the same; is that our future?
The world population grows every day; more people are still born than die, thanks to proper health and medical facilities.
But that too will change in the future if we keep going with no respect for Mother Nature and do not cherish her.
Treating nature this way will not bring much good in a world where more and more people arrive and everyone must eat; where will the food come from?
And if we continue to destroy nature and build factories with foul exhaust, putting ever more pollutants into the soil and air, it will no longer be possible to take in natural nutrients from a feeding ground laced with toxic substances.
The continued flow of babies is nature in motion, but where will all who come after us obtain the food they need, when nothing more can be regrown?
All the best for You with a Good Health
©author Jan Jansen @authorjanjansen @janjansenpoetry |
Multi-generational Family
My mother-in-law who lives with me always scolds my three-year-old son for being noisy. I am worried that this behavior may affect him. Yet, my husband told me to bear with it. What should I do?
You can consider arranging full-day schooling for your son, or taking him somewhere to relax after school or during the lunch break, such as a playground or library. You can try to give your mother-in-law and yourself a little more space.
There are many facilities in the community that can provide you with assistance such as mutual help child care centres and community babysitters.
Of course, you can also go to the integrated family service centre in the district to talk to a social worker, or participate in our multi-generational family education and support programme to learn the skills of getting along between mother-in-law and daughter-in-law. You can also try to talk to your husband based on your child’s needs. For example: “It’s normal for a child to play and be noisy, but your mother-in-law often complains that he is too noisy. To let the child grow up healthily, it’s better to talk to your mother-in-law. For example, the child can play at home at a specific time each day, or he/she can only be noisy for 30 minutes at a time, etc." This will help your husband understand that it is natural for the child to be noisy, and encourage him to communicate with your mother-in-law.
Source: Ms. Lok, Social Work Consultant, Hong Kong Family Welfare Society |
Technology has never been so pervasive in an individual’s life. Though historically we have trusted human interaction over automated computer processes, recently we’ve become more accustomed to the influence of technology on our daily routines. As a result, it is changing how humans think, communicate, and work.
New tools allow us to do more in less time and communicate ideas instantly and effortlessly. They provide the freedom to focus on more profound problems.
Tech Changes Human Cognition
Mobile technologists are at the forefront of this movement. They’re actively designing and developing new technologies that shape the future, and they are excited about the possibilities it has to offer. In our 2017 Technology and the Human Condition survey, we’ve discovered that experts, more than mainstream consumers, understand the benefits of technology. This mentality comes with many changes to the way we view its role in our lives.
Product teams envision a future where the ubiquitous presence of computers is as common with non-technical consumers as it is with technologists. We surmise that technologists’ immersion in a culture of innovation has led them to have a more forward-thinking perspective. This more radical outlook has led to a greater trust in technology.
“Technology removes the most menial and unfulfilling tasks for humans. Now we have time and energy for more abstract thinking. Technology has eliminated our need to continuously fight for survival.” – MARCUS SMITH, SOFTWARE ENGINEER, STABLE|KERNEL
The Effects of Artificial Intelligence
Technology is taking over more routine, repetitive tasks. As a result, humans are trusting technology to make our lives simpler as AI is creeping into our homes, our transportation and our smartphones. Driverless cars are on the road in many of our cities. Yet just a few years ago, we wouldn’t have entertained the idea that we could hail a private car through an app with the swipe of a finger.
Fifty-nine percent of our survey respondents say they would ride in a driverless Uber car today.
Now it is possible to travel without having to interact with a human at all. This kind of disruption has transformed our concept of what we consider to be normal and safe.
But, are mainstream Americans ready to embrace direct communication with a computer? AI enthusiasts seem to think so. Research firm Tractica estimates by 2021, more than 40 million homes will have a home assistant. Amazon’s industry-leading Alexa is now a household name; they’ve sold more than five million units since 2015.
Google jumped into the smart assistant space in Nov. 2016 with the launch of the Google Home. Though Google and Amazon are not the only smart assistant products on the market, they are expected to sell more than 24 million combined units in 2017.
While Amazon currently leads the smart assistant category, 50 percent of our survey respondents believe Google will eventually outpace Amazon. These same respondents also believe Google will be the most innovative tech giant in 20 years.
Related: A Deep Dive Into Machine Translation
The Internet of Things
Personal home assistants are one of the newest connected home devices to join the Internet of Things. Gartner research suggests more than 6.4 billion devices will connect to the Internet in 2016. This number is expected to reach anywhere from 25 to 50 billion by 2020. We asked those surveyed what they thought about the future of IoT, and the majority believe IoT will achieve widespread adoption quickly. In fact, 72 percent of respondents agree or somewhat agree that all electronics will have an IP address by 2050.
Related: Top 10 IoT Technologies Every Business Needs to Know
Technology merely enables humanity to be more productive. It is a tool meant to enhance our abilities to create solutions to the problems we face as a society. Ultimately, human need and desire will drive innovation. |
This is the story of Ashely Marie Thomas (year 2075)
The air tasted bitter and she felt the burning in her chest as her heart pounded and echoed with the beat of her footfalls. The world passed by her in a blur through the warm tears that flowed down her cheeks. She felt herself bump into someone but she didn’t pause to look back; she had to get away from all the pain that clawed its way to the surface. How dare her parents betray her this way. Why her? she repeated over and over in her head.
“I hate them both,” she cried as she rounded the corner and realized that she had come to the shipyards.
Clenched hands trembled at her sides in pure hatred and anger before she found a small corner next to some crates. It was there that she buried her face and let the pain take over, wracking her with sobs. The image of the hidden email that she had intercepted tormented her.
08252074888
Aztechnology
Tenochtitlán, Aztlan
Re: Project Ozma 3082061
This letter constitutes notice under section 31(b) of the Aztechnology Ozma project to inform you that you have reached the end of your maintenance agreement with Aztechnology and that we will no longer be able to provide support or maintenance to you in any form with project Ozma. It has been brought to our attention that the subject in project Ozma has been outside of the set boundaries and restrictions placed for controls. Due to the contamination of the project, control is no longer viable. All access to company services, personnel or equipment as per our service agreement will be suspended.
The termination of project Ozma will occur on November 27, 2074 at 0900. All of the property of Aztechnology and associated data will be returned at the above time and date for termination.
After what seemed like an eternity, she took a deep breath, letting disbelief and anger slowly fade. Now everything was beginning to make sense to her. She was the project called Ozma and she had been nothing more than a lab rat. Even her own parents, or those she had thought were her parents, had been involved. All the dots were starting to connect in her head. The distancing of her parents, her isolated life, mandatory matrix schooling and the refusal to let her do anything outside of the pyramid made more sense now. She always knew she was special, since she could do things that others seemed to require technology to do. Her whole life as she knew it had been shattered into a thousand fragments when she read that communication. She had been living a lie. Her final act of defiance was to leave her known life behind. She was not going to be someone’s lab rat.
Reaching up, she pulled on her jacket, squared her small pack with a gentle tug and turned to walk into the shipyard. It was here she was going to start her new life. She glanced back over her shoulder, making sure that no one had followed her, then whispered to the air, “goodbye…”
Her sapphire gaze flickered over the expanse of cargo crates that lined the docks. Small electronic images blazed in a heads-up display overlaid on her vision. It showed her the time, the current temperature, and a miniaturized map of the docks. She lazily let her mind wander for a moment, allowing it to dance over the various lines of electronic jabber, until she found what she was looking for. Mentally she pushed with her thoughts and linked with the cargo crates' manifest listings, which then added the contents of the cargo crates to her display. She located what she needed and her feet mapped a path into the depths of the dock warehouses to a small cargo crate. It was empty, and this is where she would stay until tomorrow.
One day turned into another and soon a month had passed. The cargo crate had become more of a permanent home for her. Her life of easy meals and a soft pillow to sleep on was a thing of the past. Now it was scrambling to find the next meal and trying not to become a victim of violence or sexual crime. Her clothes had holes where they had worn out, and baths became a thing of luxury. She had become a little bird lost in the forest of buildings and was one of the city’s many homeless vagrants. Her talents stayed quiet and unused for fear that the corporation would find her.
The wintery rain sleet mix was cold as it drizzled down her back and soaked her to the bone. The dirt washed from her skin and she shivered from the frigid kiss of the moisture and biting wind. She watched the building from the evening shadows while she mentally steeled herself to return to the place she had been running from for the past several months. Her fingers curled around her soaked curls and drew them back into a tight braid hoping to keep the ragged edges of her look to a minimum. She pulled her jacket snugly over her wet frame and moved toward the dreaded building swiftly.
The door to the Pyramid shopping center swung open as a well dressed businessman exited and she darted in before it could close. She was hungry and she knew the food court would have something warm she could get. Walking up to an electronic station she pulled in close and lowered her head, attempting to hide her actions. She sat on the bench next to two other people who appeared to be currently jacked in to the net. A smile curled her lips briefly, as she knew they were too busy in their own augmented reality to pay any attention to her. Earlier she had obtained a small credstick and she knew she was going to have to use her abilities in order to load it with a few credits to eat. She hoped that it wasn’t what was called a screamer, which meant that she would have to run if it was. Her fingers moved to the console interface of her small Sony Emperor and she tried to make it look like she was actually using it.
Closing her eyes she could feel herself slip into the world of numbers, drifting on the lines with her mind and melding with the wireless surroundings. She traveled the trail slowly and carefully, knowing that the Grid Overwatch Division might spot her in the weave of code. After slow manipulation she was standing in a virtual world with an avatar that reflected where she was currently sitting in the physical world. She ran her fingers over her strands, causing their red hue to shift to a golden blonde, and altered her body slightly, enhancing herself in areas that were still developing, hoping that no one would recognize her in this virtual paradise. It was camouflage to the net that many referred to as static veiling, and as long as she didn’t jump from grid to grid, she knew she was pretty well hidden from the Grid Overwatch Division. It felt so good to be connected and she found herself feeling at ease.
Standing in the matrix she smiled and moved to find her way to some swift cash for her meal today. A little bump here and a little bump there on certain money transfers, and she could siphon off a minute amount of creds that would not be missed. What seemed like an hour in this augmented reality was only a few minutes in the real world. She snapped back along the edgy lines of code and slowly opened her eyes. Smirking, she rose from her seat and tossed the credstick lightly into the air, catching it again in the palm of her hand. The credstick now had some credits on it, which was going to be enough to get something to eat and to shop for a few things needed to make her life more comfortable in her makeshift container home. Taking the stick, she made her way to the center of the food court to order something warm for the day.
“Ashley?” she heard her name called out curiously.
A moment of pure panic welled up within her and she pivoted swiftly, prepared to bolt and run. She knew that this was a risky thing to do, but the lure of a hot meal had been too tempting. It had been over a month since she had left home, but she knew that no one would suspect her of coming back right under their very noses to get something.
The voice was Emma’s. The one person she had regretted losing contact with when she ran. She had been afraid to reach out and contact her all this time. She knew the corporation would be watching and monitoring, hoping for that one little nibble to lead them to her. She turned to face her and lifted a finger to her lips, signaling Emma to be quiet. Her head tilted in a silent gesture toward the doors, hoping that she would take her up on the motion. Grabbing the bag of food that was placed on the counter, she moved swiftly through the crowd and out the door, hoping that Emma would follow.
“I don’t understand” Emma stated as she exited the building and rounded the corner following her.
“It’s a long story, but I promise you I am not crazy. I need you to promise me you won’t tell anyone about this. “ Ashley stated while squarely looking Emma in the eyes. “It could mean life or death for me.”
She waited to see what Emma would say.
“I promise, but surely it’s not that bad.” Emma shifted to stand underneath an eave to keep the cold drizzle off of her while they spoke.
Ashley smiled at the confirmation of the promise and then took a deep slow breath.
“I don’t want anyone to know where I am anymore. My parents, you see, they…” she paused briefly as she pondered what part of the truth she would tell. “They… weren’t really my parents and things got ugly between us. I had to leave and I don’t want them to know where I am, okay?” she finally finished.
Of course that was not the full story. She had left out the fact that they probably were not even her real parents to begin with and had planned to turn her over to Aztechnology’s science lab to be dissected. The more she thought about this the more ridiculous it sounded even to her. She knew that she needed to just keep it all under wraps. Better to let people think there was something else going on and to keep her secret. Emma nodded slowly and pushed back her own tousled strands.
“Well at least we can go to my place. Perhaps you can spend the night and we can talk it out. I will ask my brother and see if you could stay. I’m sure it won’t be a problem.”
She then took notice of how rough looking Ashley was around the edges.
“But first, we have to get you something else to wear and go eat something besides that.” she pointed to the greasy burger bag in Ashley’s hand and laughed. “That stuff will kill you.”
With that, the both of them spent the afternoon shopping and just being teens. It was a nice change for Ashley and she longed to just be normal like this once again. |
100% Original Work. Don’t use plagiarized sources. Get Your Custom Essay on Punishment for prosecutors | Law homework help Just from $13/Page Order Essay Graduate Level Writing Required. DUE: Friday, July 3, 2020 by 5pm Eastern Standard Time. Background: The chances that you will face ethical dilemmas in the workplace is almost certain. This is true for defense attorneys and prosecutors, too. Because of this, it is important to practice proper discretion. How are you going to analyze and solve the ethical dilemmas you face? Read and reflect on the Ethical Dilemmas exploration activities each week to help you complete this activity. Write a 700- to 1,050-word paper describing the ethical issues for prosecutors and defense attorneys. Answer the following: What types of ethical violations and punishments have been associated with prosecutors and defense attorneys? What are the explanations for prosecutorial misconduct? Provide real-world examples of prosecutorial misconduct. The Jodi Arias case isa great example of a prosecutor stretching the limits of what is proper conduct and what is a clear violation. Should attorneys be punished more or less than the standard criminal defendants? Explain your answer. Include a minimum of 4 references from texts, articles, journals, local police or criminal justice policy, and websites; only 2 may be websites. Format your paper consistent with APA guidelines. Must Be Graduate Level Writing 100% Original Work
Overview: Place yourself into the following scenario. You will take on the role of union president and use the facts
Don’t use plagiarized sources. Get Your Custom Essay on Responding | Education homework help Just from $13/Page Order Essay. First respond: Drosopoulos, J. D., Heald, A. Z.,
In this brief written essay assignment you will meet a real person who has lived with PTSD by reading their story or watching their video. Don’t use plagiarized sources. Get Your Custom Essay on One person’s personal story with ptsd Just from $13/Page Order Essay Utilizing reliable internet sources, locate a video or article in the form of a story published by a person who has experienced PTSD. A search of the Veterans Administration, other PTSD-related websites, and YouTube should help you locate a wide array of stories from PTSD survivors to select from. This individual can be at any stage in his or her experience with PTSD. S/he may still be experiencing PTSD, or may be in treatment currently, or may have come through to the other side of full or partial recovery. Here are the parameters for choosing your source. Your chosen story cannot be a part of assigned readings or videos for this course. You must include the full APA formatted citation for the website or journal/magazine where you located this article or video. (If an article, the article must be freely accessible for purposes of evaluation and grading.) YouTube and other reputable video websites are permissible for this assignment. You are responsible to choose a story that is factual and reputable. Once you have chosen your source, write a 2 page brief essay that addresses the following: In 1 paragraph, briefly describe the person profiled in this personal story (e.g. gender, age, background, and any other pertinent information that will put the person’s experience with PTSD into perspective). In 1-2 paragraphs, write a brief summary of this person’s experience with PTSD. This should not exceed one third of the overall body of your paper. Describe 3-5 connections you were able to make between our course materials (descriptions, diagnostic criteria, theoretical frameworks, treatment models, etc.) and this individual’s personal experience with PTSD. For example, were you able to identify classic symptoms of PTSD from what they described? Was the cause of their PTSD clearly identifiable? To what extent was any treatment described identifiable as an evidence-based best practice? Please describe at least 1 thing you might have done differently, or recommendations that you might make for this person to minimize long-term effects of exposure to trauma, based on what you have learned this term. Close your essay with 1-2 well thought-out final reflection paragraphs that share what you learned in reading this person’s story. Include 2-3 questions in your closing that you would ask this person if you had an opportunity to interview them and learn more about what they have been through. Reference: Posstraumatic and Acute Disorders, Matthew Friedman Sixth Ed The War at Home,One Family’t fight Against PTSD, Shawn Gourley
Writing Assignment Writing ServiceDon’t use plagiarized sources. Get Your Custom Essay on Easy assiment in 3 hours Just from $13/Page Order Essay Research Your Family Health History for Diabetes For this assignment, you will need to interview (orally-not through e-mail) at least three family members (can be parents, grandparents, great grandparents, aunts, and uncles) to see if diabetes is present in your family health history. When interviewing the family members, be sure to ask the following questions: 1. Have you ever been tested for diabetes? 2. Have you ever been diagnosed with diabetes? If so, what type of diabetes? 3. If they have been diagnosed with diabetes, what were the symptoms they experienced? How are they treated? What lifestyle changes have they made? 4. If they have not been diagnosed with diabetes, do they know what the symptoms of diabetes are? If they don’t know, be sure to share with them what you learned from in pp. 395-406. Do they know how they can reduce their risk for diabetes? If not, be sure to share the information from pp. 402-403 in your text book. This assignment is intended for your to learn from your family members, or for them to learn from you. When you have completed the interviews, write up (in complete sentences) your results from each of the three interviews and your reaction to the experience. Be sure to include the first name and ages of the individuals you interviewed. Use 12 point font, 1 inch margins and double space your 1 page paper. I want this easy one in three hours for 7$
politics in russia (late 80s to present) hwk : 10 1-page max answers
This HWK is due on Sunday night at 22:00 pm NYC time . Don’t use plagiarized sources. Get Your Custom Essay on politics in russia (late 80s to present) hwk : 10 1-page max answers Just from $13/Page Order Essay Each answer must be approx. 1 page long (12 Times New Roman Font) except when asked to answer briefly where only 1 paragraph is needed. 1st question includes my answer to give you a sense of what I am looking for. Writting must be your own
Case Study on Death and Dying The practice of health care providers at all levels brings you into contact with people from a variety of faiths. This calls for knowledge and understanding of a diversity of faith expressions; for the purpose of this course, the focus will be on the Christian worldview. Don’t use plagiarized sources. Get Your Custom Essay on Case study on death
Due in 5 hours- i need a 1 page of literature review of the document
Please note that you are starting your literature review with the questions above, but you will need to expand the answers to these questions to meet thecritical elements of the literature review for your final submission. The following critical elements must be addressed as also outlined in the literature reviewsection of the Final Project Part I Guidelines and Rubric document:I. Literature Review: In this part of the assessment, you will analyze foundational research presented in the course for how the field of social psychologyhas changed over time, how researchers have designed research to study social psychology, and how issues of ethics have been addressed historically inthe field.A. Summarize the claims made by the authors of the foundational research presented in the course regarding how social context factors influencehuman behavior. In other words, what claims are made by the research about social context factors and human behavior?B. Summarize the claims made by the authors of the foundational research presented in the course regarding how social motives influence humanbehavior. In other words, what claims are made by classic and current research about social motives and human behavior?C. Explain how the view of social context factors and social motives has evolved over the history of social psychology. Be sure to support your analysiswith examples from research to support your claim.D. Explain the conclusions you can reach about research in social psychology. In other words, explain what we know about social context factors andsocial motives, based on your review of the research presented in the course. Be sure to support your analysis with examples from research tosupportresearchers help in conducting research regarding social psychology?G. Discuss how issues of ethics have been addressed in the foundational research presented in the course. For example, how did the authors informthe participants of what the experiment would entail? How did the authors account for any potential risks to participants associated with the study?H. Discuss how issues of ethics in social psychology have been viewed historically. In other words, how have issues of ethics in the field been viewedover time? Has this view changed as the field has progressed? Be sure to support your response with examples from research to support your claims. Don’t use plagiarized sources. Get Your Custom Essay on Due in 5 hours- i need a 1 page of literature review of the document Just from $13/Page Order Essay
Qualitative and quatitive analysis of two stocks in the same industry
The requirements of this project is that the paper be between 8-10 pages, to include a Title and Reference page, References must includes in APA format and cited within the document. Don’t use plagiarized sources. Get Your Custom Essay on Qualitative and quatitive analysis of two stocks in the same industry Just from $13/Page Order Essay Do a quantitative and qualitative analysis of Smith
Recall your chosen firm and industry you have been using throughout the course. For this assignment, you will identify the top three major safety and health issues in your firm, and write a policy on each, consistent with Occupational Safety
Don’t use plagiarized sources. Get Your Custom Essay on Sb learning | Education homework help Just from $13/Page Order Essay Teachers |
Kyo adventure, and who want to do something a little different for their wedding.
Your wedding is one of the most memorable days of your life. My goal is to capture exactly how it felt. I do my very best to put you at ease, stay unobtrusively in the background, and make the whole process as fun and effortless as possible.
I’m excited to meet you!
29 Reviews for Kyo Morishima Photography
Recommended by 99% of couples
Hannah K. · Married on 05/07/2019
Amazing Wedding Photography
Kyo photographed our wedding at Nassau Inn in Princeton NJ this June. Our experience with Kyo and his team was one of the highlights of our wedding. We don’t usually enjoy posing and being photographed but Kyo made us feel at ease and gave us very good cues. He was very excited about all our ideas for photos and walked with us all around campus for photos at all the locations we wanted. Kyo knew where to go to get the best shots at the locations we chose and was very knowledgeable about the campus. During the wedding we got pictures with all of our guests as well as of all the decor elements that we had put together.
Sent on 06/16/2019
The booking and planning process was also very easy and Janna was very helpful and responsive throughout. We highly recommend Kyo for any event!
Kyo Morishima Photography's reply: Hannah, thank you so much for this sweet and wonderful review! You and Mateo inspire us -- and we can't not mention your beautifully creative and meaningful table decor, the llamas! SO CUTE.
Rose · Married on 09/29/2018
Truly beautiful photographs and a friendly, smooth experience!
Kyo photographed our September wedding in Philadelphia and we are so incredibly happy with how the photographs turned out. Communication was easy throughout the process and we are so impressed with Kyo's eye for detail and his ability to capture moments that were important to us. The formal family portraits were framed in front of a tree that we all had seen many times and did not consider particularly special, but in a testament to Kyo's eye and talent, it looks truly fantastic in these images! We were able to take a lot of formal family portraits quickly, which was really important to us and our parents, and Kyo was a calming presence throughout. His photos of the ceremony, cocktail hour, and dance floor are all equally beautiful. Both of our families are thrilled to have such amazing images that reflect how special the day was and all of the thought everyone put into it. We can't recommend Kyo and Janna highly enough.
Sent on 11/13/2018
Kyo Morishima Photography's reply: You made our heart sing.
Alexis · Married on 10/28/2017
This is a very belated review from October! Kyo and his team were consummate professionals from beginning to end. He and Janna responded incredibly quickly to emails and any questions we had.
Sent on 03/17/2018
Additionally, and most importantly, Kyo is just a fantastic photographer. He captures moments in a way I haven’t seen any photographer manage. He has a beautiful eye and I truly could not have been happier with my photos.
Kyo was also just a great person to spend the day with. I am not super comfortable being photographed but his direction and suggestions were fantastic, and he’s very laid back and kind, which put me at ease instantly.
Lastly, the pricing is EXTREMELY reasonable for the very high level of quality Kyo offers. I do not think there is a better option for wedding photography in the Princeton/NJ area than Kyo!
Kyo Morishima Photography's reply: Wow, that’s one of the best things anyone has ever said about us. We were honored to work with you and can’t thank you enough. :)
5 Endorsements
We recently recommended Kyo Morishima Photography to our client and we are so happy that we did! Kyo and Janna were super easy to work with from beginning to end. As event planners, we always look for vendors who are professional, respond in a timely manner, and who are flexible with our clients' needs and budgets. Kyo offered just that! On the day of the event, Kyo remained calm and on task and was unobtrusive when taking photographs. He took the lead when necessary and easily managed a family portrait session during the party, which included more people than the norm (without taking too much time so guests wouldn't miss out on the party). Kyo had a quick turnaround time for the pictures and overall, our clients were thrilled with how beautifully he captured the day! We look forward to collaborating once again on future events.
Beautiful photography, kind and compassionate photographers who will make your special day all the more special. I met the husband and wife team at a cake event and experienced their quality photography first hand. Kyo Morishima Photography are wonderful!
Kyo Morishima is one of the nicest and most innovative photographers I have worked with since becoming a wedding officiant. The attention to detail and creative photos captures the essence of true love between the couples he photographs. While remaining almost invisible, Kyo was able to document every detail from the signing of the marriage license to the wonderful Fall landscape of the wedding on which we both worked. I wouldn't hesitate to recommend this photographer if you are in the market for someone to create very special memories of your big day! Betty Coram, Wedding Officiant New Jersey Beautiful Weddings
Summer Activities
Mansfield, Ohio
1-800-872-0222
Plan Ahead For Fun Summer Activities In Mansfield Ohio This Season!
When the summer season arrives, the search for summer activities in Mansfield OH is on. This is especially true if you have children. Once the kids are out of school, boredom sets in quickly without planned Mansfield Ohio summer activities. With the whole internet at your fingertips you have unlimited ideas and resources for choosing your Mansfield OH summer activities all season long.
A great place to start your search for summer activities in Mansfield Ohio is your local Parks and Recreation department, which offers summer activities to please all tastes. A great benefit of planning Mansfield OH summer activities through your local Parks and Recreation department is the cost savings. As summer activities in Mansfield Ohio go, nothing beats a cold ice cream cone on a hot sticky day. A trip to your local theater or bowling alley can be a great idea for summer fun in an air conditioned environment. Many theaters and bowling alleys offer special programs as fun Mansfield Ohio summer activities for kids. Search locally for all of the options available for Mansfield OH summer activities through local businesses. Sometimes you can even find coupons and discounts available for your fun.
Another great option when planning your kid's summer activities in Mansfield Ohio is a summer camp. You will find summer camps that last for long periods of time away from home as well as small day camps. You will also find summer camps that cater to many different interests.
As fun as all of these options are, nothing can beat Mansfield Ohio summer activities and fun in your own backyard. Old fashioned Mansfield OH summer activities like bubble blowing and water play are still popular favorites among today's kids. With a bit of creativity you can plan summer activities in Mansfield Ohio at home all summer long that won't break your bank. Adults can enjoy summer activities in Mansfield OH at home along with the kids. While the kids play in the yard you can bask in the sun and enjoy the slower pace of life that summer brings with it. Enjoy all of the summer fun and relaxation!
In Dubai, Le Royal Meridien Beach Resort And Spa is located close to the Dubai Marina Mall, Dubai Marina and The Emirates Golf Club. Additional regional attractions include Aquaventure and the Mall of the Emirates.
Resort features.
The resort has a private beach where you can relax.
The resort has 8 restaurants along with 2 cafés/coffee shops. Wireless internet is free in public areas. Guest parking at the resort is free.
Guest rooms.
Rooms feature balconies with sea views. The 500 air-conditioned rooms at Le Royal Meridien Beach Resort And Spa include CD players and minibars. Free high-speed wireless and wired internet access is available. 42-inch LCD televisions come with satellite channels and DVD players. Bathrooms offer handheld showers, scales, bathrobes and designer toiletries.
The Caracalla Spa has a sauna, a spa tub and a Turkish bath/hammam. A range of treatments and therapies is offered, such as aromatherapy.
Al Khaima - This restaurant serves dinner only.
Mi Vida - This restaurant serves dinner only. The restaurant is closed on Fridays.
Al Murjan - This coffee shop serves afternoon tea only.
The pub - This buffet restaurant serves breakfast, lunch and dinner.
The following service is available: 24-hour room service.
The resort offers a private beach, a marina and a health club. 3 outdoor pools and a children's pool are located at the hotel. A form of identification must be provided.
Who’s Stealing Plants Off Main St. In Blytheville, AR?
(Blytheville, AR) “Rip and run” plant thieves are causing a problem along Main Street in Blytheville, Ark.
It’s been going on for a couple of months and more than one business owner has reported plants missing.
Volunteers who spend their time and money beautifying Blytheville are frustrated.
Pat O’Neal is a member of the Mississippi County Master Gardeners.
“We kind of think it may be someone that’s landscaping their yard,” O’Neal said.
She chuckles at her remark, but knows there’s nothing funny about someone stealing plants, specifically young rose bushes, from Main Street in Blytheville.
“I guess they don’t want the real mature ones. They must have too many stickers,” she said.
On almost every corner you’ll see large concrete planters owned by the City of Blytheville, but decorated by Master Gardeners and other local volunteers.
“From crape myrtles and roses to petunias and potato vines,” O’Neal said.
Master Gardeners provides plants, mulch and hard work.
O’Neal recently noticed rose bushes were missing from some of the planters. First one, then two, then more.
“Well, they’re $20 a piece when you buy them at one of the local vendors. And so that kind of adds up,” she said.
So far, it’s cost more than $100 to volunteers who just want to help Blytheville look beautiful.
And it turns out the thieves are “branching” out, targeting some businesses along Main Street.
Grassroots Hair Salon and Tanning just opened two months ago, and owners spent a lot of money to landscape their property.
But as many as thirteen plants were recently pulled up and taken.
McKenzie Ramsdell is a hair stylist who noticed the problem when she came to work one morning.
“There was just a lot of holes in the ground. I mean like where plants use to be. It was kind of strange to notice all the plants were gone,” she said.
Ramsdell says the owner of Grassroots is trying something unique to catch the plant thieves,
“She put a thing on Facebook saying that if anybody could tell her who did it, she’d give them free tanning for a year,” Ramsdell said.
Blytheville Police would like to hear from anyone who’s seen plants being removed from Main Street without authorization.
"Will the lights go out?"
This week there have been two separate stories about the risk of electricity capacity shortfalls:
However are these concerns justified?
The common phrase "will the lights go out" used in BIG's report title is somewhat misleading. As described in more detail in my briefing note at, a shortfall of capacity at time of peak demand would mean a limited number of people being disconnected involuntarily for a limited period of time - this is not desirable, but it is not as bad as "the lights going out" suggests.
High unit prices for extra capacity
BIG further refers to the generating resource being used to meet the last slice of very high demand being paid a very high unit price for electricity. Baseload plant which has higher capital costs and lower fuel costs can make its money back by selling energy year round. However any power system will also have generating units with lower capital cost and higher fuel costs which run much less frequently to meet that last slice of high demand. Without going into the details of exactly how much specific units have been paid by National Grid recently, it is absolutely inevitable that if total payments to such units are divided by the number of kWh produced then this per-unit cost will look very high.
However these units which will only run occasionally to meet very high demands are needed, and their capital costs must be paid somehow - it is important to recognise that the primary service they provide is not year-round energy supply, but providing capacity when needed at times of high demand. Thus it is inevitable that their total income per unit of energy produced will be very high, and this is not a bad thing per se. The correct way to look at these payments is to ask whether there is a more economic way of providing that last little bit of capacity, and expressing concern over the per-kWh payment without this context is missing the point.
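To put rough numbers on this (the figures below are illustrative assumptions, not actual National Grid payments): suppose a peaking unit costs £60 per kW per year to keep available and runs for only 50 hours a year at times of very high demand. Each kW of capacity then produces about 50 kWh a year, so recovering that £60 through energy sales alone works out at £60 / 50 kWh = £1.20 per kWh before fuel is even counted, an order of magnitude or more above typical wholesale prices. Viewed instead as a payment for capacity, the same £60 per kW per year is unremarkable.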
"Emergency measures" or standard power market design?
Actions taken by National Grid to ensure an appropriately low risk of capacity shortfalls are referred to in BIG's report (and quite commonly elsewhere) as "emergency measures". The BIG report, however, does not clearly address how a reliable electricity supply is to be ensured, other than by incentivising additional capacity, if the future prospect of profit through trading energy will not bring enough capacity forward.
It is certainly the case that variable wind generation makes it harder for the necessary volume of conventional generation to make a return on investment through trading energy alone, as wind reduces the opportunity for controllable generation to sell energy to a greater extent than it reduces the need for the presence of controllable generating capacity. However this issue cannot be blamed solely on the advent of new technologies such as wind - even in an all-fossil fuel system, generation which is needed but which is supposed to make its money by running occasionally at times of very high price is not an attractive investment, due to the lack of a solid bankable revenue stream. What are referred to as "emergency payments" are simply a way of providing that bankable revenue stream, and in one form or another have been a standard feature of power markets around the world for many years.
Differentiated access to energy?
The origins of the reports of differentiated access to reliable electricity supply between customers are unclear. However it may well be that this originated in a discussion of a future system where there are new electrical loads such as electric vehicles and heating on a mass scale across the country. At present, domestic electricity demand is not very flexible - much of peak demand is lighting, cooking and use of electronic leisure devices, none of which can easily be postponed. However the potential big new demands will arise from charging electric vehicles (where many people will come home in the evening and need their charge by the next morning, without caring exactly when it is delivered) and heating (heat can be stored much more easily than electricity).
Thus a glass-half-empty view would be that unless the times at which vehicles are charged and heat is produced are coordinated between customers, massive new investment in generating capacity and networks will be needed (or, alternatively, a great fall in the cost of electricity storage technologies would be required). A more optimistic view is that because these new demands are inherently flexible, they can naturally be coordinated across time, thus avoiding the need for these investments. As a consequence, a reliable electricity supply in this future world might be thought of as one in which people get what they need by the time they need it, whereas now a reliable supply is thought of as one where people get exactly what they ask for immediately.
Summary
To summarise, I am not saying that the current electricity market is perfect, and that there are no concerns about future security of supply - however I found these recent media and parliamentary reports somewhat alarmist, and I hope that this article has provided additional context to help understand what lies behind these reports. |
For many years, Uzi Ben-Ami, Ph.D., has been a practicing psychotherapist in Rockville, Maryland. Uzi Ben-Ami, Ph.D., has treated numerous individuals struggling with the effects of grief and loss.
When a loved one dies, the bereaved person’s mind often goes into a state of shock and detachment, which serves to separate him or her from the reality of the loss. To many people, this feels like numbness or living in a fog. Sometimes, they feel that they are moving through the world on autopilot, making final arrangements without fully experiencing what is going on.
Many people find this period of numbness extremely frustrating, as it causes them to struggle with daily tasks and with focusing on work or school requirements. It is important to remember, however, that shock functions as an adaptive mechanism that allows the reality of the situation to set in slowly. Many people find that the pain of loss descends gradually as the numbness dissipates, so that the full experience of grief does not arrive all at once but in manageable stages.
Experts recommend that a grieving person make his or her way through numbness and detachment slowly and with plenty of self-compassion. For some it may be necessary to take some time away from the world to rest. Most people, however, find it more soothing to actively participate in funeral arrangements and proceedings, or to take part in post-funeral bereavement rituals with a group, to the best of their ability, so that when the fog lifts they feel that others understood and supported them, that they did not forget their loved one, and that they had a chance to share their grief with caring family and friends.
Okay, just got back from vacation yesterday. Now it is week 7 of the college football season. The cream is starting to rise to the top. And some of that cream is somewhat unexpected, or at least not the "usual" cream...except for Alabama. The unexpected being mostly TCU in the Big 12.
And who do you like to turn the improbable into the probable this week? My not-so-probable from last week was Tulane playing in a game where 62 points were scored by one team...and winning. Also, what's up with Washington St, the used-to-be punching bag of the PAC-12, leading that conference? Uh, go Mike Leach.
I graduated from the Tulane University of Louisiana, which before it gained sponsorship from Paul Tulane was the University of Louisiana, and thus it remains so. I am roundly insulted that an upstart directional school would try to usurp the name in an attempt to sound as if they really belong. I'm insulted that ESPN could be duped into thinking that they are covering my Alma Mater.
I graduated from the Tulane University of Louisiana, which before it gained sponsorship from Paul Tulane was the University of Louisiana, and thus it remains so. I am roundly insulted that an upstart directional school would try to usurp the name in an attempt to sound as if they really belong. I'm insulted that ESPN could be duped into thinking that they are covering my Alma Mater.
3...2...1...
ULL is the largest University with the most fields of study in the Louisiana system, second largest in state behind LSU. Its endowment(180m) is twice that of La Tech(90m), five times that of SLU(35M), three times that of UNO(60m). More directly compared to ULM which it has twice the number of students(17,000 to 8500) and NINE TIMES THE ENDOWMENT (180m to 20m). It is 120 years old, which is about the same as Texas, Oklahoma, Oklahoma state, and other flagship schools in the region. By those measures it is clearly the top(flagship) school in the University of Louisiana System and should properly have the name of the University of Louisiana.
If Prof. agrees with what I said then yes, he is right. I am perfectly fine with agreeing with him on many things.
I graduated from the Tulane University of Louisiana, which before it gained sponsorship from Paul Tulane was the University of Louisiana, and thus it remains so. I am roundly insulted that an upstart directional school would try to usurp the name in an attempt to sound as if they really belong. I'm insulted that ESPN could be duped into thinking that they are covering my Alma Mater.
3...2...1...
ULL is the largest University with the most fields of study in the Louisiana system, second largest in state behind LSU. Its endowment(180m) is twice that of La Tech(90m), five times that of SLU(35M), three times that of UNO(60m). More directly compared to ULM which it has twice the number of students(17,000 to 8500) and NINE TIMES THE ENDOWMENT (180m to 20m). It is 120 years old, which is about the same as Texas, Oklahoma, Oklahoma state, and other flagship schools in the region. By those measures it is clearly the top(flagship) school in the University of Louisiana System and should properly have the name of the University of Louisiana.
If Prof. agrees with what I said then yes, he is right. I am perfectly fine with agreeing with him on many things.
I graduated from the Tulane University of Louisiana, which before it gained sponsorship from Paul Tulane was the University of Louisiana, and thus it remains so. I am roundly insulted that an upstart directional school would try to usurp the name in an attempt to sound as if they really belong. I'm insulted that ESPN could be duped into thinking that they are covering my Alma Mater.
3...2...1...
ULL is the largest University with the most fields of study in the Louisiana system, second largest in state behind LSU. Its endowment(180m) is twice that of La Tech(90m), five times that of SLU(35M), three times that of UNO(60m). More directly compared to ULM which it has twice the number of students(17,000 to 8500) and NINE TIMES THE ENDOWMENT (180m to 20m). It is 120 years old, which is about the same as Texas, Oklahoma, Oklahoma state, and other flagship schools in the region. By those measures it is clearly the top(flagship) school in the University of Louisiana System and should properly have the name of the University of Louisiana.
If Prof. agrees with what I said then yes, he is right. I am perfectly fine with agreeing with him on many things.
Except, you know, legally it can't, so there's that.
Sounds like they could call themselves The Second Largest University in Louisiana, tho...
If everyone called them the University of Louisiana it would benefit them as much as that sort of name has benefited the University of Ohio. Personally, I'm glad that the U of L thing has gained a little bit of traction. Why? Because those people (ULL fans) are every bit as miserable now as they were when they were laboring under the perfectly good handle of USL.
_________________You want experience running the entire show? You got it. You want someone young and energetic? You got it. You wanted someone with absolutely no ties to Cowen/Dickson? You got it. - TU23
TCU is off to a great start but this is usually when the hot, surprising team from the Big 12 drops one.
South Carolina at Tennessee | 12 p.m. | ESPN
This is one of Butch Jones' last chances to stop the momentum running him out of town.
Purdue at No. 7 Wisconsin | 3:30 p.m. | BTN
This is a sneaky great game. It should be on a bigger platform than the BTN.
No. 10 Auburn at LSU | 3:30 p.m. | CBS
The home field always matters in this one. If you want Ed Orgeron to get time enough to really do a job on LSU football, root for him.
No. 25 Navy at Memphis | 3:45 p.m. | ESPNU
I'd be stunned if this is not one of the two or three best games of the weekend.
Middle Tennessee at UAB | 6:30 p.m.
Bill Clark is a very underrated head coach. This should be a letdown spot for them but if it's not then Clark is a very, very underrated coach.
Texas A&M at Florida | 7 p.m. | ESPN2
As bad as Kevin Sumlin looked at the start of the year, he's doing much better. If A&M still wants him out he should be able to get another P5 gig easily.
UCLA at Arizona | 9 p.m. | PAC12
...like the loser of this one. I think RR wins this big. Mora Jr is on his way back to the booth, probably after next year, though.
Boise State at No. 19 San Diego State | 10:30 p.m. | CBSSN
SDState is running out of chances to lose. The AAC needs to get to that top G5 bowl slot most years. We need them to lose. But this ain't your older brother's Boise.
Oregon at No. 23 Stanford | 11 p.m.
The loser of this one will have 3 losses at the midpoint of the season.
_________________You want experience running the entire show? You got it. You want someone young and energetic? You got it. You wanted someone with absolutely no ties to Cowen/Dickson? You got it. - TU23
what is their actual name? The Ohio University of Ohio/Brake Shop/Massage Parlor (se habla espanol)?
_________________You want experience running the entire show? You got it. You want someone young and energetic? You got it. You wanted someone with absolutely no ties to Cowen/Dickson? You got it. - TU23
Just tuned in the ULL - TX State game. It is weird to hear the announcers say 'Louisiana' when talking about ULL. Looking at the endzones, there is no extra space except for the spelled-out LOUISIANA. Their helmets say Louisiana, the field, the announcers - it's almost comical. If you've got to scream it no one really listens. Be cool if Monroe would call themselves Louisiana as well. Louisiana College still exist in Alexandria?
Pineville, I believe
So when one drives from Pineville into Alexandria or vice-versa - is it obvious?
It is weird to hear the announcers say 'Louisiana' when talking about ULL. Looking at the endzones, there is no extra space except for the spelled-out LOUISIANA. Their helmets say Louisiana, the field, the announcers - it's almost comical. If you've got to scream it no one really listens.
You do know that they decided to be called Louisiana, and the rest of the country outside of Tulanians doesn't care and just goes, "okay, they're Louisiana". They're not trying too hard, it's just what they believe and no one else cares except a few Tulanians here.
THE COLONY, Texas (CBSDFW.COM) – A 19-year-old man is charged with capital murder after police in The Colony said he confessed to killing his older sister who was eight months pregnant.
Police identified the victim as 23-year-old Viridiana Arevalo.
Her brother, Eduardo Arevalo, remained in The Colony Jail Monday evening without bond and was still waiting to go before a judge.
Their oldest brother, Diego Arevalo, defended Eduardo by saying he doesn’t believe he would have killed their sister.
“I know my brother, he wouldn’t do something like this. He’s very kind, very positive kind of guy, very motivated. He helped my family out, he helped my brothers, he even helped my sister out,” he said.
But police told a very different story and said Eduardo confessed Sunday night to strangling his sister last Monday morning when they were alone in the house they lived in with their parents and two younger siblings.
Investigators say he told them he buried her body an hour north from The Colony, but then retrieved it early Sunday morning and dumped it in an alley, less than a mile from their family’s house.
Sgt. Aaron Woodard said, “The only reason he gave for killing her was that she was an embarrassment to her family and he stated it would be better off if she wasn’t here.”
Diego Arevalo said Viridiana had suffered from depression, but had been feeling better recently.
Sgt. Woodard said a suicide note was found in the home, but that her brother claimed responsibility for that too.
“He later confessed to having written the note and the information we have is that it was implied she wrote the note,” said Sgt. Woodard.
Diego Arevalo said the family is devastated. “Seeing my parents sad and emotional really breaks my heart.”
He said his sister had a lot of love in her heart and that she was looking forward to having a baby girl. “She was excited. She always wanted a sister. She was the only sister in the family. She wanted a little sister but it never happened.”
Police said Viridiana’s boyfriend reported her missing Tuesday evening, and is not accused of any wrongdoing.
Sgt. Woodard said Eduardo Arevalo may face an additional criminal charge. |
The UK's largest Science Fiction & Fantasy Forums
Lost season 2 finale... More Questions, not answers!?
Lost: For discussions of the TV series Lost - seasons 1 onwards.
29th May 2006, 04:03 AM | #1 | dreamwalker | Starship Manufacturer | Join Date: Aug 2005 | Posts: 332
Lost season 2 finale... More Questions, not answers!?
29th May 2006, 09:59 AM | #2 | weaveworld | ~Behold my sparklies!~ | Join Date: Sep 2005 | Posts: 587 | Blog Entries: 4
Re: Lost season 2 finale... More Questions, not answers!?
I'll be honest and help you at the same time.
Don't think about it, it will kill your brain. I have loads of un-answered questions as well (even though I have only seen the first three episodes).
I cheated and asked a friend in the states, 'what the heck happened to Desmond?'.
No-one knows. (as you pointed out)
The Black Swarm thingy, I think was a defence mechanism too.
I have one burning question: Why did Desmond go so willingly into the Hatch?
Don't worry there will be loads more questions real soon.
PS: There were 26 episodes in the first series x
29th May 2006, 04:01 PM | #3 | Milk | Registered User | Join Date: May 2006 | Posts: 108
Re: Lost season 2 finale... More Questions, not answers!?
When I saw the statue, this was the first thing that came to mind.
Here's my theory of the plot, which answers some of those questions all at once.
possible spoiler from the last episode of the season:
The four-toed statue suggests that the island they are on might be Atlantis or something similar, which might explain the mysterious magnetic source as well as the black cloud guardian; it's some advanced technology from the past, or an offshoot of humanity, or, ugh, aliens, or whatever Atlantis means in regards to this story. Maybe the Dharma Initiative people (who are the Others) discovered some technology from Atlantis on the island that could destroy the Earth. Their plan is to reseed the Earth after causing humankind in general to be destroyed and then rebuild it; to do so they want to test everyone on the island to see if they are good or evil, hence all the testing, and mind games, and deception and spying.
One of the products of the Dharma Initiative could also have been isolating the 'good' and 'evil' gene, or something similar, as a virus. I think there will be some technology the Dharma group has relating to good and evil--for instance, isolating the 'evil' gene. This is why they are the 'good' guys: their intentions seem altruistic, to save humanity--by destroying it. It's obvious morality and technology will combine somehow, literally.
1st June 2006, 06:16 PM | #4 | Trey Greyjoy | Moderates UNITE! | Join Date: Aug 2005 | Posts: 622
Re: Lost season 2 finale... More Questions, not answers!?
Thinking about all the possibilities makes my head hurt too!
Signs are beginning to point to some sort of "Land of the Lost" scenario or a variant such as "Atlantis", "Bermuda Triangle", "secret entrance through the north pole", etc. They seem to be somewhere tricky to get in and out of.
If I have to guess right now what the plot is, I would say some wealthy multinational corporation or collection of wealthy individuals knows of this phenomenon and is exploiting it in some way.
The "others", if they are truly the good guys, may be working to try and stop them.
I believe Desmond, Locke and Eko are still alive and that Penny will play an important role next season.
14th June 2006, 10:41 PM | #5 | Thadlerian | Rattus Norvegicus | Join Date: Jun 2005 | Posts: 861
Re: Lost season 2 finale... More Questions, not answers!?
I just think it was hilarious when it turned out Desmond's last name was Hume...
Now they've got Locke, Rousseau and Hume. Where's Montesquieu? That guy with the false beard, is he Marx?
I think this whole naming affair is a little silly. It's obviously done to make the show seem more deep and philosophical, but the characters hardly bear any resemblance to the philosophers after whom they're named....
21st June 2006, 12:52 AM | #7 | Dianora | beautiful disaster | Join Date: Jun 2006 | Posts: 63
Re: Lost season 2 finale... More Questions, not answers!?
you know, I really, REALLY tried not to get into this show, but my husband is addicted, and by default, since I am usually in the same room while he's watching it... I got hooked. No clue what they are going to do with next season...
21st June 2006, 06:09 PM | #8 | Green | Sick and Tired | Join Date: Jun 2005 | Posts: 808
Re: Lost season 2 finale... More Questions, not answers!?
It'll be more of the same. Some questions answered, some not. They'll keep making them as long as people keep watching them. So if they're going to get to seven or eight series, it wouldn't make much sense to give out all the answers straight away.
As for frantically wondering what's going to happen... just chill out and wait for the new series. There's no real point speculating or demanding answers, you'll only get disappointed.
22nd June 2006, 10:06 PM | #9 | NorseDude | Registered User | Join Date: Jun 2006 | Posts: 10
Re: Lost season 2 finale... More Questions, not answers!?
Don't worry. You'll be disappointed anyway. It's only a matter of time before they mess it all up, if you still believe they haven't already.
7th July 2006, 11:44 PM | #10 | Werthead | Lemming of Discord | Join Date: Jun 2006 | Posts: 1,123
Re: Lost season 2 finale... More Questions, not answers!?
The ongoing 'Lost Experience' game is filling in the backstory on the Hanso/DHARMA side of things, which I'll spoiler-summarise in a second. But generally:
The Monster/Black Cloud: a security mechanism to defend the island. From what is unclear. It can read minds. According to the producers, we saw it again in Season 2 after its obvious appearance mind-scanning Eko in The 23rd Psalm (Episode 10). The common theory is that it can also take on other forms, and appeared as Eko's brother to guide him and Locke to the Pearl Station. Why? Who knows? It may also have appeared as Hurley's imaginary friend, Kate's horse etc. The monster is code-named Cerberus and apparently escaped from the Flame Station during 'The Incident' (this info is revealed on the map on the rear of the blast door, seen in Episode 17, Lockdown).
The Island: almost definitely in the Pacific just off the direct-line route from Sydney to LA, just north-east of Indonesia at 4.815 S Lat, 162.342 Long. The EMP field completely blocks the island off from the outside world, rendering it invisible to radar, radio etc. There seems to be a 'corridor' permitting contact with the outside world, through which signals occasionally escape (this wouldn't stop the island being observable from satellites etc). The island is probably about 10-15 miles across.
Right, now to the info from the Lost Experience. Highlight to read:
Quote:
In the 1870s a Norwegian man named Magnus Hanso established a slaving business using three ships, including his flagship, the Black Rock. In 1881 the Black Rock disappeared without explanation whilst on a slaving run to Mozambique. The prevailing theory is that Hanso discovered the island and its unusual magnetic properties. He may have managed to escape the island and left the location with his family, but later returned. The Black Rock was carried ashore by a massive tidal wave (possibly caused by the eruption of Krakatoa in 1883?). Magnus Hanso died on the island and was buried by his crew near the wreck.
The Hanso family abandoned the slave business and became industrialists. During WWII they became arms magnates under the young Alvar Hanso, who supplied arms to resistance movements across Europe. After WWII Hanso became more interested in scientific development. He may have discovered his ancestor's 'hidden island' at this time.
In 1962, after the resolution of the Cuban Missile Crisis, an Italian mathematician named Enzo Valenzetti was called upon by the UN to develop a method for predicting the end of human civilisation. He succeeded after many years' work, but by this time the UN seems to have abandoned the idea as a bit daft. The Hanso Foundation picked up on it and started developing the equation themselves. Valenzetti disappeared (or was killed) in the late 1990s/early 2000s and the Hanso Foundation continued developing the equation using a group of 'autistic savants' (geniuses) at the Vik Institute in Iceland. The Vik Institute burned down very recently in the Lost Experience timeline and the savants were all killed, much to the Foundation's disquiet.
The Foundation also adopted a group of US scientists keen to find a place where they could develop scientific studies in peace in the early 1970s. They brought this group under the Hanso banner as the Department of Heuristics And Research on Material Applications (DHARMA) and Hanso placed them on the island his ancestor found, where they built a number of research facilities. An 'incident' took place in the early 1980s which resulted in the destruction of one of the hatches (The Flame) and possibly the 'escape' of their advanced security system, code-named 'Cerberus'. This incident also allowed the very-high electromagnetic field of the island to begin building to dangerous levels. Through an unknown method, the DHARMA Initiative was able to put a 'pressure valve' on this field which could be gradually released by entering a code into a computer every 108 minutes.
In 1987 the Hanso Foundation abruptly abandoned the island, shut down the DHARMA Initiative and left the scientists abandoned on the island whilst sending in personnel to continue pressing the button. The abandoned scientists became 'The Others' (this part is speculation, but it seems pretty logical).
In 2001 Desmond Hume crashed on the island (probably due to the machinations of his fiancée's father) and was recruited into pressing the button. Shortly afterwards, his fiancée Penelope Widmore started looking for him. Widmore Industries may be the company that built the Hatches and thus have known of the location of the island, so Penny may have found out about its unusual EMP properties. She established a two-man 'EMP listening post', probably near the Arctic, to watch for signs of EMP activity.
In 2003 Alvar Hanso disappeared without a trace, although the Foundation continues to claim he is alive and well 'behind the scenes'. On 22 September 2004, during a partial system failure of the EMP containment system, Oceanic Flight 815 was pulled out of the sky and crashed on the island. On 26 November 2004 the 'valve' was destroyed by the actions of John Locke. Desmond triggered a failsafe which released all of the leaking EMP at once, but probably destroyed the Swan Station on the island. This is the 'present' date in the TV show (the date of the Season 2 finale and presumably the Season 3 opener).
The Lost Experience reveals that an advanced ocean-going vessel has been built by Paik Industries (owned by Sun's father) for the Hanso Foundation and that Hanso's right-hand man, Dr. Mittelwerk, is planning to use it to travel to an "island located near the equator" (such as 4 degrees south of the equator?). A PI named Rachel Blake has exposed much of this information, but was unable to follow the ship after it left Italy. The general theory is that the ship has been modified to penetrate the island's EMP field. If the Experience is taking place between Seasons 2 and 3, it may be setting up the arrival of new Hanso personnel on the island in the forthcoming season.
It is unclear if the Lost Experience is taking place 'now' (i.e. in 2006, two years after the events of Lost Seasons 1 and 2) or at the same time as the show. The former is supported by many of the hidden documents and websites that are part of the game bearing 2006 dates, the latter by events in the Experience (Dr. Mittelwerk is agitated by an 'unforeseen event' that is possibly the EMP flare seen in the Season 2 finale, which galvanises his search of the Hanso archives for the location of the island).
8th July 2006, 10:32 PM | #11 | dreamwalker | Starship Manufacturer | Join Date: Aug 2005 | Posts: 332
Re: Lost season 2 finale... More Questions, not answers!?
Quote: Originally Posted by Werthead
I can now rest happily, and sleep restfully
I don't think I'll be trying the game any time soon, but it does go to show just how many loose ends the final answers will leave us with.
Welcome to T&S Vapors
Mission Statement:
To help the entire world quit smoking while providing an exceptional product and customer service experience.
Who Are We?
We are a leading vapor company dedicated to offering the latest in vapor technologies to satisfy your vapor needs. We are devoted to keeping up with the latest trends and researching diligently to find the best products on the market. We literally have a vaporizer for every kind of vaper alive.
Customer service is our number one motto and shines in all four locations. We hold the entire T&S Vapors™ team to an extremely high standard of customer service.
Four convenient locations in Albuquerque, New Mexico with friendly staff ready to help you with your journey to quit smoking. Click here for hours, contact details, and locations.
We carry a large variety of products including:
- Accessories
- Batteries
- Box Mods
- Electronic Cigarettes (E-cigs)
- DIY Flavors
- 100s of E-Liquids
- RDAs
- Repair parts
- Sets
- Tanks/Subtanks
- Vaporizers
Come and visit Albuquerque, New Mexico’s premier vapor shops!
Deals
10% Off E-liquid Every Fill-Up:
We support saving the planet. So, to help do that, every time you bring back your bottle for a refill we will give you 10% off the e-liquid purchase price. It's our way of saving you money and encouraging everyone to recycle.
Weekly Sale:
Every week (Tuesday-Sunday) we offer a special deal on select vapor products! Deals are posted to our Facebook page every week. Click here to see our Facebook page:
Rewards Program:
Every dollar you spend at T&S is one point on your account. At 100 points you will earn a 30ml bottle of e-liquid as a special thank you. Points never expire and they are redeemable at all four locations!
Why Shop With T&S Vapors®
- Exceptional Customer Service
- Competitive Pricing
- Rewards Program
- Weekly Sales
- Large Variety of The Latest Vapor Products
- Family Owned
- Very Knowledgeable Staff
Where To Find A T&S Location |
Using paper towels in a toilet at an Indian airport feels surreal to those who have grown up in another India. There is a wholly unfamiliar sense of crisp hygiene that accompanies its use and one can only imagine how many illnesses have been prevented by this one simple facility. The filthy communal towels of another time spring to mind, when we swapped germs with each other with cheerful impunity.
Growing up, a visit to a public facility of this kind was a battle between two primitive urges- the pressing need to relieve oneself against a fervent desire to protect one’s senses from olfactory assault. Toilets in these spaces were filthy, smelly, and did not concern themselves even remotely with the question of hygiene. The problem for women was infinitely worse. Public spaces of all descriptions- airports, railway stations, markets, stadia, public lavatories, offices- were unself-conscious repositories of the sights and smell of our collective lives.
This has begun to change, particularly in many public spaces in the larger towns. And clean and well-maintained toilets are only part of the story. New public spaces increasingly have a new aesthetic, with order and cleanliness being key pillars of their design. Although their origins are by no means organic, having been transplanted from global sources, with time, modernity becomes contagious as one public space influences another. The change seen is in proportion to the affluence levels of the users of these spaces- malls and airports lead the way, while the railway station shows modest signs of change. Platforms have started getting escalators but the toilets in trains continue to deposit their fertile produce on railway tracks across the country.
In many cities, the difference between the three cities- the old, the intermediate and the new is becoming quite pronounced. The old city created spaces where the dominant presence is that of people, for these are spaces designed for close human interaction. Barring a few cities, where the older part of town is being reconfigured as a consumer product by converting it into a touristy simulation of itself, in most cases the old town has simply stopped being the centre of business activity. Designed to be navigated on foot, it still bustles, but more out of habit than need.
The intermediate city was modern once, but over the years has acquired the run-down and highly lived-in look that most Indian spaces eventually tend towards. Unlike the closed nature of modern public spaces, the intermediate city is full of structures that are open- markets in colonies, public parks, schools and colleges of newer vintage, being some examples. Modernity here is signified by a basic level of organisation, and the presence of some rudimentary common facilities.
Today’s modern city creates spaces that begin by cutting themselves off from the environment around them, and constructing an artificial internal habitat that can be built from scratch. The outside world ceases to exist, and a climate-controlled bubble where time and space lose texture and shape is created. Commerce is at the heart of most of these spaces; the idea of development involves more opportunities to consume. In a branded environment, even familiar names feel like facsimile reproductions- extensions of choices made popular elsewhere.
The coming of these new kinds of spaces has meant that over the last few years, urban Indians have had to retrain their eyes. While the new aesthetic of the public space can legitimately be described as derivative, it succeeds in achieving its primary goal- replacing the messy and chaotic with surface visual order. Glass is everywhere, and straight clean lines are the new standard that is aspired to. If the palace was the ultimate visual benchmark earlier, today it is the mall.
As our eye changes, so do our homes. The desire to unclutter our living spaces, and to wear a veneer of slickness can be seen in the way the new apartments are imagined. The use of glass and marble, the growing popularity of modular kitchens, and the popularity of veneers of all kinds point to the overflow of influence from the public to the private. Shape is becoming imagined more fluidly, and the home is beginning to speak on behalf of its residents more deliberately.
It is also true that the overhauled public spaces are for most part not truly public. Only a certain section of society gets access to these, either on account of affordability or by virtue of being from the wrong social class. Genuinely open public spaces where people from all sections of society can mingle are rare. Modern public spaces achieve their modernity in part by becoming less public.
The new public spaces of today emit an absence of a message that is striking. The use of glass as a sign of modernity is an admission that modernity does not need content; its aim is to signify nothing more than its intention to be modern. It gathers no references or allusions in its fold nor does it speak of the place or the people it is meant for. By being a line of demarcation, rather than an instrument of communication, it serves as an inarticulate monument to itself. It is not the past, and the future that it represents is one where there is little to touch or feel. But the stickiness of a past that would not let go is being successfully shaken off.
With time, perhaps public spaces will evolve from merely erasing the past to creating more enriching experiences. Mumbai’s new airport is a great example of using a public space to create a stunning gallery of diverse artistic experiences. Here space is infused with meaning. It is eloquent and distinctive. Today, it would seem that the choice that needs to be made is between order and diversity, when it comes to designing new public spaces. Perhaps letting go of this binary might be a good starting point for creating more meaningful and enriching modern spaces.
The present invention relates to pressure relief valves and, more particularly, to pressure relief valves which provide an indication of the pressure within the high pressure cylinder to which the pressure relief valve is attached.
Conventional cylinders which house a fluid under pressure, whether the fluid be a liquid or gas, include a conventional valve for controlling the outflow of the fluid and an upstream located pressure relief valve. These cylinders, generally referred to as bottles, are usually filled at a depot to a predetermined pressure, which pressure equates with the quantity of fluid contained therein. During use of these bottles, pressure gauges are sometimes not employed and the quantity of the contents within the bottles is not always accurately known. Accordingly, the bottles are generally returned to the depot for refilling. Prior to refilling of the bottles, they are generally evacuated (pursuant to federal regulations); thus, a user who returns for refilling of partly filled bottles will lose the benefit of the unused contents. This "lost cost factor" can be substantial over a period of time. Unnecessarily, the users of the bottles often waste time and effort in returning nearly filled bottles; moreover, sometimes the users misjudge the quantity of contents remaining and run out of fluid at inopportune moments.
It is of course possible to attach conventional gauges to the bottle and thereby obtain an accurate indication of the quantity of fluid remaining. However, the attachment of such gauges is time consuming. Another method of determining the contents of each bottle is that of weighing the bottle. However, such weighing requires accurate scales and detachment of the bottle from any equipment it might be attached to.
It is therefore a primary object of the present invention to provide a means for obtaining an indication of the pressure of a fluid within a high pressure cylinder.
Another object of the present invention is to provide an indication of the pressure of a fluid within a high pressure cylinder without tapping the cylinder.
Yet another object of the present invention is to provide a means for electrically determining the pressure of a fluid within a high pressure cylinder.
Still another object of the present invention is to provide a means for determining the pressure of a fluid within a high pressure cylinder with readily detachably attachable equipment.
A further object of the present invention is to provide a pressure relief valve having the capability of providing a signal reflective of the contents of a high pressure cylinder to which the pressure relief valve is attached.
A yet further object of the present invention is to provide a detectable impedance value within a pressure relief valve, which impedance value is reflective of the pressure acting upon the valve.
A still further object of the present invention is to provide a means for applying a conventional rupturable curved disc in a pressure relief valve as one plate of a capacitor, the impedance of which capacitor varies as the disc configuration varies due to pressure changes.
A still further object of the present invention is to provide an inexpensive pressure relief valve which has the capability of providing an indication of the pressure acting upon the valve.
These and other objects of the present invention will become apparent to those skilled in the art as the description thereof proceeds.
The present invention may be described with greater specificity and clarity with reference to the following drawings, in which:
FIG. 1 illustrates a modified pressure relief valve mounted in the stem of a valve connected to a high pressure cylinder;
FIG. 2 is a cross-sectional view taken along lines 2--2, as shown in FIG. 1;
FIG. 3 is a cross-sectional view taken along lines 3--3, as shown in FIG. 2; and
FIG. 4 is a cross-sectional view taken along lines 4--4, as shown in FIG. 2.
Referring to FIG. 1, there is shown a conventional high pressure cylinder or bottle 10 which might contain a fluid, such as oxygen, argon or other atmospheric gases. A conventional valve assembly 12, including an outlet pipe 14, is generally permanently attached to the bottle. It is to be understood that many configurations serving the function of valve assembly 12 are in commercial use. For most fluids, federal regulations require that a relief valve be attached to bottle 10 to prevent explosion in the event the pressure of the fluid within the bottle exceeds the pressure retaining capacity of the bottle. Therefore, most permanently attached valve assemblies also include a pressure relief valve, as indicated in FIG. 1 by the numeral 16.
In order to determine the degree of fill of bottle 10, a pressure gauge is generally used and from a pressure reading, the degree of fill can be calculated. The attachment of a pressure gauge, such as to outlet pipe 14, is somewhat time consuming and necessitates a loss of fluid upon removal of the pressure gauge. For some fluids, such loss is inconsequential but where toxic or poisonous fluids are released, severe health hazards may be present. Additionally, some financial detriment results from the loss of fluid. But aside from these losses, the necessary time for an operator to attach a pressure gauge, obtain a reading therefrom and then detach the pressure gauge represents a substantial labor expense which should be avoided if possible.
As pressure relief valve 16 is necessarily always in fluid communication with the interior of bottle 10, the conventional rupturable element contained therein is responsive to the ambient pressure, generally by flexing. Should the pressure within bottle 10 increase beyond a specified upper limit, the flexing capability of the element will have been exceeded and it will rupture. Upon rupture, the fluid will flow through the rupture and be dissipated through relief ports 18 disposed in the pressure relief valve.
As the rupturable element flexes in response to pressure variations, such flexing, if the element constitutes one plate of a capacitor, produces a change in capacitance, or impedance, of the capacitor. By maintaining the second plate of the capacitor in fixed position, the variation in impedance of the capacitor due to the flexing element can be sensed by impedance responsive circuitry.
Referring still to FIG. 1, there is shown an electrical conductor 20 electrically attached to the housing of pressure relief valve 16, which housing is electrically attached to the flexing element. An electrical conductor 22 is electrically attached to lead 24 extending from the fixed plate of the capacitor. A sensing circuit 26 is responsive to a variation in an electrical signal across electrical conductors 20 and 22 resulting from a change in impedance of the capacitor. The response sensed may be displayed upon a meter 28 to reflect either the pressure or, equivalently, the quantity of fluid within bottle 10.
Referring jointly to FIGS. 2, 3, and 4, the constructional details of pressure relief valve 16 will be described. A collar 30 threadably engages a hollow stem 32 extending from outlet pipe 34 of valve assembly 12. Collar 30 by means of annular shoulder 36 and a malleable annular seat 38 sealingly secures a rupturable flexible curved disc 40 across outlet 42 of stem 32. Thereby, leakage through stem 32 will not occur unless disc 40 ruptures. In the event disc 40 ruptures, the fluid flow through the disc will be dissipated through relief ports 18 extending through the shank of collar 30.
Generally, disc 40 is of beryllium copper which has electrical properties suitable for employing the disc as one plate of a capacitor. An electrically insulating centrally apertured plug 44 is in threaded engagement with the interior surface of shank 46, which shank forms a part of collar 30. Central aperture 48 within plug 44 supports pedestal 50, one end of which includes an annularly expanded plate 52. Surface 54 of plate 52 in proximity to disc 40 may be curved to conform in general with the curvature of the disc; alternatively, this surface may be planar. Necessarily, pedestal 50 and plate 52 must be of electrically conductive material to render the plate capable of performing the function of a plate of a capacitor.
The degree of capacitance of the capacitor formed by disc 40 and plate 52 is variable by threading plug 44 into or out of shank 46 until a value commensurate with sensing circuit 26 is obtained. Once the degree of capacitance is achieved, the positional relationship of plate 52 to disc 40 is set by lock nut 56 threadably engaging plug 44 and the end of shank 46.
Electrical conductor 22 is electrically attached to pedestal 50 by solder 58 or by other conventional means. Electrical conductor 20 may be electrically attached to disc 40 by means of a bolt or machine screw 60 maintaining a tab 62 in electrical contact with collar 30.
As discussed above, as the pressure of the fluid within bottle 10 varies, disc 40 will have greater or lesser curvature in proportion to the pressure of the fluid. Such a difference in curvature will result in a section of the disc being in greater or lesser proximity to surface 54 of plate 52. This difference in spacing therebetween varies the impedance of the capacitor. Any impedance variation is sensed by sensing element 26 and meter 28 provides a visual indication of the change. The visual indication may be calibrated in either units of pressure or units of quantity of fluid.
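To make the sensing relationship concrete, here is a minimal sketch (not from the patent) that treats disc 40 and plate 52 as a parallel-plate capacitor whose gap narrows as rising bottle pressure flexes the disc toward the fixed plate. The linear gap-versus-pressure mapping and all dimensions are illustrative assumptions, not values taken from the disclosure.

```python
# Illustrative sketch (not from the patent): model disc 40 and plate 52 as a
# parallel-plate capacitor and map bottle pressure to capacitance.
# The linear gap-vs-pressure relation and all dimensions are assumptions.

import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m


def capacitance(gap_m: float, plate_radius_m: float) -> float:
    """Parallel-plate approximation C = eps0 * A / d."""
    area = math.pi * plate_radius_m ** 2
    return EPS0 * area / gap_m


def gap_for_pressure(pressure_pa: float,
                     gap_at_zero_m: float = 1.0e-3,
                     full_scale_pa: float = 15.0e6,
                     gap_at_full_m: float = 0.2e-3) -> float:
    """Assume the disc flexes linearly toward the plate as pressure rises."""
    fraction = min(max(pressure_pa / full_scale_pa, 0.0), 1.0)
    return gap_at_zero_m - fraction * (gap_at_zero_m - gap_at_full_m)


if __name__ == "__main__":
    radius = 5.0e-3  # assumed 5 mm plate radius
    for mpa in (0, 5, 10, 15):
        d = gap_for_pressure(mpa * 1e6)
        c = capacitance(d, radius)
        print(f"{mpa:2d} MPa -> gap {d * 1e3:.2f} mm, capacitance {c * 1e12:.2f} pF")
```

A sensing circuit calibrated against such a curve could present the reading directly in units of pressure or quantity of fluid, as the description above suggests.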
The safety aspects of pressure relief valve 16 are not jeopardized by the above modifications thereto. As will be noted by inspection of FIG. 2, a cavity exists external to disc 40, which cavity is in communication with relief ports 18. Thus, the fluid, on rupture of the disc, will flow into the cavity from whence it will dissipate through the relief ports. Thus, all federal and state regulations pertinent to relief valves have been accommodated.
AV Software Pvt. Ltd. (AV Soft), a Mumbai-based distributor of world-leading business intelligence software solutions for businesses, end-customers, government and educational organizations, today announced availability of System Mechanic® 16, the new version of iolo technologies’ iconic product, in India, Nepal, Sri Lanka and Bangladesh.
With a revolutionary new product framework designed to fully capitalize on the major advances in the Windows® OS beginning with Windows 7, System Mechanic 16 is the most powerful PC accelerator available today.
PC Magazine has awarded System Mechanic their Editors’ Choice award seven years and counting. System Mechanic 16 has recently earned the Top Ten Review Gold Award for best PC system utility as it performed better than all the other products reviewed in almost every category.
“We are excited to be partnering with AV Soft on the launch of System Mechanic 16 in India. PC Magazine has consistently awarded System Mechanic their Editor’s Choice award, making it the most effective and powerful PC accelerator available in the market today”, says Malika Bounaira, Director of International Sales at iolo technologies. “By relying on AV Soft’s distribution network, Indian users will be able to take advantage of System Mechanic’s advanced technology features and experience their best PC performance ever.”
Commenting on the launch of System Mechanic 16 in the Indian market, Rajiv Warrier, Managing Director at AV Soft, says: “Experts know well that System Mechanic, which fully supports Windows 10, uses proprietary technology to inject even more horsepower back into the system for faster startup speeds, web surfing, gaming, and other high-performance computing activities”.
So what helps the new System Mechanic 16 scan and repair a PC many times faster than any prior version?
Much Faster System Scans.
Redesigned Toolset.
More Frequent Updates: The new iolo Smart Updater delivers nearly instantaneous updates to ensure you receive the most recent version automatically and seamlessly. Individual components and features can now be updated without a rerelease of the entire System Mechanic product. This allows for more frequent product enhancements based upon user feedback.
Next-gen Tune-up Definitions: The next generation of Tune-up Definitions is now capable of continuous, real-time updates that find many effective new optimizations for today’s modern apps. The result is the elimination of many more types of slowdown, especially in Windows 10. Startup Optimizer features the next-gen Tune-up Definitions that can help to significantly speed up boot time by discovering whole new categories of unneeded startup items while providing more user control over dangerous and unnecessary ones.
Accommodates 4k Displays. |
I’m not really a Sia fan to begin with, nor is the Top 40 my favorite place to seek out new music. With Sia’s latest release This is Acting, I got what I expected, but was also pleasantly surprised by a few of the songs. There’s a good bit of variety on this record, and variety in an album isn’t a bad thing. It keeps the listener from getting bored.
Listening to the record in its entirety, the mixture of styles eventually becomes jarring, and starts to hinder instead of enhance the record. Sia seems to be trying on a few different musical styles in tracks like “Sweet Design” or “Reaper,” but she seems afraid to commit to a new sound or to change from her usual brand of somber chamber pop. But I was surprised at how much I enjoyed those two songs. Both are a departure from Sia’s normal musical style. “Reaper” has all the makings of a new wave gospel or soul song. “Sweet Design” is still very much a pop song, but it’s just done so well with a solid drum track and airy synthesizer. Both songs are genuinely happy and have an optimistic outlook in their narrative, unlike most of the songs on This is Acting. It was quite enjoyable to hear this little tangent Sia took on the record.
Songwriting-wise, This is Acting lands somewhere in the middle between Tove Lo and Kesha. In songs like “Cheap Thrills” or “Unstoppable,” Sia seems to be striving for the simplistic party-like lyrics of an early Kesha song. The production and engineering on the album remind me a lot of Tove Lo’s sound. None of the songs are necessarily bad, but the sweet spot Sia hits with surreal, contemplative lyrics over somber pop chords, as on “Chandelier” and “Elastic Heart,” isn’t there. You hear the beginnings of that on tracks like “Alive.” Unfortunately, This is Acting doesn’t really come close to Sia’s past releases.
In general, some songs on This is Acting are generic, while others are interesting and show a side of Sia I would like to see more of. If you’re looking for another great record from Sia, this isn’t the one to add to your iTunes library. What This is Acting really shows is an artist in transition; taken on those terms, it’s a good, solid record.
Most plugins only require an easy install. However, if you have bought a plugin, or perhaps have a special plugin, you will need to install it via your cPanel. This is the “back office” as organised by your web host, like Bluehost or HostGator, and where in all probability you installed WordPress.
Since WordPress is so popular, most hosts offer scripts which install WordPress. If your host uses cPanel, select the Fantastico icon to install the WordPress script.
I like forum support with my themes. Most free themes don’t come with support. You’re on your own. I like the fact that I can jump into a forum and get answers to my questions.
Themes are templates (or layouts) which you can apply to your WordPress website to alter the look and feel of it. You won’t run short of themes in the WordPress world. If you don’t like the themes that come with the default installation of WordPress, you can absolutely download different ones from my web site. You will find hundreds of free themes on this web site that you can download and use for free. There are themes which can make your site look like a news site, a personal blog, or even a photo gallery.
Now you have the vital tools to have your WordPress site optimized for Google and other major search engines, as well as having a Google sitemap! You will need to add these plug-ins into your WordPress installation. It could seem complicated or technical, but the process isn’t! As long as you follow exactly what I say, you will be perfectly fine.
The best way to increase profits from a site is to have the web consumer become interested in your site while searching for more info. This is when the search on your own site boosts your sales and profits. The best way to enhance this is with a first-glance recognition technique. This approach will have your customers wanting and looking for a whole lot more. By providing instant recognition that the customer’s brain registers as “yes, this is the right place to be and look,” you have won half the battle.
You can trim the ties with your website designer! Don’t get me wrong here. There is certainly a place for website designers, as they serve a worthwhile purpose. They definitely help some organizations do better things with design, search engine optimisation and other aspects of a great church website. Conversely, there are few things more frustrating than not being able to get a change made on a timely basis because your website designer/firm wasn’t available.
The purpose of reading this book is to give students ideas on what they may like to do this summer. Here are some ways to use this book in your classroom.
Competency Areas:
Linguistic: Sharing ideas – what do I want to do this summer?
Core Vocabulary Focus: I, You, Yes, No, Where? What? Me, Who? Go, Yum! More, Like, Not like.
Fringe Vocabulary Focus: (words featured in the book)
FIRST Read: Always begin with an enthused, basic reading of the book. Announce the title, author, share the book cover with everyone.
Subsequent Reads: Talk about what students and staff would like to do this summer! Everyone’s ideas are great whether realistic or not!
- Pause and allow for comments using core pages.
- Survey the students to learn who likes what activity shared in this story.
- Complete the “book report.”
Send home:
- Copy of the completed book report (included in this publication)
- Homework for summer (included in this publication)
Families and caregivers will be asked to list five things that the student did over the summer. Sending this home at the end of the school year may help families to gather photos and remnants in anticipation of sending these stories to school with their child next fall. I’ve included a sample letter home with a form for families to use. Please feel free to use it in the way that is most appropriate for your classroom. This is an appropriate activity to do across all ages- please change any wording that may not sound age appropriate! For older students, please substitute the ideas with age-level material that works well for your community! Happy Summer everyone! In the fall, your students will be asked, “What did you do this summer?” In short, the homework is designed to prepare students to answer and engage with that very question using appropriate AAC tools and strategies!
Upon the student’s return to class:
Here is a lesson plan to follow during the first few days of school. Ideas are shared in this form which involve gathering the students and talking about how their summer went. Hopefully, students brought in their summer homework, which includes remnants, photos and a 4-8 sentence script to say (or use their AAC systems to share the information). If you have a new student in class, you can call the parent to ask what the child may have done this summer. It is a good idea to be sensitive to families who either do a lot or very little over the summer. Some students may have had surgery or health issues, but it is still okay to talk about that as it is an important story too. Even “sleeping on the rug with my dog” is okay and a great thing to do during the summer. It is very important to be inclusive and open to all possible stories that may be told, if it is their choice.
The suggestions in this lesson plan are geared to jumpstart your school year with lots of meaningful language—the language of meaningful experiences and literacy!
Please leave a comment and share any other ideas you may have!
:::::::::::::::::::::::::::::::::::::::::::::::::::
You can download Karen’s lesson plan, book report template, & parent letter here and/or explore other posts in her PrAACtically Reading series here.
Filed under: Featured Posts, PrAACtical Thinking
Tagged With: Karen Natoci, lesson plan, reading
This post was written by Carole Zangari |
1. Field of the Invention
The present invention relates to solid imaging devices and methods of manufacturing the same, and particularly relates to structures and methods of forming substrate voltage generating circuits used for controlling blooming voltages of solid imaging devices.
2. Background Art
FIG. 9 illustrates an example of conventional CCD-type solid imaging devices. In this example, as shown in FIG. 9, the photoelectric conversion portion 70 comprises a P-type well 72 in an N-type substrate 71, and an N-type region 73 is formed on the P-type well 72. The charge transfer portion 74 comprises a charge transfer electrode 75 covered by a light shielding film. Electrons are transferred to the N-type region 78 of the charge transfer portion 74 after being stored in the N-type region 73. If the amount of the charge stored in the N-type region 73 of the photo conversion region 70 exceeds a transferable charge quantity, the charge overflows from the region during the transfer operation. Thus, the CCD-type solid imaging device is designed such that the charge exceeding the necessary amount is swept to the substrate so as not to exceed the transferable amount.
As shown in charge distribution diagrams of FIGS. 10A and 10B, the amount of charge stored in the photoelectric conversion region is determined by the potential barrier .PHI.PW of the P-type well region which constitutes the vertical overflow drain structure (VOD). That is, when the generated charge exceeds the amount of the charge which can be stored in the N-type region, the charge exceeding the storable amount is swept to the N-type semiconductor substrate by going beyond the potential barrier .PHI.PW of the VOD. The amount of the charge which can be stored in the photoelectric conversion region, in other words, the height of the potential barrier .PHI.PW, can be controlled by the substrate voltage V.sub.sub, which is the voltage applied to the substrate which constitutes the drain (this substrate voltage is called the blooming control voltage).
The potential distribution curve fluctuates as shown by the solid line or the broken line of FIG. 10A, caused by fluctuations of the impurity concentration or depth at the wafer surface due to fluctuations of the impurity profile at the time of ion implantation in the manufacturing process, and the height of the potential barrier .PHI.PW also fluctuates for each device, taking different values such as .PHI.PW.sub.1 or .PHI.PW.sub.2. As a result, since the device characteristic fluctuates due to the fluctuation of the amount of the charge which can be stored in the photoelectric conversion region, the amount of charge of the conventional device has been controlled so as to be constant by changing the blooming control voltage V.sub.sub, that is, by setting for every device a different substrate voltage applied from the circuit side of the camera system.
However, when in use, in the case of applying a different voltage for each device from the circuit side of the camera system, the circuit structure for generating different substrate voltages at the camera side becomes complicated, and the manufacturing line also becomes complicated because it becomes necessary to prepare different components for each device. Accordingly, customers have come to require providing in the solid imaging device a substrate voltage control circuit which controls the substrate voltage so as to conform to the individual imaging device. A substrate voltage control circuit is in practical use which, for example, produces a desired voltage by a resistance dividing method, obtaining an optional contact among a plurality of resistors which are connected between the source potential and the earth potential. An example of such a substrate voltage control circuit is proposed which is constructed by use of a MOSFET, fuses, and source followers (Hirouki Yamauchi et al., "A Ultra Small Sized 1 mm 50,000 Pixels IT-Image Sensor", Image Information Media Academy, Mar. 27, 1998). Another example using a non-volatile memory transistor is reported in patent applications filed by the present inventor (Japanese Patent Applications, First Publications No. Hei 10-84112 and No. Hei 10-15737).
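To make the resistance-dividing idea concrete, here is a minimal sketch (not from the patent) of selecting the tap on a resistor string between the source potential and ground that best approximates a desired substrate (blooming control) voltage for a particular device. The supply voltage, ladder values and target voltage are illustrative assumptions.

```python
# Illustrative sketch (not from the patent): pick the tap of a resistor string
# between the source potential and ground that best approximates a desired
# substrate (blooming control) voltage. All values below are assumptions.

def tap_voltages(v_source: float, resistors: list[float]) -> list[float]:
    """Voltage at each junction of a series resistor string, listed top to bottom."""
    total = sum(resistors)
    taps = []
    below = total
    for r in resistors[:-1]:
        below -= r
        taps.append(v_source * below / total)
    return taps


def best_tap(v_source: float, resistors: list[float], v_target: float):
    """Return the index and voltage of the tap closest to the target voltage."""
    taps = tap_voltages(v_source, resistors)
    idx = min(range(len(taps)), key=lambda i: abs(taps[i] - v_target))
    return idx, taps[idx]


if __name__ == "__main__":
    v_source = 15.0            # assumed supply, volts
    ladder = [1e3] * 10        # ten equal 1 kΩ resistors (illustrative)
    target = 8.3               # desired substrate voltage for this particular device
    idx, v = best_tap(v_source, ladder, target)
    print(f"select tap {idx}: {v:.2f} V (target {target} V)")
```

In a fuse- or memory-transistor-based circuit, the chosen tap is fixed once per device during trimming, which is what gives rise to the dust and reliability concerns discussed next.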
In the CCD-type solid imaging device, malfunction is sometimes caused by contamination with dust. In general semiconductor devices such as memory devices, the wiring layer is covered by the passivation layer so that no inconvenience results even if the passivation layer is contaminated by dust. However, in the CCD-type solid imaging device, if the photoelectric conversion region is contaminated by dust, the dust particle blocks the light incident to the photoelectric conversion region and the pixel covered by the dust particle will form a black defect. A problem arises in that the black defect, or so-called black flaw, degrades the quality of the display. In other words, more attention must be paid to preventing CCD-type solid imaging devices from being contaminated by dust than other semiconductor devices such as memory devices.
However, when a substrate voltage generating device uses fuses, it is necessary to cut a fuse for controlling the substrate voltage. For cutting the fuse, application of a high voltage or irradiation by laser light is carried out, which generates dust by scattering metal particles constituting the fuse. Therefore, in order to prevent the generation of black flaws, it is not desirable to install a fuse in the CCD-type solid imaging device.
When the substrate voltage control circuit uses a non-volatile memory transistor, a problem arises that a threshold voltage of the transistor changes when light (especially, ultraviolet light) is incident to the transistor, and reliability of the substrate voltage generating circuit is degraded because its characteristic fluctuation becomes great.
In such a circumstance, it has been desired to provide a CCD-type solid imaging device which does not suffer degradation of display quality and which is provided with a highly reliable substrate voltage control circuit. It has also been desired, from the points of view of high integration of the solid imaging device and of low-cost production, to provide a substrate voltage control circuit having the smallest possible occupied area and capable of being produced by a comparatively simple manufacturing process.
Ani Choying Drolma: Breaking stereotypes
She is widely known for her humanitarian efforts, which include education for child nuns, elderly care, and the provision of medical and health care services for the underprivileged and homeless in Nepal.
I was lucky to have met her at a programme in Delhi recently. Her calm and composed manner gives no inkling of a woman who has gone through a traumatic childhood in the lap of poverty and violence. Coming from a poor family, whose roots were in the Kham region of Tibet, Ani grew up in the 1970s in Boudha, Kathmandu which is like “a little bit of Tibet in exile.” Her parents had fled to India separately with their respective families, in the mid-fifties after the Chinese invasion of Tibet. They got married there and then eventually moved to Kathmandu.
In her autobiography Singing for Freedom she refers to the fear of being forced to marry a man “who could become her father.” This led to her joining a monastery- Nagi Gompa at the age of 13 as a means of escape, eventually proving to be very rewarding for her. The overall guidance of her teacher there, Tulku Ugyen Rinpoche steered her dedication to learn and practice singing. Ani was also fortunate to meet Steve Tibbetts– an American guitarist at the Monastery, who helped get her voice recorded in 1997 and come out with an album Cho. She sang on stage for the first time in 1998 in Minnesota, United States during a one month concert tour. Requests for more concerts followed in Europe and other parts of the US. Over a period of time her fame spread to South East Asia, UK and Australia as well. She first rose to stardom globally through Tibetan chants. It was only from 2004 onwards that she became popular in Nepal.
Ani’s various works of philanthropy are commendable. Arya Tara School, one of her first initiatives, was established in 2000 by The Nun’s Welfare Foundation (NWF), a non-profit organization also set up by her. It is home to approximately 60 young nuns in the age group of 8-23, belonging to some of the poorest and most remote areas of Nepal. The nuns not only learn Buddhist philosophy but Nepali, Tibetan and English languages, besides computers, science, maths, social sciences, environment, population and health studies, arts, craft and music as well. It aims to equip nuns to serve their communities in a professional and humanitarian capacity.
She helped establish a Kidney Hospital- Arogya Foundation in Kathmandu in 2010, after she lost her mother to kidney failure, this mission being very close to her heart. It is the first hospital in the country to have an ultramodern Human Leukocyte Antigen (HLA) Laboratory that enables greater efficiency and validity of test results, at much lower costs than those in the labs of neighboring countries. It has haemodialysis machines and Tacrolimus test facilities too for the treatment of patients. Ani’s dream of establishing a hospital for patients of organ failure, largely resulting from kidney diseases, has materialized. The hospital provides services and infrastructure (which were absent in Nepal) to the less privileged at affordable prices.
Besides these initiatives Ani supports many charities. The Shree Tara Band- the first all women instrumental band in Nepal, Red Tara Travels and Tours- a travel agency run by nuns to guide tourists for various religious and cultural tours and sites, and an old age home for elderly women to name a few. She also collaborated with UNICEF Nepal for a Public Service Announcement to fight violence against children in 2013 and was appointed by the organization as the first national ambassador for the country this year. She will become the voice of children and adolescents so as to protect them from violence and help them grow up in a safe environment in order to become responsible citizens.
Called a rebel with a cause, criticized for not conforming to Buddhist traditions, Ani believed in following her heart from a very young age and thinking out of the box. She has an adopted son, drives a jeep, dines at restaurants, loves to listen to western music with Norah Jones, the late Whitney Houston, Bonnie Raitt and Celine Dion being some of her favourites, cries over Bollywood films, enjoys doing the salsa and is fond of shoes. Criticism does not deter her larger mission in life of always serving others. She carries on her work with much conviction and positive energy.
Tags: buddhist chants, children, education, education for nuns, health care, inspirational songs, music, philanthropy, singing nun, song, unicef, violence against children, women
[UPDATE] Following the initial reports, Kotaku now has heard from a "number of sources" that Final Fantasy XV is delayed. Additionally, Gamnesia has posted a picture of marketing materials that show the rumored November 29 release date. Kotaku says it has also seen other marketing materials apparently referencing a delay. The GameStop signage will reportedly go up on Monday, August 15, so the delay announcement should come very soon, it seems.
Square Enix has not responded to a request for comment.
The original story is below.
Multiple reports are claiming Square Enix's long-in-development RPG Final Fantasy XV has been delayed to November 29. A source at GameStop told Gamnesia that the game is now coming on November 29.
"Promotional materials with the new date have arrived at some GameStop stores with instructions that they are not to be put up until after Sunday, August 14th, so an official announcement could be coming then," the report said.
Additionally, Gematsu, which accurately reported Final Fantasy XV's September 30 release date before it was announced, has heard from a source that Gamnesia's report is accurate. The site's source is apparently the same one who tipped them off to the September 30 release date.
GameSpot has contacted Square Enix in an attempt to get more details.
The GameStop website still lists the Final Fantasy XV release date as September 30. Additionally, the Gamnesia report didn't include images of the reported marketing materials. For now, take this with a grain of salt. If the report is accurate, we should know one way or the other soon.
Final Fantasy XV will be available on PlayStation 4 and Xbox One, while Square Enix is "thinking about a PC version." If the RPG does come to PC, fans should not expect this version to launch alongside the console editions in September, if it happens at all.
This post has been updated. |
Romay-Barja, M; Cano, J; Ncogo, P; Nseng, G; Santana-Morales, MA; Valladares, B; Riloha, M; Benito, A; (2016) Determinants of delay in malaria care-seeking behaviour for children 15 years and under in Bata district, Equatorial Guinea. Malaria journal, 15 (1). p. 187. ISSN 1475-2875 DOI:
Abstract
Malaria remains a major cause of morbidity and mortality in children under 5 years of age in Equatorial Guinea. Early appropriate treatment can reduce progression of the illness to severe stages, thus reducing mortality, morbidity and onward transmission. The factors that contribute to malaria treatment delay have not been studied previously in Equatorial Guinea. The objective of this study was to assess the determinants of delay in seeking malaria treatment for children in the Bata district, in mainland Equatorial Guinea. A cross-sectional study was conducted in Bata district in 2013, which involved 428 houses in 18 rural villages and 26 urban neighbourhoods. Household caregivers were identified in each house and asked about their knowledge of malaria and about the management of the last reported malaria episode in a child 15 years or younger under their care. Bivariate and multivariate statistical analyses were conducted to determine the relevance of socio-economic, geographical and behavioural factors on delays in care-seeking behaviour. Nearly half of the children sought treatment at least 24 h after the onset of the symptoms. The median delay in seeking care was 2.8 days. Children from households with the highest socio-economic status were less likely to be delayed in seeking care than those from households with the lowest socio-economic status (OR 0.37, 95 % CI 0.19-0.72). Children who first received treatment at home, mainly paracetamol, were more than twice as likely to be delayed in seeking care as children who did not first receive treatment at home (OR 2.36, 95 % CI 1.45-3.83). Children living more than 3 km from the nearest health facility were almost twice as likely to be delayed in seeking care as those living closer to a facility, although the association was not significant once adjusted for other variables (OR 1.75, 95 % CI 0.88-3.47). To decrease malaria morbidity and mortality in Bata district, efforts should be directed at reducing household delays in seeking care. It is necessary to provide free access to effective malaria diagnosis and treatment, to reinforce malaria management at the community level through community health workers and drug sellers, and to increase awareness of the severity of malaria and the importance of early diagnosis and appropriate treatment.
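As an illustration of how adjusted odds ratios of this kind are typically obtained (the variable names, file and model specification below are assumptions for the example, not the study's actual analysis), a logistic regression on the household survey data yields odds ratios and 95% confidence intervals by exponentiating the fitted coefficients:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical extract of the household survey; column names are assumptions.
df = pd.read_csv("bata_survey.csv")

# Outcome: 1 if care was sought 24 h or more after symptom onset, else 0.
model = smf.logit(
    "delayed ~ C(ses_quintile) + home_treatment + distance_over_3km",
    data=df,
).fit()

# Adjusted odds ratios and 95% CIs come from exponentiating the coefficients.
summary = pd.concat([np.exp(model.params), np.exp(model.conf_int())], axis=1)
summary.columns = ["OR", "2.5%", "97.5%"]
print(summary)
|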
1. Field of the Invention
The present invention relates to a liquid crystal display (LCD) device and more particularly to an LCD device and a method for driving the same.
2. Discussion of the Related Art
Among various ultra-thin flat type display devices, which include devices having a display screen thickness of several centimeters or less, liquid crystal display (LCD) devices are widely used for notebook computers, monitors, and spacecraft and aircraft displays because of their advantages such as low operating voltage, low power consumption, and portability.
A typical LCD device includes a lower substrate, an upper substrate, and a liquid crystal layer formed between the substrates.
Gate lines and data lines substantially perpendicular to the gate lines are formed on the lower substrate. The data lines and gate lines cross each other to define pixel regions. A thin film transistor (TFT) is formed at crossings of the gate lines and data lines.
Light shield layers are formed on the upper substrate to prevent leakage of light from regions corresponding to the gate lines, data lines, and TFTs. Color filter layers are also formed on the upper substrate between the adjacent light-shielding layers to transmit light of particular wavelengths.
The color filter layers add significantly to the manufacturing costs for a liquid crystal display device.
In order to solve this problem, an LCD device driven using a field sequential driving system has been developed.
FIG. 1 is a perspective view schematically illustrating a LCD device of the related art using a field sequential driving system.
As shown in FIG. 1, the LCD device of the related art includes a lower substrate 1, an upper substrate 2, and a liquid crystal layer (not shown) formed between the substrates 1 and 2.
Gate lines 10 and data lines 20 are formed on the lower substrate 1. The gate lines 10 and data lines 20 cross each other to define pixel regions 30. A TFT 41 functioning as a switching device is formed at each crossing of the gate lines 10 and data lines 20. A pixel electrode 35 is formed at each pixel region 30 and the pixel electrode 35 is connected to the TFT 41. A backlight unit 50 is arranged at a lower surface of the lower substrate 1, to irradiate light onto the lower substrate 1.
The backlight unit 50 includes a red light source 51, a green light source 52, and a blue light source 53.
A light shield layer 70 is formed on the upper substrate 2, in order to prevent leakage of light from regions where the gate lines 10, data lines 20, and TFTs 41 are arranged. A common electrode 80 is formed on the upper substrate 2 including the light shield layer 70.
In an LCD device using a field sequential driving method, no color filter is used in order to achieve an enhancement in the transmittance of light. To this end, the LCD device temporally reproduces color. That is, in the LCD device, various colors are displayed in a color reproduction period that is less than the temporal visual resolution to display a desired color.
By avoiding the forming of color filter layers in the LCD device, it is possible to save the costs of color filters and to achieve an improvement in color characteristics and image reproduction characteristics.
FIG. 2 is a timing diagram for explaining driving of the field sequential driving type LCD device of the related art shown in FIG. 1.
As shown in FIG. 2, in the field sequential driving type LCD device, one frame is time-divided into three sub-frames. A red (R) light source may be operated during the first sub-frame. During the second sub-frame a green (G) light source may be operated. During the third sub-frame a blue (B) light source may be operated.
In the field sequential driving type LCD device, the temporal period during which color is reproduced has a value less than the temporal visual resolution because one frame is sub-divided into three sub-frames. Accordingly, full color display may be achieved without using color filters.
In the first sub-frame, red (R) data is charged to a first pixel for a data charging time corresponding to a scan pulse from the gate line 10. After the response time of liquid crystal elapses the R light source is turned on.
In the second sub-frame the R light source is turned off and green (G) data is charged in a second pixel for a data charging time corresponding to a scan pulse from the gate line 10. After the response time of liquid crystal elapses the G light source is turned on.
In the third sub-frame the G light source is turned off and blue (B) data is charged in a third pixel for a data charging time corresponding to a scan pulse from the gate line 10. After the response time of liquid crystal elapses the B light source is turned on.
When the R light source is turned on, R light is emitted, so that an image according to the R light is displayed on a liquid crystal panel. Similarly, when the G or B light source is turned on, an image according to G or B light is displayed.
By sequentially turning on all the R, G, and B light sources during each frame, it is possible to display a desired color.
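As a rough illustration (not part of the patent text), the sub-frame sequence described above can be sketched as follows; the frame rate, liquid-crystal response time, gate line count and the panel/backlight hooks are all assumptions made for the example:

import time

SUBFRAME_S = (1 / 60) / 3     # three sub-frames per 60 Hz frame (assumed)
LC_RESPONSE_S = 0.002         # assumed liquid-crystal response time
NUM_GATE_LINES = 480          # assumed number of gate lines

def drive_frame(charge_row, set_light):
    """Drive one frame: charge data, wait for the LC, then light R, G, B in turn."""
    for color in ("R", "G", "B"):
        set_light(None)                    # turn off the previous light source
        for row in range(NUM_GATE_LINES):  # scan pulse applied to each gate line
            charge_row(row, color)         # charge this row with the color's data
        time.sleep(LC_RESPONSE_S)          # wait for the liquid crystal to respond
        set_light(color)                   # turn on the R, G or B light source
        time.sleep(SUBFRAME_S - LC_RESPONSE_S)  # rest of the sub-frame with the light on
                                                # (scan duration ignored for simplicity)

# Dry run with no-op hooks standing in for the panel driver and backlight unit.
drive_frame(lambda row, color: None, lambda color: None)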
In the above-described sequential driving LCD device, however, each gate line is to be driven for a predetermined time within one frame period. Accordingly, as the number of gate lines is increased (for example to produce an LCD device of increased size) the time available for driving each gate line is shortened.
When the driving time for each gate line is shortened, the turn-on time of the TFTs connected to each gate line is shortened. As a result, for large sized LCD devices, there may be insufficient time to completely charge a data voltage into the pixels.
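A back-of-the-envelope calculation makes the point concrete; the figures below are assumed for illustration and are not taken from the patent:

FRAME_S = 1 / 60                      # assumed 60 Hz frame rate
SUBFRAME_S = FRAME_S / 3              # about 5.56 ms per R, G or B sub-frame
LC_RESPONSE_S = 0.002                 # assumed liquid-crystal response time
LIGHT_ON_S = 0.002                    # assumed light-source on time
scan_budget = SUBFRAME_S - LC_RESPONSE_S - LIGHT_ON_S   # time left for scanning

for gate_lines in (480, 768, 1080):
    per_line_us = scan_budget / gate_lines * 1e6
    print(f"{gate_lines} gate lines -> {per_line_us:.1f} us of charging time per line")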
Although this problem may be at least partially addressed by increasing the size of the TFTs, there is a limitation in increasing the TFT size due to an associated design rule and problems associated with maintaining an aperture ratio. |
63 B.R. 638 (1986)
In re George L. AULT, Edna Virginia Ault, Debtors.
Bankruptcy No. 86-80478.
United States Bankruptcy Court, C.D. Illinois.
August 11, 1986.
*639 Charles E. Covey, Peoria, Ill., for debtors, George L. & Edna Virginia Ault.
Barry M. Barash, Galesburg, Ill., for Farmers and Merchants State Bank of Bushnell.
Douglas R. Lindstrom, Galesburg, Ill., for Federal Land Bank of St. Louis.
Richard Whitman, Stansell, Critser & Whitman, Monmouth, Ill., for Creditors Committee.
OPINION AND ORDER
WILLIAM V. ALTENBERGER, Bankruptcy Judge.
The Debtors are farmers. They still have the exclusive right to propose a plan of reorganization. On February 21, 1986, they filed a Chapter 11 proceeding in bankruptcy, and subsequently, they filed an application to grant a security interest to facilitate the planting and harvesting of the 1986 crop. Both the Federal Land Bank of St. Louis (LAND BANK) and the Farmers and Merchants State Bank of Bushnell (BUSHNELL) are secured creditors and opposed the application and filed their own motions for relief from the automatic stay. After a hearing on the application and the motions, this Court authorized a portion of the relief requested in the application and gave the LAND BANK and BUSHNELL adequate protection.
The LAND BANK and BUSHNELL then filed a joint motion to convert to liquidation under Chapter 11. The LAND BANK and BUSHNELL contend the Debtors admitted their 1985 crop year was unprofitable and the testimony at the first hearing indicated the 1986 crop year was also going to be unprofitable, and therefore it is not feasible to expect a reorganization and the case should be converted to liquidation under Chapter 11. The Debtors counter by arguing most recent figures indicate the 1986 crop year can be profitable and they should be given an opportunity to propose a plan. There was a separate hearing on this motion, at which time the Debtors presented additional testimony concerning their projected income, expenses and profits for the 1986 crop year.
The first issue presented to this Court is whether a farmer's Chapter 11 proceeding can be converted to a liquidation under Chapter 11 while the farmer still has the exclusive right to propose a plan of reorganization. If this issue is decided affirmatively, then a second issue is presented as to whether the debtor will be able to effectuate a feasible plan of reorganization.
The LAND BANK and BUSHNELL rely on the cases of In the Matter of Button Hook Cattle Co., Inc., 747 F.2d 483 (8th Cir.1984); In the Matter of Cassidy Land and Cattle Co., Inc., 747 F.2d 487 (8th Cir.1984); In the Matter of Jasik, 727 F.2d 1379 (5th Cir.1984). The Debtors distinguish the cited cases on the grounds that in those cases the debtors' exclusive period to file a plan of reorganization had expired, and creditor-proposed plans had been accepted and confirmed over the debtors' objections, while in the case before this Court the Debtors still have the exclusive right to propose a plan of reorganization, and until that exclusive right terminates and the creditors propose a liquidating plan of reorganization which is accepted by the creditors, this Court has no authority to proceed on such a basis.
*640 I concur with the Debtors' interpretation of the cited cases and their position. A farmer's Chapter 11 proceeding cannot be converted to a liquidating plan until such a plan is proposed and the creditors cannot propose such a plan until the Debtors' exclusive period for filing a plan has expired. The court in both Button Hook and Cassidy relied on Jasik. The court in Jasik is clearly concerned with a farm debtor in a voluntary Chapter 11 proceeding being able to suspend creditors' rights indefinitely, a concern which Congress considered in drafting Section 1121 of the Bankruptcy Code. The Jasik court goes on to point out that the Jasiks filed their petition in December of 1982. Their trustee was appointed in July of 1983, and in October of 1983, after the Jasiks had failed to develop any plan of reorganization and their exclusive period had run, their trustee filed a plan to liquidate under Chapter 11. At this point in time similar concerns are not present in the case before this Court. The Debtors filed their case on February 21, 1986, and with one extension, they still have the exclusive right to file a plan of reorganization. During this six month period the Debtors not only had to proceed with their bankruptcy proceeding, but had to plant their 1986 crop as well. Therefore, it does not appear to this Court that the Debtors are using the voluntary Chapter 11 proceeding to postpone creditors' rights.
It should also be noted that both the LAND BANK and BUSHNELL have been given adequate protection for the 1986 crop year and at this point in time the unsecured Creditors Committee opposes any conversion to a liquidation under Chapter 11.
The LAND BANK and BUSHNELL also cite In re Holthoff, 58 B.R. 216 (Bkrtcy.E. D.Ark.1985). In that case the debtors were attempting to confirm a plan of reorganization over the objections of creditors, and the issue of feasibility was considered at the confirmation hearing. The court stated:
"Finally, the plan must meet the feasibility requirements of 11 U.S.C. Section 1129(a)(11). The evidence falls far short of that goal. The debtors have accumulated a huge debt far beyond their ability to pay even assuming fortuitous weather and a substantial increase in commodity prices in the future. Fifteen months have elapsed since the petition was filed and the debtors have been unable to formulate a feasible and confirmable plan."
The facts in the case before this Court are distinguishable from those in Holthoff. This proceeding is in its initial stages as compared to the later stages as found in Holthoff. In Holthoff the debtors were attempting to confirm a plan of reorganization over the objections of creditors, while in the case before this Court, the unsecured Creditors Committee has yet to come to a conclusion as to whether a plan is feasible. At the first hearing before this Court the testimony indicated that after paying adequate protection to the LAND BANK and BUSHNELL there would be a loss of approximately $29,000.00 for the 1986 crop year. At the second hearing the testimony indicated there would be a profit of approximately $60,000.00 for the 1986 crop year. The LAND BANK and BUSHNELL are correct in their criticism of the Debtors' evidence at the second hearing, in that it included, as income, 1987 government support payments of $19,705.00 and failed to include as an expense the repayment of a loan made for the purpose of paying 1986 cash rent in the amount of $19,424.00. When these adjustments are made, the projected profit is approximately $21,000.00. There was also testimony that the Debtors' son had agreed to help their profitability by transferring income from his farming operation to theirs. The testimony at both hearings concerning the profitability of the 1986 crop year involved projections. As time passes and the crop is harvested and expenses paid, the figures will move from projected to actual, and the Debtors' ability to propose a feasible plan will become clearer. At the same point in time the Debtors will have had an opportunity to propose their plan of reorganization and their creditors will have had an opportunity to decide if they considered it to be feasible. If they fail to propose their plan *641 of reorganization, the creditors can proceed as in Jasik. The issue of feasibility can be considered at that time.
For the reasons stated above, the LAND BANK and BUSHNELL's joint Motion to Convert to Liquidation under Chapter 11 is DENIED.
|
AARGH!!! Fame! I need Fame! Fame instead of DI, probably.
5 x Greger Anderssen
2 x Zebulon Victoria Watenda Dollface Normal Brazil
4 x Madness Network
6 x Rotshreck
3 x Tribute to the Master
5 x Protean Fame The Rack
12 x Bum's Rush
5 x Progeny
2 x Graverobbing
4 x Swallowed by the Night
2 x Rapid Change
8 x Second Tradition: Domain
4 x Precognition
2 x Eagles' Sight
2 x Homunculus Ivory Bow Inverary Castle Scotland Palatial Estate Malkavian Justicar
13 x Claws of the Dead
3 x Flesh of marble
6 x Form of the Ghost
GtU is worth a thought but i'm leaving it out because there is a LOT of Deflection in our environment at the moment, and none of my minions has DOM. When it works this deck should turn into a weenie swarm and then you don't need the bleed modifiers. Actually, re-reading jasper's post i realise i need to explain this deck a bit better. The ideal is to use a Madness Network action to rush a minion out of turn. Manoeuvre to close if necessary [Form of the Ghost rather than Claws because you can't rely on having superior protean, and also Claws costs more], strike Claws of the Dead and then Rotshreck if necessary [if you encounter S:CE, are fighting someone else combatty [not a problem if you Flesh of Marble having intercepted with Precognition] or anticipate damage prevention]. Then use another Madness Network action to diablerise or graverob or even, if they are famous, rescue the downed minion. With Homunculus, this has a chance of overcoming the inherent slowness of combat decks. Of course the deck can do other things - swarm bleed is plan B, and Princely intercept is another opportunity to torporise people. Judgement is necessary in the use of Rotshreck because you can only play one each time round the table. This version of the deck is going to need playtesting but my sense is that it could well achieve voting superiority by killing everyone else who votes and/or doing deals. In that case there might be a place for 1st Tradition: The Masquerade [because uniquely, you won't NEED your turn to hurt people] and in any event there is probably a case for 4th Tradition: The Accounting so's to get out more little vamps and/or gain pool. It's just a question of slots. Elysium, Obedience and Legacy of Power are the dangers for this deck. You can minimise these risks by voting out Elysium and being careful to jump on younger vampires and Princes first - or find a slot for DI. You want to be careful blocking vamps with FOR in case they are packing Kiss of Ra. Change of Target, Ghoul escort, Concoction of Vitality, Walk Through Arcadia and DayOp can be tiresome but all these have their own downside and they're not exactly strategies of choice for most decks these nights! There is another version of this deck with obf/cel and weapons. That works [though as James Coupe points out, it's a difficult deck to play], but i think this one will be even better. |
Blu-ray Hits The Streets: World's First Portable Available This Month
No matter how you dice it, June is going to be huge for technology announcements and launches. One exciting announcement this week was the availability of the Panasonic DMP-B15, the first portable stand-alone Blu-ray player to hit the market. It will be available later this month for $799.99.
Panasonic's portable Blu-ray player features an 8.9-inch WSVGA screen and PHL Reference Chroma Processor Plus technology, which enhances color sharpness. The Blu-ray player also includes Panasonic's VIERA Cast, which provides access to a variety of web content including Amazon Videos-on-Demand, YouTube, Google Picasa Web Album(TM), Bloomberg and a weather channel. When your Blu-ray collection gets old, you can browse for something else to watch. The Blu-ray player also features BD-Live for downloading enhanced movie features. Internet connection is made via LAN port.
The rechargeable battery lasts for 2 1/2 hours. When you're done roaming and decide to plant yourself down on the couch, the B15 connects to your HDTV via HDMI cable and doubles as a component Blu-ray player. It can also transmit HD Audio to a home theater receiver for crisp surround sound. The player includes a Micro SD memory slot.
PR Newswire via DVICE
Chris Weiss
Innovations in Sports, Fitness and Technology
InventorSpot.com |
Seductive Barry
N.B. Please do not read the lyrics whilst listening to the recordings
Here in the night love takes control.
Making me high, making me whole. and we slide into right.
I don't know.
Oh no no no.
No I don't know.
No no no.
No I don't know.
I open my eyes and you're there,
Even better in the flesh it would seem.
I'm so ready and willing and able it's untrue,
To act out this love scene and make my dreams come true.
And how many others have touched themselves
Whilst looking at pictures of you?
How many others could handle if it for us to do but get it on.
Lets make this the greatest love scene
From a play no-one's thought up yet.
I know you're feeling the same as me
But what you gonna do about it?
now here's an exclusive:
I've wanted you for years.
I only needed the balls to admit it.
When the unbelievable object meets the unstoppable force
There's nothing you can do about it.
No. I will light your cigarette with a star that has fallen from the sky.
Breathe in, breathe out, I love the way you move.
Don't let anyone tell you any different tonight.
You are beauty, you are class,
You showed it all but you still kept a little piece back just for me.
A little piece back just for me. Just for me.
Oh I don't know how you do it
But I love the way you do it,
When you're doing it to me.
And if this is a dream
Then I'm going to sleep for the rest of my life.
For the rest of my life.
For the rest of my life.
For the rest of my life.
Lyrics by Jarvis Cocker, music by Pulp.
From the album
This is Hardcore
This site is owned by
Vaquous@aol.com |
Emotion is a strong feeling that a person experiences due to the impact of situations, moods and relationships with others.
Emotions are closely related to our physical and mental health. Emotional disturbance, or feelings of being emotionally disturbed, can have a negative impact on our health.
Chronic stress can have an adverse effect on hormone balance and can deplete certain chemicals in the brain that are required for feeling happy. Chronic stress and badly managed emotions and feelings can also shorten our lifespan.
Some common negative impacts on health
- Headaches
- Insomnia or sleep disturbances
- Lightheadedness
- Frequent tiredness
- High Blood Pressure
- Change in eating habits, sudden excess intake or no intake
- Palpitations (feeling that the heart is beating too fast)
- General weakness
- Lack of concentration and accuracy
- Shortness of breath
- Stiffness in neck and body
- Stomach upsets
Poor management of emotions, especially anger, can result in heart-related problems, high blood pressure, hypertension, weak immunity, infections and slow healing.
How can we help?
An inability to forgive, dissatisfaction and being too rigid at times can make us susceptible to chronic stress and emotional disturbance.
So simply by practicing an attitude of forgiveness, having gratitude and developing a flexible thought process, people can help themselves reduce the impact of negative emotions.
Medical help?
Yes medical intervention under appropriate guidance of doctors and psychiatrists can be beneficial.
Someone who has suffered or is suffering from chronic stress may undergo medical tests of the heart and general health as prescribed, and may also require monitoring of blood pressure, blood sugar levels (for diabetics or hypoglycemics), etc.
Proper intake of prescription medicines is also required.
Psychological counseling, attention to general health and taking the help of specialist doctors for heart, gastric and other problems are also advisable.
Alternate helps?
Mind training, good diet, yoga, meditation and diverting mind into things we like to do that are relaxing and fun like sports, pursuing a hobby or working for a good cause can also help.
Anger, love and other common feelings are part of our life and come with their own share of happiness and sorrow, arising from the fulfillment of desires and expectations or from their rejection and non-fulfillment. This results in emotional states of mind that affect our mental and physical health. It is better to be aware and try to get things under control as early as possible to avoid any major damage to health in the future.
Health Awareness Post, wish to see everyone happy and healthy. Share the information and help others. |
Carl Morris has been a beneficiary of the positive power of pool, and is now an agent of it. Born deaf, Morris became eight-ball pool world champion in 1998: a testament clearly to his own endeavour but also to the meritocratic openness of one of Britain's favourite games.
Now secretary of the International Professional Pool Players' Association (IPA), Morris is hoping to harness that inclusiveness and bring pool to the people. The Professional 8 Ball Pool Masters, one of the biggest events of the IPA calendar, will be shown on Sky Sports next month.
And Morris knows the importance of attention. "When they come to televise events, that's the pinnacle of everything," he told The Independent in a Shepherd's Bush pool hall. "It's the best place you'll ever be, because when you play at the tournament, the thrill and adrenaline that you get, it's indescribable. I can remember the feeling from when I won the world championship in 1998. To be called the world No 1, there was no better feeling in the world."
An impressive sporting achievement in itself, it is an even more stirring human one given Morris' disability, although he insists that not being able to hear allows him to cut out noises that might distract his rivals: "It's all about focus and concentration, and the less distractions you have the better. So I think being deaf has helped me improve because naturally I don't get those distractions when I play. I can just go on that table and the whole world just switches off."
The fact that pool gave Morris his chance shows the widely-cast net of the game. "When I got to 15 I had to pick pool or football. And while football is very dependent on a scout spotting you and signing you up, with pool you can go out there and make your own name. That's what I love about it. Anybody can enter tournaments and make their own name, rather than being dependent on someone else spotting you."
That social strength, Morris said, gives pool a potential which has been wasted. It is said to be the third most popular sport in the country, with an estimated five million regular players. "I really believe the reason pool never took off before is because we did not have the right people in charge," Morris said. "It comes down to the right leaders, the right people with the right vision, trying to look after top-level pool and grass-roots players."
The rise of televised darts demonstrates that, with the right presentation, a pub game can become something more glamorous. Why not pool? "We want to build the brand up and make it exciting, not just for pool players, but for people outside of pool."
That lack of engagement is the problem. "In the past, I think the image was totally wrong," he said. "We want to make it more exciting for people. So when people flick the television on, they're actually reading about the pool player as a person. In the past the characters of pool players have been very stifled, like watching robots."
Bold ambitions, but Morris is experienced at this work. Since his world championship he has done remarkable amounts for the National Deaf Children's Society (NDCS), Hearing Dogs for Deaf People and Against Breast Cancer.
"I've always tried to give something back," Morris explained, "I've always done pool exhibitions for charities. Last year I did [exhibitions in] 43 counties in 43 days, and that got a hell of a lot of coverage. We set out to raise £43,000, and we raised £60,000 in total. So far we've raised about £215,000."
Knowing the broad appeal of pool, Morris has created the National Charity Tournament to raise money for Against Breast Cancer. "This National Charity Tournament has never been done before," Morris said. "Instead of paying to enter, people do a good deed and raise money for charity. And then they will get free entry into this big tournament which will have about £30,000 prize money." The tournament has been put back to May 2013 due to high levels of interest.
Never satisfied, Morris has gone ever further in his work for Hearing Dogs for Deaf People: "I've done a lot of challenges. For example, I've cycled from the north to the south of Greenland, I've cycled around Cambodia, across the desert in Namibia, and I've raised probably half a million for all those."
Morris also became the first deaf person to walk to the North Pole. He spent a year training for that – dragging a 60kg tyre for a mile in less than 15 minutes. It took him more than one attempt, but he did it, in the mud and rain, more than once leaving him physically sick. It was worth it though, his walk to the Pole raising about £50,000.
Compared with all that, making pool more popular sounds relatively trivial. But Morris knows the value of his story, and hopes that his successes, in sport and charity, can inspire other disabled children to all manner of achievements. "I never really blow my own trumpet," he said, "but I will do that if it helps to make a small disabled kid believe that they can do what I've done.
"There are so many disabled people – not just deaf people – who have so many setbacks in life that they don't have any confidence left. And they start withdrawing into themselves, and become a hermit in some cases. I think that's a really sad thing. I want to go out there and say 'I can't hear a damn thing but look what I've gone and done. And if I've done it, so can you'."
|
Best Dehumidifiers
When there's too much humidity in a home, the air feels heavy; people and plants wilt; and mold, mildew, and dust mites have a party. Just think what this means in a basement, a space perpetually plagued by dampness. An inexpensive dehumidifier is a quick and ongoing fix that helps get the ambient air close to the ideal relative humidity of 50 percent. Cheapism.com consulted expert sources and hundreds of consumer reviews to find the best portable dehumidifiers for large and small spaces priced below $250.
Our Top Pick
GE ADEL50LW Review
Pros:
- Lowers humidity efficiently and effectively, users report, and shines in damp basements.
- Simple, sleek aesthetic.
- Large 16-pint bucket; full-bucket indicator; automatic shutoff.
- Continuous drain option (hose not included).
- Electronic controls with 3 fan speeds (low, medium, and high); automatic defrost to keep the coils from freezing; automatic restart after a power outage.
- Washable filter; clean-filter indicator.
- Hidden handles and wheels.
- Relatively quiet, at 51 dBA.
- Energy Star certification.
Cons:
- Scattered reports of limited longevity and failure to draw moisture from the air.
- Some users question the durability of the plastic components, especially the bucket.
- Only 2- and 4-hour delay options (competing models offer 24-hour timers).
- 1-year warranty is relatively short, although not uncommon.
Takeaway: Scores of consumer reviews heap praise on the GE ADEL50LW dehumidifier for its quietly impressive performance. Many test its mettle in basements, where it quickly brings, and keeps, the humidity level to a comfort zone that rids the air of musty odors. This 50-pint dehumidifier boasts all the standard features common on models in this price range, including a hose connection and continuous drain option, and one-ups many with its large bucket and three fan speeds.
Frigidaire FFAD7033R1 Review
Pros:
- Effectively holds humidity at desired levels in damp environments, including basements and campers.
- Chosen by Wirecutter as "the best dehumidifier for most people."
- Electronic controls with 3 fan speeds and a 24-hour timer; auto defrost; auto restart.
- Auto shutoff; full-tank alert.
- Continuous drain option (hose not included).
- Washable filter; cleaning alert.
- Energy Star certification.
- Caster wheels; top and side handles.
- Relatively quiet, at 51 dBA.
- 5-year warranty on the sealed system; 1-year full warranty.
Cons:
- Some complaints about very limited longevity and rusty discharge.
- Relatively small 13.1-pint bucket.
Takeaway: Popular with both experts and consumers, the Frigidaire FFAD7033R1 dehumidifier wins fans for ease of operation and overall performance. Reviewers report that this 70-pint model manages just fine when tasked with dehumidifying after storm-related flooding, in steamy locales, and in run-of-the-mill settings. The clean, simple aesthetic is an added benefit. While some wish it came with a built-in pump for the price, most users say they're happy enough hooking up a drainage hose or attaching their own water pump.
Keystone KSTAD50B Review
Pros:
- Effectively eliminates excessive moisture and stuffiness in large rooms and small houses (up to 3,000 square feet), according to users.
- Electronic controls with LED display and a 24-hour timer; automatic defrost to keep the coils from freezing; automatic restart after a power outage.
- Full-bucket alert; automatic shutoff; transparent water-level indicator.
- Continuous drainage option if you connect a hose (not included).
- Clean-filter alert; washable filter.
- Energy Star certification.
- Rolling casters for easy portability.
- 5-year warranty on the sealed system beats the norm (1 year for parts and labor).
Cons:
- Some reports of limited longevity, component malfunctions, leaks, and difficulty reaching the manufacturer.
- Comparatively small 13-pint collection bucket.
- Mid-level noise at 55 dBA; some users say it's a bit loud, but it falls within an acceptable range.
- Minimum operating temperature of 45 degrees (most others operate to 41 degrees).
Takeaway: For the most part, consumers consider the Keystone KSTAD50B a user-friendly and value-priced dehumidifier that does what they expect it to; that is, lower the relative humidity in part or all of their homes so that mildew and damp smells disappear. They also say it's easy to empty; the small bucket is actually more manageable than larger buckets that get heavy when full. Experts commend the accuracy of the humidity reader (hygrometer) and say this lesser-known brand is a worthy competitor to more expensive Frigidaire models.
Frigidaire FAD504DWD Review
Pros:
- Meets expectations for pulling moisture from the air, even in basements, reviews say.
- Electronic controls with digital display and a 24-hour timer; auto restart.
- Large 16.3-pint bucket with a splash guard and handle; full-tank indicator and auto shutoff.
- Continuous drain option (hose not included).
- Caster wheels; top and side handles.
- Washable antibacterial filter with indicator light.
- Energy Star certification.
- 5-year warranty on the sealed system; 1-year full warranty.
Cons:
- No auto-defrost function, which is common among competitors.
- Scattered reports concerning assorted malfunctions and general product failure about a year after purchase.
- Mid-level noise at 53.3 dBA; owners say it's not the best dehumidifier for bedrooms or TV rooms.
Takeaway: It's hard to go wrong with this 50-pint Frigidaire dehumidifier, if tens of thousands of reviewers are to be believed. Buyers talk up the Frigidaire FAD504DWD for its overall performance and ease of use. Experts also like this moderately priced dehumidifier, which scores particularly well for ridding the air of excessive moisture. Try to avoid using this model in colder settings, however, as it lacks an auto-defrost function to keep the coils from freezing.
Whynter RPD-702WP Review
Pros:
- Noticeably reduces ambient moisture in basements, as well as above-grade spaces, reviewers say.
- Top ratings for water removal and energy efficiency in testing by consumer product experts.
- Internal condensate pump capable of up to 15 feet of vertical lift; 16.5-foot hose included.
- Large 18-pint bucket with transparent water-level indicator; full-bucket alert and auto shutoff.
- Comes with a 3-foot hose for gravity drainage.
- Adjustable humidistat has a wide relative humidity range of 30 to 90 percent.
- Electronic controls with digital display and a 24-hour timer; auto restart.
- Auto-defrost function; operational in temperatures as cold as 40 degrees.
- Energy Star certification.
- Washable filter.
- Rolling casters and side handles for portability.
Cons:
- Scattered complaints about noise, inaccurate display readings, error codes that render the unit inoperable, and pump failure.
- No reminder to clean the air filter (recommended every 2 weeks).
- 1-year warranty.
Takeaway: The Whynter RPD-702WP generally performs admirably, according to satisfied consumers, although some consider it too loud to run in common living areas or bedrooms and others have experienced problems with the circuitry. What users appreciate most are the convenience of the built-in pump, which can move the water up and out of a window or into a sink, and the included hoses for pump and gravity drainage -- unusual at this price point. Given its size and features, many contend that this Whynter dehumidifier is a very good deal, all caveats considered. |
Q:
Multiple IF statements in python
I am trying to print the content in a specific cell. i know the cells i want to check before extracting the content to the output. i am using multiple IF statements for this :
if lineCount == 5:
    if line[0]:
        print line[0], 'A5'
        OPfound = 1
        break
    if line[1]:
        print line[1], 'B5'
        OPfound = 1
        break
if lineCount == 4:
    if line[0]:
        print line[0], 'A4'
        OPfound = 1
        break
    if line[1]:
        print line[1], 'B4'
        OPfound = 1
        break
The output is in the form :- extracted content, cell number
what i am trying to do is first check if there is any content in A5 - if there is content then extract it...else check for content in B5 - if there is content then extract it...else check content in A4
i am getting output for B5 and A4...but NOT FOR A5
also how do i check content in B4 ONLY if there is no content in A5,B5 and A4...
A:
Darian Moody has a nice solution to this challenge in his blog post:
a = 1
b = 2
c = True
rules = [a == 1,
         b == 2,
         c == True]
if all(rules):
    print("Success!")
The all() method returns True when all elements in the given iterable are true. If not, it returns False.
You can read a little more about it in the python docs here and more information and examples here.
(I also answered the similar question with this info here - How to have multiple conditions for one if statement in python)
A:
break doesn't let you leave if clauses, if that's what you are indeed attempting to break out of. The trick here is to remove the break statements and replace your second ifs with elifs like so:
if lineCount == 5:
    if line[0]:
        print line[0], 'A5'
        OPfound = 1
    elif line[1]:
        print line[1], 'B5'
        OPfound = 1
if lineCount == 4:
    if line[0]:
        print line[0], 'A4'
        OPfound = 1
    elif line[1]:
        print line[1], 'B4'
        OPfound = 1
This way you are only running through the second if statement in each lineCount clause if the first one failed, not every time.
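One way to handle the follow-up question (checking B4 only when there is no content in A5, B5 or A4) is to collect the candidate cells in priority order and stop at the first non-empty one. The sketch below keeps the question's Python 2 print syntax; rows is a hypothetical dict built while reading the file, mapping each line number to its split cells:

rows = {5: ['', ''], 4: ['', 'b4 value']}   # example data gathered while reading

# Priority order: A5, then B5, then A4, then B4.
checks = [(5, 0, 'A5'), (5, 1, 'B5'), (4, 0, 'A4'), (4, 1, 'B4')]

OPfound = 0
for line_no, col, name in checks:
    if rows[line_no][col]:
        print rows[line_no][col], name
        OPfound = 1
        break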
|
- Lebanon
- Baabda
- General Office Services
- Bois Rouge
Bois Rouge
0.0 (0 Reviews)
Company name: Bois Rouge
Phone
+961-5-454802
Website
Establishment year 1991
Reviews
This company has no reviews.
Questions & Answers
Have questions? Get answers from Bois Rouge or Yelleb users. Visitors haven’t asked any questions yet.
Company Details
Location map
Description
With over 25 years of experience, Galerie Bois Rouge has served thousands of customers from across Lebanon. Whether you're a new couple looking to customize your first home, or a homeowner redesigning your home, we'll help you pick the design that best fits your style, your personality and your budget. All our products are manufactured in our own factory. This means you can provide virtually any design you have in mind, down to the smallest detail. Say goodbye to mass-produced designs that you find in every home. Our products are 100% customized. No more choosing from limited options. We sit down with our customers and help them design their dream homes exactly the way they want. Then, we turn it into a reality.
Working hours
Listed in categories
Keywords
Verified business
Bois Rouge - company profile is confirmed by company owner / representative person / directory administrator.
Last update: February 2019 - View Status
Related Companies to Bois Rouge
Mobili Top
Verified, Phone, E-mail, Website
5.0 (2 Reviews)
Wood ' N ' Us - Ets. Demirdjian
Verified, Phone, E-mail, Map, Website, Products (3), Photos (4)
5.0 (1 Review)
K. Fleifel Ind. Co. sarl
Verified, Phone, E-mail, Map, Website, Products (3), Photos (4)
Beauty Home Gallery
Verified, Phone, E-mail, Map, Website
5.0 (1 Review)
FCC forced to play catch-up after shutdown
Mignon Clyburn had to push back a scheduled spectrum auction. | John Shinkle/POLITICO
“These schedule changes are necessary to give potential bidders and commission staff additional time for planning and preparation,” the FCC said in a public notice issued Monday.
Commission staffers told POLITICO they thought the agency could put the auction together in time despite the shutdown.
“I think the deadline is far enough off that we can get everything together,” a commission aide said.
While the H-Block auction may be the highest-profile issue affected by the shutdown, the commission was also forced to put a series of merger reviews on hold including AT&T’s purchase of Leap Wireless, Verizon’s buyout of Vodafone’s shares in Verizon Wireless and a string of broadcast TV deals.
The commission pushed back the deadline for its declaratory ruling on the Verizon-Vodafone deal. The shutdown also froze the 180-day transaction “shot clock” used for merger reviews, meaning the staff will most likely need more time to review pending deals.
And the FCC extended its filing window for low-power FM stations to Nov. 14 and moved a webinar for applicants to Oct. 24.
The FCC’s next monthly meeting, recently pushed to Oct. 28, is unlikely to include Tom Wheeler, President Barack Obama’s pick to chair the commission, and Republican Mike O’Rielly. An effort to confirm the pair last week failed after Sen. Ted Cruz (R-Texas) put a hold on Wheeler as he seeks the nominee’s views on political ad disclosure rules. The FCC remains a three-person commission for the time being, with Clyburn as the acting chief.
The meeting delay pushed back a key rule-making for an agreement on device interoperability in the lower 700 MHz band. Those frequencies are particularly desirable for wireless communications because they can penetrate walls and jump over mountains. Clyburn announced the voluntary industry deal last month among DISH, AT&T and smaller wireless companies. The rule-making is on the agenda for the Oct. 28 meeting, but commission sources tell POLITICO it could win approval earlier than that.
While the shutdown caused delays in the commission’s headline-grabbing proceedings, it’s had an impact beyond that, putting much of the commission’s routine business on hold. Permits for siting communications towers, license processing and responding to consumer complaints were nearly nonexistent during the shutdown.
“These things might not matter to most people, but if you are a small radio station and you can’t get a license transfer, that’s a big deal,” a commission aide said.
Non-invasive muscle fiber typing
Using NMR spectroscopy scans (carnosine levels), the ratio between fast and slow muscle fibers in the athlete’s body can be accurately estimated. Comparing these results with a large collection of data from both athletes and the general population can reveal a genetic endowment for endurance or sprinting activities. This can be relevant in talent identification, training optimization (recuperation time) and muscle injury risk.
Human skeletal muscles are composed of a mixture of fast and slow fibers. Born sprinters have predominantly fast fibers, endurance athletes have relatively more slow fibers. This fiber type distribution is a crucial factor in various sport disciplines. However, it is rarely measured because it used to require an invasive, painful and complicated muscle biopsy and analysis.
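As a purely illustrative sketch of that comparison, an athlete's scanned carnosine level could be expressed as a z-score against a reference population and mapped to a rough typology; the reference mean, standard deviation and cut-offs below are invented for the example and are not the actual reference data:

def carnosine_z_score(athlete_mm, reference_mean_mm=6.0, reference_sd_mm=1.5):
    """Express an athlete's muscle carnosine level relative to a reference group."""
    return (athlete_mm - reference_mean_mm) / reference_sd_mm

def fiber_profile(z):
    """Map the z-score to a rough typology (cut-offs assumed for illustration)."""
    if z > 1.0:
        return "fast-twitch dominant (sprint-oriented)"
    if z < -1.0:
        return "slow-twitch dominant (endurance-oriented)"
    return "mixed profile"

z = carnosine_z_score(8.2)   # hypothetical scan result
print(f"z = {z:.2f}: {fiber_profile(z)}")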
Non-invasive, quick and painless: An NMR scan takes about 20 minutes and is fully painless with no radiation exposure.
Essential information: Insight in slow or fast typology optimizes sport scientific guidance of athletes.
Testing at rest: No day-to-day variation due to fatigue. No risk of injury during testing.
Worldwide availability: Finding a way to bring this scanning technique and data interpretation to clubs, hospitals, training centers,…
Exploring new applications: Extend the current reference database and find new sport medical applications.
Set up a business model: Find a way to finance development, implementation and marketing costs. |
The 3rd Open Mathematical Olympiad for University Students will be held at International University for the Humanities and Development on 22-23.03.2020. The competition is intended for university students who are interested in mathematics.
The Olympiad is organized by International University for the Humanities and Development in cooperation with the Ministry of Education of Turkmenistan.
The official language of the Olympiad is English.
The Olympiad will be held among university students of Turkmenistan. International students can participate online as well.
The Olympiad will have two categories: A and B. Category A is for students who major in Mathematics and Computer Science. Category B is for students of other majors. The competitors are given four hours (240 minutes) to solve the given problems.
The total score of a competitor is equal to the sum of points that the competitor gained for his/her solutions of the given problems. Total score of the university team is equal to the sum of points that the competitors included into the team gained for his/her solutions of the given problems.
Each of the participating universities nominates several students who will take part in the Olympiad. There is no particular restriction on the number of nominees. Each participating university should announce a university team of four students and a number of individual student participants by registering on the official website of the Olympiad.
The final results are officially announced at the closing ceremony of the Olympiad.
Timetable:
22.03.2020
9:00-10:00- Opening Ceremony
10:30-14:30- Individual Competition
15:00-Marking Session
23.03.2020
9:00- Preliminary results of Individual Competition
10:00-12:00-Team Competition
12:00-13:00 - Appeals
15:00-Closing ceremony
Individual Olympiad Problems Category A Category B Team Olympiad Problems
Solutions Category A Category B Team Olympiad
Final Results Category A Category B Team Olympiad |
Waltham animal hospital adds physical therapy for dogs
Thursday
Penelope is a little beagle with long ears and gentle eyes. At 10 years old, she suffers from a disc injury that has caused atrophy and weakness in her rear legs.
“The surgeon recommended physical therapy, and every day she is a little better,” said Carol Neville, Penelope’s human caretaker. “She wouldn’t stand at all at the beginning and now, she stands squarely on all four legs.”
Neville has taken Penelope to the MSPCA-Angell West animal hospital in Waltham, where a recently expanded physical therapy program offers the kind of treatment that can prolong Penelope’s active years. Outfitted with doggie-sized underwater treadmills and a scaled-down swimming pool, and staffed with physical therapists and behavior experts, the animal hospital can now offer a wider range of treatment options for beloved family pets.
“Physical therapy, just as it has become the standard of care for humans, has become the standard of care for pets as well,” said veterinarian Jenny Palmer.
What humans get, pets can, too
It’s painful to watch an old dog struggle. When they hurt, we hurt. Medical treatments and therapies are growing to accommodate pet lovers’ demands, and pretty much anything available to humans, from chemotherapy to cardiac care, is available for pets.
Canine physical rehabilitation professionals adapt human physical therapy techniques to help increase a dog’s mobility, reduce pain and aid in recovery from illness or surgery. If a dog is recovering from an injury or surgery, or engages in competitive sports, the MSPCA physical therapy unit is available for care.
Debbie von Rechenberg of Waltham takes Mica, age 13, a “black and tan noselicker,” in for swim every two weeks in the therapy pool. Mica suffers from a spinal condition called spondylosis.
“She can’t do agility training anymore. She can’t jump. This is just a way to keep her engaged and strong,” von Rechenberg said.
Dr. Terry Bright runs Esme, the “office dog,” through a small agility-training course used for dog training and “nose training” classes at the hospital. Agility training is a sport that can be enjoyed by dog-and-person teams, but it is also a means to treat a dog with physical or behavioral concerns.
“Very often, if a dog is injured or has surgery and told to sit quietly and rest, that isn’t something they can tolerate. That’s where training comes in,” she said.
Penelope allows herself to be led into the Aqua Paws treadmill, and outfitted with a dog- sized life jacket. Water rises to about shoulder height. The treadmill starts moving very slowly, and the little dog starts walking.
Palmer explained the treadmill is helping Penelope with gait retraining.
“The action of walking in water reminds dogs how their legs are supposed to function, by having them perform the function in a slow exaggerated pattern,” she said. “Also, the warmth of the water helps with pain relief. You get a lot of bangs for the bucks.”
High cost for treatments
Speaking of bucks, physical therapy isn't cheap and, depending on the level of treatment, can cost approximately $100 per hour. Treatments could include hands-on massage, water therapy, and laser treatment, which is used as an alternative to or augmentation of drugs for pain management. A variety of packages with discounts are available from the MSPCA.
It is hard to argue the cost, though, when your family member is hurting. And treating injuries or aging with physical therapy can help increase the return on the investment made with surgery, Palmer said.
Max, an 11-year-old yellow lab, swims in the therapy pool, fetching and retrieving a plastic chew toy to the encouraging calls of therapist, Alyse Luurtsema. Maryellen Russo has driven 45 minutes from their home in Nahant to help make Max, who suffers from arthritis in his hips and elbows, more comfortable.
“He’s definitely a member of the family,” she said. “It’s sad to see them slow down when they get smarter as they age.”
XLT for my next bike
Hey guys. I'm looking into a Jamis XLT for my next bike. I've seen them on jensonusa pretty cheap, and I guess people are satisfied with them. I'm just going to buy the frame though. I need some info tho. They have the 1.0, 2.0 and 3.0, what's the difference?
I currently ride an Iron Horse hardtail, and love to climb with it, but the descents and light drops are rough. I'm looking for a bike that can be taken uphill and downhill. I don't do downhill racing, and I'm not much of a freerider. Will this be a good all-mountain frame for some singletrack and fast bumpy descents?
1. The higher the number the better the rear shock will be. (for the same year model, shock spec will change year to year)
2. Puuuuuuuuuurrrrrrrrrfffffffffeeeeeeccctt!
3. Probably a 17in. sit on a couple and see how they feel, thats the best way.
I bought the straight-up XLT frame. It's actually a step below the 1.0. It didn't matter to me b/c Jamis didn't sell the frame with the shock I wanted. I bought the frame and ordered a Van R rear shock. I LOVE IT!!!
It doesn't sound like you're going to be riding terribly aggressive so I think the Swinger on the 1.0 should suit you fine. I find that if I'm going to be riding really rough terrain...I prefer a coil sprung shock. With the Swinger, you'll probably save around a pound or so. Just build it, ride it, and see what ya think. It sounds like it will suit your style/terrain perfectly, and I +1 on the 17".
Like Chelboed, I too have the straight up XLT frame. I am running a Fox Rp23 in the back, and I love it. Before my XLT, I had a Jamis XC Expert (for a few months) and before that, I had an Iron Horse Warrior Expert hardtail. The reason that I bought the Rp23 is because I, like you don't do anything crazy, and I wanted it to still climb like my hardtail.
I have only demo'ed the swinger, and can't say much about it. I didn't love the feel, but I didn't spend forever dialing it in either. For what it's worth though, I do have a Manitou Minute :03 fork (which I love) that has the same SPV type platform design. I would not want that in a rear shock because you can't turn the platform off. It is active until the pressure in the shock is sufficient to exceed the threshold and then the shock engages as it would without a platform. The problem with that is not on the climbs but the descents. In order to eliminate pedal bob, you will need to put quite a bit of pressure in the SPV chamber, and by doing that, you will have a stiffer ride. In other words, your bike will not soak up the little stuff, but only engage on the bigger stuff. Personally, I run barely above the minimum pressure in my fork to strike the balance between platform and plush. I can get away with that because I rarely am out of the saddle when I climb, and the geometry is such that the front end doesn't have much of my weight on it.
I don't want to deter you from buying/riding a Jamis though. As far as money value goes, you simply cannot beat Jamis. I LOVE MY XLT. My dad has a Stumpjumper expert that cost him way more than I paid, and I would ride my XLT any day over his bike. Good luck in your search, and let me know if you have any more questions. |
Written by: Lillith Sinclair
After the titanic disaster of Gearbox's Aliens: Colonial Marines, the Alien franchise was in dire need of a kick up the posterior with a barbed-wire high heel. People were highly doubtful that a xenomorph creating deadly mayhem could still be a serious selling point after that hideous gaming misfire, but then The Creative Assembly and Sega united to bring us what is arguably one of the most effective, respectful and stunning horror games based on the Alien creative license: Alien Isolation.
Very simply put, Alien Isolation is as tense as sin and scary as Santa Claus raping your mother on Christmas Eve. You play as Amanda Ripley, the mentioned-in-passing daughter of Ellen, who has been relentlessly searching for her mother since Ellen disappeared 15 years ago on a mining job. Much like her mum, Amanda is a blue-collar worker who has very little interest in combat and only wishes to seek closure as to what happened to her mother. When she accepts a mission to travel to an out-of-commission space station known as Sevastapol, a branch of Seegson, which is a competitor of Weyland-Yutani, she soon finds herself stumbling into the creature that tore her and her mother apart in the first place. Plus, you know, some murderous c-grade androids (known collectively as Working Joes) and humans/unwilling cannon fodder desperate to stay alive, that sort of delicious stuff.
Anybody who has played this game and says they did not feel one ounce of concern or a rise in their heart rate is either made of stone or lying through their teeth. Alien Isolation lives up to the full moniker of "survival horror", with neither word out-favouring the other. Under-powered and absolutely under-prepared, Amanda must navigate the space station using only her wits and her will to survive. If you ask me, this is the "Alien" game that fans of the films and of intelligent stealth-based games have been asking for for a very, very long time. Resources are in scarce supply, danger lurks around every corner and a sense of constant oppression seeps through the screen and into the player's eyes, ears and mind.
Let me say this right off the bat: perhaps the strongest element of this game comes down to the sound design. The eerie, classic score by Jerry Goldsmith has been used with care and seamlessly added to by the game's composers Christian and Joe Henson, Alexis Smith, Jeff van Dyck, Byron Bullock, Sam Cooper and Haydn Payne. Who says too many cooks always spoil the broth? In addition to the glorious use of music, the familiar sound effects of the dirty retro future delivered by Ridley Scott in the first film are replicated and used to maximum effect, always keeping the player on the edge of their seat and two steps from going into brick labor. You will just be standing there when you hear something that your mind tells you could be the alien, causing you to cower behind a box or a table as you check your surroundings.
As for the visuals? Pretty damn astounding. This game wisely borrows the aesthetics of the world envisioned by Ridley Scott and gives you the opportunity to wander through it with wonder and creeping fear. This is what an "Alien" game is supposed to be about: absolute fear, unbearable tension, and the primal need to survive. This isn't about going in guns blazing, this is about the mouse avoiding the claws of the big, scary cat.
Speaking of which, that brings us to the xenomorph . . . or Steve, as I call him.
Steve gives you reason to fear the xenomorph again, because his AI in this game is absolutely spontaneous. That is to say, he has no set patrol patterns or self-same behaviour at any given moment. Every playthrough is different, and every movement the alien makes is in response to how you as the player act. The AI has been programmed to adapt to how you make Amanda move and behave. Depending on how you respond to an encounter with it, it can become super aggressive and persistent, seemingly on a whim.
Trust me when I say that running around like a chicken with its head cut off is not the ideal way to go. The game does not condone reckless behaviour, and oftentimes, if you do something stupid and are not mindful of your surroundings, you will be punished for your transgressions.
The xenomorph is once more a frightening netherworld beast who is out for your blood, has incredibly heightened senses and isn't afraid to play instinctive mind-games with you. Trust me, you will die a lot in this game, because the alien is absolutely impossible to kill and the best you can do is stun it, though that is only ever a very temporary solution. On top of this revelation, if the xenomorph happens to spot you, it's an instant one-hit kill. Don't try to outrun it or use conventional weaponry; the end result will always be the same. The only thing that can give it pause is the flamethrower, but again, such a solution is only momentary and it will continue to haunt you. No matter what you do, Steve will always be on your tail and it is up to you to make sure you are a few steps ahead.
Which leads me to my nitpicks. No, believe it or not, I have no issue with the admittedly strict difficulty this game throws at you; it makes total sense. But one thing that may grate viciously on the nerves of some players is the saving system. Most games these days have positively spoiled us with generous auto-saves and manual saving at any time, but not so with Alien Isolation. Scattered around every area are save stations that resemble phones, and it is mandatory for you to report to these whenever you make substantial progress. You can play through a lengthy stretch only to be killed by the alien or another hostile just before you reach its conclusion, and the game will revert back to the point at which you last saved.
Always, always save, no matter how 'safe' you may feel you are. A small quibble I have that didn't really shatter my teeth per se is the rather touchy interaction button system. Ripley has a small white reticle on the screen that, when it comes into contact with something such as a fusebox or a mainframe to hack, can automatically engage it. However, if Amanda is slightly off-target, the option does not show up and you need to manipulate her movement until it does. It can be a little annoying when you are hurriedly trying to enter a passcode into a security door system with Steve breathing down your neck. I think that could possibly have been an intentional design choice, considering the entire game runs on a vicious circle of tension building, but it can become an annoyance when your mind is on other, bigger things.
Dealing with the Working Joes can be a pain in the backside in some situations too. You will be wanting to get a task done, fully in the knowledge that Steve could be lurking nearby, but those nosy, touchy-punchy synthetics can put a cramp in your style; you can either take them on, or run around and hide until they give up looking for you, and only then resume your work. Don't get me wrong, I do like the challenge of multi-tasking in a scary situation, but there are some scenarios you just want to get through with as little fuss as possible while still retaining enough resources.
Other bonus materials include Crew Expendable, in which you step back in time and space as Ripley, Parker or Dallas in an attempt to hunt down the hostile entity that has just crashed into their existence. It's a pretty solid add-on with a wonderful return from Sigourney Weaver, Tom Skerritt, Yaphet Kotto and Veronica Cartwright in spot-on vocal performances. Although the point of this DLC is somewhat redundant, since we ALL know what becomes of these characters, it's still wonderful to hear these actors come back with pride in their hearts and be such good sports.
The secondary DLC, The Last Survivor, is strictly Ellen Ripley making her last panicked, almost-comatose-with-terror stand against the xenomorph. Again, it's something we have seen, but damn if my pulse wasn't throbbing when I played through it, fully experiencing the intensity and excitement of a terrifying situation.
Additionally, the game itself presents you with Survivor Mode, in which you assume the role of Amanda once more and work against the clock to outsmart Steve and some friends and escape a perilous environment. It's good, challenging fun, but I am far more partial to simply playing the story mode, because that's where the true money lies.
9/10. All chestbursting aside, Alien Isolation is about as good as a AAA survival horror/sci-fi/stealth game can be. Considering it has firmly rooted itself in the existing continuity, The Creative Assembly had to work double time to ensure there would not be another Colonial Marines disaster. It paid off in slimy, snarling spades. If sneaking around and improvisational survival is not your cup of tea, this game may not be for you, but if you are a fan of the genre, a lover of the franchise and you want to find an exception to a gaming rule you have, give this hair-raiser a go. I don't feel you will be disappointed in the slightest.
MORE POSTS BY LILLITH SINCLAIR
451 Pa. Superior Ct. 520 (1996)
680 A.2d 887
Vincent C. MOTTER, Appellant,
v.
The MEADOWS LIMITED PARTNERSHIP; Charles J. Taylor, P.E., General Partner & Trustee, the Meadows Sewer Company, Robert G. Hartman t/d/b/a Whittock-Hartman Engineers and Penn Harris Construction Company, Inc., Appellees,
v.
KDS EXCAVATING, INC.
Superior Court of Pennsylvania.
Argued June 11, 1996.
Filed July 17, 1996.
*523 Anthony Stefanon, Harrisburg, for appellant.
Michael R. Kelly, Harrisburg, for appellee Meadows Ltd. Partnership.
Scott D. Moore, Carlisle, for appellee Robert G. Hartman.
Karen S. Feichtenberger, Harrisburg, for appellee Penn Const.
Before TAMILIA, JOHNSON and MONTEMURO, JJ.
MONTEMURO, Justice.[*]
This is an appeal from the Orders of the Court of Common Pleas of Cumberland County granting summary judgments in favor of Appellees. For the following reasons, we affirm.
On August 23, 1989, Appellant Vincent Motter ("Appellant") was injured when the ten foot deep trench he was working in collapsed on top of him. The site of the accident was part of a construction project known as "The Meadows" located in *524 Middlesex Township, Cumberland County, Pennsylvania. Originally this land was owned by The Meadows Limited Partnership ("Meadows Limited") and Charles J. Taylor ("Taylor"). The Meadows Sewer Company ("Meadows Sewer") was incorporated by Meadows Limited and Taylor to construct, own, and operate the sewer facilities for the subdivision. Meadows Limited and Taylor laid out a series of subdivision plans for The Meadows. Final Subdivision Plan # 3 was the last plan designed (dated March 8, 1989) and was the plan adhered to by the parties. Robert G. Hartman, t/d/b/a Whittock-Hartman Engineers ("Hartman"), was hired by Meadows Limited, Taylor, and Meadows Sewer to design and engineer plans for construction of sewer facilities for The Meadows subdivision.
On June 30, 1989, Penn Harris Construction Company ("Penn Harris") purchased the land and the rights to The Meadows project known as Final Subdivision Plan # 3. Penn Harris hired KDS Excavating Inc. ("KDS"), an independent contractor, to install sanitary sewer facilities for the subdivision.
On August 23, 1989, KDS was excavating a portion of the Final Subdivision Plan # 3 marked manhole 3-3, Lot # 34, on the survey performed by Hartman. The survey revealed that the trench in this area was to be more than ten feet deep. Hartman's engineering drawings also showed that the soil at this location was unstable and, under applicable OSHA standards, required KDS to use shoring and bracing techniques to ensure worker safety. Representatives of KDS testified that they were unaware of these OSHA requirements and, therefore, did not follow them. Instead, workers for KDS "sloped" or "benched" the sides of the trench to prevent cave-ins. Appellant, an employee of KDS, was working in the ten foot deep trench, when the sides of it collapsed on top of him causing his injuries.
On August 12, 1991, Appellant brought suit for his injuries against: the original landowner and designer of the subdivision plans, Meadows Limited and Taylor; the company created to own and operate the sewer facilities for the Meadows *525 subdivision, Meadows Sewer; the landowner at the time of the accident, Penn Harris; and the engineer who designed the construction of the sewage facilities for the subdivision, Hartman. In addition, appellant's employer KDS was joined as an additional defendant on December 6, 1991.
On July 18, 1994, the trial court granted summary judgment as to Meadows Limited, Taylor, Meadows Sewer, and Hartman. On June 9, 1995, the court granted Penn Harris' motion for summary judgment. Appellant now appeals these decisions as to all defendants except Meadows Sewer.
On a motion for summary judgment, the record must be viewed in the light most favorable to the nonmoving party, and all doubts as to existence of genuine issue of material fact must be resolved against the moving party. Summary judgment may be entered only in cases where the right is clear and free from doubt. Hayward v. Medical Ctr. of Beaver County, 530 Pa. 320, 324, 608 A.2d 1040, 1042 (1992). In such clear cases, summary judgment is proper if the moving party is entitled to judgment as a matter of law. Santillo v. Reedel, 430 Pa.Super. 290, 294, 634 A.2d 264, 266 (1993). After viewing the facts of the case in a light most favorable to Appellant, the trial court determined that no genuine issues of material fact existed and, therefore, granted Appellees' motions for summary judgment. We agree.
In reviewing an order granting summary judgment, an appellate court must determine whether there is a genuine issue of material fact. The grant of summary judgment can be sustained only if pleadings, depositions, answers to interrogatories and admissions show that there is no genuine issue as to material fact and that the moving party is entitled to judgment as a matter of law. Logan v. Mirror Printing Co. of Altoona, Pa., 410 Pa.Super. 446, 449, 600 A.2d 225, 227 (1991). The Superior Court must examine the record in the light most favorable to the nonmoving party, and will only disturb the trial court's decision if there is an abuse of discretion or an error of law. Second Fed. Sav. and Loan Ass'n. v. Brennan, 409 Pa.Super. 581, 585, 598 A.2d 997, 998 (1991).
*526 The trial court properly granted summary judgment as to Meadows Limited, Taylor and Hartman. After a review of the records and the parties' briefs, we find that the trial court adequately addressed and disposed of Appellant's arguments. We acknowledge the fact that the trial court did not address the contention made by Appellant that Taylor and Meadows Limited should be vicariously liable for the negligence of Hartman under sections 416 and 427 of Restatement (Second) of Torts. However, based on the trial court's proper finding that Hartman was not negligent, there is no need to discuss these claims.
Appellant also disputes the trial court's grant of summary judgment in favor of Penn Harris. Penn Harris owned the property on which this accident happened and was the party responsible for hiring KDS to excavate the trench in which Appellant was injured. As a general rule, "the employer of an independent contractor is not liable for the physical harm caused [to] another by an act or omission of the contractor or his servant." Mentzer v. Ognibene, 408 Pa.Super. 578, 589, 597 A.2d 604, 610 (1991), alloc. denied, 530 Pa. 660, 609 A.2d 168 (1992) (citing Hader v. Coplay Cement Mfg. Co., 410 Pa. 139, 151, 189 A.2d 271, 277 (1963) (citations omitted)). "An independent contractor is in possession of the necessary area occupied by the work contemplated under the contract, and his responsibility replaces that of the owner who is, during the performance of the work by the contractor, out of possession and without control over the work or the premises." Mentzer, 408 Pa.Super. at 589, 597 A.2d at 610 (citing Hader, 410 Pa. at 151, 189 A.2d at 277).
An exception to this general rule is recognized, where the independent contractor is hired to do work which the employer should recognize as likely to create a special danger or peculiar risk of physical harm to others unless special precautions are taken. Restatement (Second) of Torts, §§ 416 and 427 (1965) (adopted as law of Pennsylvania in Philadelphia Elec. Co. v. James Julian, Inc., 425 Pa. 217, 228 A.2d 669 (1967)). Appellant argues that the sewer trench *527 KDS was hired to excavate in this case presented such a special danger or peculiar risk as to fall within this exception. We disagree.
To determine whether a special danger or peculiar risk exists, the court in Ortiz v. Ra-El Dev. Corp., 365 Pa.Super. 48, 528 A.2d 1355 (1987), alloc. denied, 517 Pa. 608, 536 A.2d 1332 (1987), established a two prong test: 1) Was the risk foreseeable to the employer of the independent contractor at the time the contract was executed?; and 2) Was the risk different from the usual and ordinary risk associated with the general type of work done, i.e., does the specific project or task chosen by the employer involve circumstances that were substantially out-of-the-ordinary? Id. at 53, 528 A.2d at 1359. This two step process requires that:
"the risk be recognizable in advance and contemplated by the employer [of the independent contractor] at the time the contract was formed ... [and that] it must not be a risk created solely by the contractor's `collateral negligence' ... [i.e.,] negligence consisting wholly of the improper manner in which the contractor performs the operative details of the work."
Edwards v. Franklin & Marshall College, 444 Pa.Super. 1, 7, 663 A.2d 187, 190 (1995) (quoting Mentzer, 408 Pa.Super. at 592, 597 A.2d at 610).
The distinction between ordinary risk and peculiar risk is a mixed question of law and fact which can be made, in clear cases, by the trial judge as a matter of law. Ortiz, 365 Pa.Super. at 56, 528 A.2d at 1359 (quoting McDonough v. United States Steel Corp., 228 Pa.Super. 268, 276 n. 6, 324 A.2d 542, 546 n. 6 (1974)). It should also be noted that "the Peculiar Risk Doctrine is an exception to a general rule and, [as such], it should be viewed narrowly." Edwards, 444 Pa.Super. at 7, 663 A.2d at 190. As the court noted in Marshall v. Southeastern Pa. Transp. Auth., 587 F.Supp. 258 (E.D.Pa.1984):
In order for the liability concepts involving contractors to retain any meaning, especially in industries such as construction *528 where almost every job task involves the potential for injury unless ordinary care is exercised, peculiar risk situations should be viewed narrowly, as any other exception to a general rule is usually viewed.
587 F.Supp. at 264.
Even when viewed in a light most favorable to Appellant, the facts of this case do not support the Appellant's argument that the peculiar risk doctrine should apply because, as the trial court noted, "[d]igging a sewer trench around ten feet deep in stable or unstable soil appears to be nothing more than a common, routine worksite procedure." Motter v. The Meadows Ltd. Partnership et al., No. 2740 Civ. 1991 pp. 9-10 (Cumberland Co. July 18, 1994). Thus, the trial court did not err in granting Penn Harris' motion for summary judgment.
Excavation of a sewage trench brings with it attendant risks, one of which is collapse of the trench walls. Appellant cites a number of Pennsylvania cases for the proposition that digging a sewer trench in and of itself poses a peculiar risk of harm to others. Heath v. Huth Engineers, Inc., 279 Pa.Super. 90, 93, 420 A.2d 758, 760 (1980) (holding that employer of independent contractor properly held liable for collapse of sewer trench); Dudash v. Palmyra Borough Auth., 335 Pa.Super. 1, 9, 483 A.2d 924, 928 (1984) (finding that employer of independent contractor may be held liable for death of employee of contractor hired to dig sewer trench); Ortiz, 365 Pa.Super. at 54, 528 A.2d at 1358-59 (stating, in dicta, that under the Heath facts, employer of independent contractor would be liable for peculiar risk posed by digging of sewer trench).
However, we disagree with the premise that excavating a sewage ditch poses a peculiar risk of harm to others. The Heath decision, which so holds, gives the "peculiar risk" issue such cursory treatment that it is of little help to us in our decision. We feel that our recent decision in Edwards v. Franklin and Marshall College serves as a better guide to applying the peculiar risk doctrine to the facts before us today. While our decision in Edwards did not involve a *529 trenching accident, the facts are quite analogous to those faced by this court in the instant case.
In Edwards, a construction worker employed by an independent contractor was injured when he fell through a roof on property owned by the contractor's employer. 444 Pa.Super. at 3, 663 A.2d at 188. The contractor was hired to do primarily exterior renovations including a significant amount of rooftop work. Id. The plaintiff was injured when a portion of the roof he was working on collapsed and he fell through the hole to the floor below. Id. at 4, 663 A.2d at 188-189. For our purposes, it is worth noting that the contractor in Edwards was aware of the dangerous condition of the roof, as was the plaintiff, who, at the time he was injured, was fixing a hole where a fellow employee had previously fallen through. Id. at 3, 663 A.2d at 188.
In the instant case, the independent contractor was hired to excavate a sewer trench. Appellant was injured when a portion of the trench he was working on collapsed on top of him. The record reveals that the employees, including Appellant, were aware of the dangerous condition of this trench. It is also clear from the testimony of KDS employees that the trench in which Appellant was injured had partially collapsed prior to the day of the accident.
The court in Edwards recognized that the contractor in that case was in the business of constructing and reconstructing commercial structures. Id. at 8, 663 A.2d at 191. The dangerous condition of the building was known by all of the parties involved. Id. "Because the remodeling of these old structures necessarily involved working high off the ground, the danger of falling is apparent." Id. The court went on to conclude, "[c]leary, the risk that [plaintiff] might fall through the roof, given the known condition of the building that was being repaired, did not present a `special danger' or `peculiar risk.'" Id.
The same can be said of the trench dug by KDS in the instant case. Excavating a sewer trench involves work in deep holes where the danger of collapse is an obvious and *530 unavoidable risk. KDS employees testified that they were aware of the danger of collapse. We agree with the trial court's finding that cave-in of a sewer trench is not an unusual or unexpected risk, but rather, is a risk faced by excavating companies every day.
We are also unpersuaded by Appellant's argument that the condition of the soil in this case made the project unusually dangerous. While Appellant's expert testified that the shale soil surrounding the trench posed a significant risk of cave-in, whether the risk is "peculiar" for purposes of sections 416 & 427 is a legal determination. We conclude that the conditions confronted by KDS were not so unusual or out-of-the-ordinary as to pose a peculiar risk to others. As the court noted in Ortiz, "all construction work involves a risk of some harm; only where the work is done under unusually dangerous circumstances does it involve a `special danger' or `peculiar risk.'" Id. at 7-8, 528 A.2d at 1359.
The record reveals that KDS had been doing sewer work for four years prior to the accident. It is also clear from the record that employees of KDS had worked in shale soil before and were aware of its propensity to cave-in. In fact, the record revealed that soil had repeatedly fallen into the trench from the side walls. According to testimony of KDS employees, because of a series of unstable incidents in the trench, a trench box had been brought to the worksite. Unfortunately for Appellant, this safety device was never used.
Appellant maintains that because OSHA has set a number of safety standards to be followed when working with this type of soil, the risk posed by digging this trench was peculiar or especially dangerous.[1] We disagree. In this case, *531 as was the case in, Lorah v. Luppold Roofing Co., Inc., 424 Pa.Super. 439, 622 A.2d 1383 (1993), "[w]hat made the activity of increased risk was not the activity itself, which is normally of minimal risk, but the failure of the independent contractor (and/or his servants) to take adequate precautions." Id. at 436, 622 A.2d at 1386. We conclude that the task of digging a trench, when done properly, is not one that is peculiarly dangerous. Appellant made no showing that his accident would have occurred had the proper safety precautions been taken. It was KDS's failure to abide by the OSHA rules and regulations, and not the nature of the soil, which increased the danger of cave-in. A number of other jurisdictions have already taken the position that the task of digging trenches does not present any special danger or peculiar risk when proper safety measures are taken. See, e.g., Peterson v. City of Golden Valley, 308 N.W.2d 550 (N.D.1981) (finding that trenching project does not present extraordinary risk when standard safety precautions are taken); Micheletto v. State of Montana, 244 Mont. 483, 798 P.2d 989 (1990) (adopting the position taken by the North Dakota Supreme Court in Peterson); Balagna v. Shawnee County, 233 Kan. 1068, 668 P.2d 157 (1983) (concluding that injuries resulting from the caving in of ditches because of lack of proper safety precautions are direct result of negligence of contractor in performance of excavation work).
Furthermore, we conclude that Penn Harris is not liable for KDS's failure to take proper safety precautions. As we said in Peffer v. Penn 21 Assocs., 406 Pa.Super. 460, 594 A.2d 711 (1991):
[v]iolation of safety conditions alone ... cannot be a basis for carving out exceptions to the rule that the owner is not liable for injuries incurred by employees of a subcontractor, as to do so would nullify the rule for all practical purposes since many, if not most, industrial accidents can be attributed *532 to a failure to comply with safety regulations of either state or federal agencies.
Id. at 465, 594 A.2d at 713. Penn Harris was justified in believing that KDS would follow the legally required safety precautions necessary to protect its workers. As stated in Comment b of section 426 of the Restatement (Second) of Torts, the employer of an independent contractor "is not required to contemplate or anticipate abnormal or unusual kinds of negligence on the part of the contractor, or negligence in the performance of operative details of the work which ordinarily may be expected to be carried out with proper care ..." Further support for this position is found in Comment b of section 427 of the Restatement (Second) of Torts: The employer is only liable if the danger is "normally to be expected in the ordinary course of the usual or prescribed way of doing [the work] ..." Section 427 "has no application where the negligence of the contractor creates a new risk, not inherent in the work itself or in the ordinary or prescribed way of doing it ..." Restatement (Second) of Torts § 427 cmt. d (1965).
This reasoning was recently applied by the court in Mentzer. In that case, the plaintiff was injured when he fell through a stairwell opening that lacked proper safety devices. Id. at 583, 597 A.2d at 607. The court stated that "the lack of such protective devices at the perimeter of the stairwell opening is the result of the ordinary negligence by the contractor in the operative details of the work and is a classic example of collateral negligence by the contractor ... for which the property owner is not responsible." Id. at 589, 597 A.2d at 611. In the instant case, Appellant's accident was a direct result of KDS's failure to install the trench box as is required under applicable OSHA regulations. The record clearly indicates that this device was not only available, but was, in fact, at the worksite at the time of Appellant's accident. The work itself in this case was not peculiarly or especially dangerous, but rather, KDS was negligent in the manner in which it went about performing the task. Because this accident was caused by the ordinary negligence of KDS in *533 the "operative details" of digging the sewer trench, Penn Harris is not responsible for the injuries suffered by Appellant.
For the foregoing reasons, we affirm the trial court's grant of summary judgment in favor of Penn Harris and dismiss all claims made by Appellant.
NOTES
[*] Retired Justice assigned to Superior Court.
[1] OSHA ("Occupational Safety and Health Act") Section 1926.652(a) provides minimum standards for safe trenching operations:
1926.652 Specific trenching requirements.
(a) Banks more than 5 feet high shall be shored, laid back to a stable slope, or some other equivalent means of protection shall be provided where employees may be exposed to moving ground or cave-ins. Refer to Table P-1 as a guide in sloping of banks ...
Table P-1 states that "[c]lays, silts, loams or nonhomogeneous soils require shoring or bracing" and "[t]he presence of groundwater requires special treatment."
|
/*****************************************************************************\
* Smart 64k Color Bank Manager
*
* Copyright (c) 1992 Microsoft Corporation
\*****************************************************************************/
#include "driver.h"
/*****************************************************************************\
* pcoBankStart - Start the bank enumeration using the clip object.
*
* Used when the destination is the screen and we can't do the clipping
* ourselves (as we can for blt's).
\*****************************************************************************/
CLIPOBJ* pcoBankStart(
PPDEV ppdev,
RECTL* prclScans,
SURFOBJ* pso,
CLIPOBJ* pco)
{
LONG iTopScan;
// Remember what the last scan is that we're going to, and
// make sure we only try to go as far as we need to. It could
// happen that we get a prclScans bigger than the screen:
ppdev->iLastScan = min(prclScans->bottom, (LONG) ppdev->cyScreen);
// Adjust for those weird cases where we're asked to start enumerating
// above or below the bottom of the screen:
iTopScan = max(0, prclScans->top);
iTopScan = min(iTopScan, (LONG) ppdev->cyScreen - 1);
// pco may legitimately be NULL (treated as trivial clipping below), so
// guard the dereference:
if ((pco != NULL) && (pco->iDComplexity != DC_TRIVIAL))
{
iTopScan = max(pco->rclBounds.top, iTopScan);
ppdev->iLastScan = min(pco->rclBounds.bottom, ppdev->iLastScan);
}
// Map in the top bank:
if (iTopScan < ppdev->rcl1WindowClip.top ||
iTopScan >= ppdev->rcl1WindowClip.bottom)
{
ppdev->pfnBankControl(ppdev, iTopScan, JustifyTop);
}
pso->pvScan0 = ppdev->pvBitmapStart;
if ((pco == NULL) || (pco->iDComplexity == DC_TRIVIAL))
{
// The call may have come down to us as having no clipping, but
// we have to clip to the banks, so use our own clip object:
pco = ppdev->pcoNull;
pco->rclBounds = ppdev->rcl1WindowClip;
ASSERTVGA(pco->fjOptions & OC_BANK_CLIP, "Default BANK_CLIP not set");
ASSERTVGA(pco->iDComplexity == DC_RECT, "Default clip not DC_RECT");
}
else
{
// Save the engine's clip object data that we'll be tromping on:
ppdev->rclSaveBounds = pco->rclBounds;
ppdev->iSaveDComplexity = pco->iDComplexity;
ppdev->fjSaveOptions = pco->fjOptions;
// Let engine know it has to pay attention to the rclBounds of the
// clip object:
pco->fjOptions |= OC_BANK_CLIP;
// Use the bank bounds if they are tighter than the existing
// bounds.
if (pco->rclBounds.top <= ppdev->rcl1WindowClip.top)
pco->rclBounds.top = ppdev->rcl1WindowClip.top;
if (pco->rclBounds.bottom >= ppdev->rcl1WindowClip.bottom)
pco->rclBounds.bottom = ppdev->rcl1WindowClip.bottom;
if ((pco->rclBounds.top >= pco->rclBounds.bottom) ||
(pco->rclBounds.left >= pco->rclBounds.right))
{
// It's conceivable that we could get a situation where our
// draw rectangle is completely disjoint from the specified
// rectangle's rclBounds. Make sure we don't puke on our
// shoes:
pco->rclBounds.left = 0;
pco->rclBounds.top = 0;
pco->rclBounds.right = 0;
pco->rclBounds.bottom = 0;
ppdev->iLastScan = 0;
}
}
return(pco);
}
/*****************************************************************************\
* bBankEnum - Continue the bank enumeration.
\*****************************************************************************/
BOOL bBankEnum(PPDEV ppdev, SURFOBJ* pso, CLIPOBJ* pco)
{
LONG yNewTop = ppdev->rcl1WindowClip.bottom;
if (yNewTop >= ppdev->iLastScan)
{
// Okay, that was the last bank, so restore our structures:
if (pco != ppdev->pcoNull)
{
pco->rclBounds = ppdev->rclSaveBounds;
pco->iDComplexity = ppdev->iSaveDComplexity;
pco->fjOptions = ppdev->fjSaveOptions;
}
return(FALSE);
}
ppdev->pfnBankControl(ppdev, yNewTop, JustifyTop);
ASSERTVGA(yNewTop >= ppdev->rcl1WindowClip.top &&
yNewTop < ppdev->rcl1WindowClip.bottom, "Out of bounds");
// Adjust the pvScan0 because we've moved the window to view
// a different area:
pso->pvScan0 = ppdev->pvBitmapStart;
// Set the bounds to the new bank:
pco->rclBounds.top = yNewTop;
pco->rclBounds.left = ppdev->rcl1WindowClip.left;
pco->rclBounds.bottom = ppdev->rcl1WindowClip.bottom;
pco->rclBounds.right = ppdev->rcl1WindowClip.right;
if (pco != ppdev->pcoNull)
{
// If we were originally given a non-trivial clip object, we have
// to clip to the original bounds:
if (pco->rclBounds.left <= ppdev->rclSaveBounds.left)
pco->rclBounds.left = ppdev->rclSaveBounds.left;
if (pco->rclBounds.right >= ppdev->rclSaveBounds.right)
pco->rclBounds.right = ppdev->rclSaveBounds.right;
if (pco->rclBounds.bottom >= ppdev->rclSaveBounds.bottom)
pco->rclBounds.bottom = ppdev->rclSaveBounds.bottom;
}
return(TRUE);
}
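/*****************************************************************************\
* Usage sketch (illustrative only, not part of the original driver source):
* a drawing routine that targets the screen would typically bracket its work
* with pcoBankStart/bBankEnum as below.  'vDrawOneBank' is an assumed
* placeholder for whatever per-bank drawing the caller performs inside
* pcoBank->rclBounds; everything else uses only routines defined in this file.
\*****************************************************************************/
#if 0   // illustrative sketch -- not compiled
VOID vDrawBanked(PPDEV ppdev, SURFOBJ* pso, CLIPOBJ* pco, RECTL* prclDraw)
{
    CLIPOBJ* pcoBank = pcoBankStart(ppdev, prclDraw, pso, pco);
    do
    {
        // The bank manager has mapped in the 64k window covering
        // pcoBank->rclBounds and fixed up pso->pvScan0 accordingly:
        vDrawOneBank(ppdev, pso, pcoBank);          // assumed helper
    } while (bBankEnum(ppdev, pso, pcoBank));       // restores pco when done
}
#endif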
/***************************************************************************\
* vBankStartBltSrc - Start the bank enumeration for when the screen is
* the source.
\***************************************************************************/
VOID vBankStartBltSrc(
PPDEV ppdev,
SURFOBJ* pso,
POINTL* pptlSrc,
RECTL* prclDest,
POINTL* pptlNewSrc,
RECTL* prclNewDest)
{
LONG xRightSrc;
LONG yBottomSrc;
LONG iTopScan = max(0, pptlSrc->y);
if (iTopScan >= (LONG) ppdev->cyScreen)
{
// In some instances we may be asked to start on a scan below the screen.
// Since we obviously won't be drawing anything, don't bother mapping in
// a bank:
return;
}
// Map in the bank:
if (iTopScan < ppdev->rcl1WindowClip.top ||
iTopScan >= ppdev->rcl1WindowClip.bottom)
{
ppdev->pfnBankControl(ppdev, iTopScan, JustifyTop);
}
if (ppdev->rcl1WindowClip.right <= pptlSrc->x)
{
// We have to watch out for those rare cases where we're starting
// on a broken raster and we won't be drawing on the first part:
ASSERTVGA(ppdev->flBank & BANK_BROKEN_RASTER1, "Weird start bounds");
ppdev->pfnBankNext(ppdev);
}
pso->pvScan0 = ppdev->pvBitmapStart;
// Adjust the source:
pptlNewSrc->x = pptlSrc->x;
pptlNewSrc->y = pptlSrc->y;
// Adjust the destination:
prclNewDest->left = prclDest->left;
prclNewDest->top = prclDest->top;
yBottomSrc = pptlSrc->y + prclDest->bottom - prclDest->top;
prclNewDest->bottom = min(ppdev->rcl1WindowClip.bottom, yBottomSrc);
prclNewDest->bottom += prclDest->top - pptlSrc->y;
xRightSrc = pptlSrc->x + prclDest->right - prclDest->left;
prclNewDest->right = min(ppdev->rcl1WindowClip.right, xRightSrc);
prclNewDest->right += prclDest->left - pptlSrc->x;
}
/***************************************************************************\
* bBankEnumBltSrc - Continue the bank enumeration for when the screen is
* the source.
\***************************************************************************/
BOOL bBankEnumBltSrc(
PPDEV ppdev,
SURFOBJ* pso,
POINTL* pptlSrc,
RECTL* prclDest,
POINTL* pptlNewSrc,
RECTL* prclNewDest)
{
LONG xLeftSrc;
LONG xRightSrc;
LONG yBottomSrc;
LONG cx = prclDest->right - prclDest->left;
LONG cy = prclDest->bottom - prclDest->top;
LONG dx;
LONG dy;
LONG yBottom = min(pptlSrc->y + cy, (LONG) ppdev->cyScreen);
LONG yNewTop = ppdev->rcl1WindowClip.bottom;
if (ppdev->flBank & BANK_BROKEN_RASTER1)
{
ppdev->pfnBankNext(ppdev);
if (ppdev->rcl1WindowClip.left >= pptlSrc->x + cx)
{
if (ppdev->rcl1WindowClip.bottom < yBottom)
ppdev->pfnBankNext(ppdev);
else
{
// We're done:
return(FALSE);
}
}
}
else if (yNewTop < yBottom)
{
ppdev->pfnBankControl(ppdev, yNewTop, JustifyTop);
if (ppdev->rcl1WindowClip.right <= pptlSrc->x)
{
ASSERTVGA(ppdev->flBank & BANK_BROKEN_RASTER1, "Weird bounds");
ppdev->pfnBankNext(ppdev);
}
}
else
{
// We're done:
return(FALSE);
}
// Adjust the source:
pso->pvScan0 = ppdev->pvBitmapStart;
pptlNewSrc->x = max(ppdev->rcl1WindowClip.left, pptlSrc->x);
pptlNewSrc->y = yNewTop;
// Adjust the destination:
dy = prclDest->top - pptlSrc->y; // y delta from source to dest
prclNewDest->top = yNewTop + dy;
yBottomSrc = pptlSrc->y + cy;
prclNewDest->bottom = min(ppdev->rcl1WindowClip.bottom, yBottomSrc) + dy;
dx = prclDest->left - pptlSrc->x; // x delta from source to dest
xLeftSrc = pptlSrc->x;
prclNewDest->left = pptlNewSrc->x + dx;
xRightSrc = pptlSrc->x + cx;
prclNewDest->right = min(ppdev->rcl1WindowClip.right, xRightSrc) + dx;
return(TRUE);
}
/***************************************************************************\
* vBankStartBltDest - Start the bank enumeration for when the screen is
* the destination.
\***************************************************************************/
VOID vBankStartBltDest(
PPDEV ppdev,
SURFOBJ* pso,
POINTL* pptlSrc,
RECTL* prclDest,
POINTL* pptlNewSrc,
RECTL* prclNewDest)
{
LONG iTopScan = max(0, prclDest->top);
if (iTopScan >= (LONG) ppdev->cyScreen)
{
// In some instances we may be asked to start on a scan below the screen.
// Since we obviously won't be drawing anything, don't bother mapping in
// a bank:
return;
}
// Map in the bank:
if (iTopScan < ppdev->rcl1WindowClip.top ||
iTopScan >= ppdev->rcl1WindowClip.bottom)
{
ppdev->pfnBankControl(ppdev, iTopScan, JustifyTop);
}
if (ppdev->rcl1WindowClip.right <= prclDest->left)
{
// We have to watch out for those rare cases where we're starting
// on a broken raster and we won't be drawing on the first part:
ASSERTVGA(ppdev->flBank & BANK_BROKEN_RASTER1, "Weird start bounds");
ppdev->pfnBankNext(ppdev);
}
pso->pvScan0 = ppdev->pvBitmapStart;
// Adjust the destination:
prclNewDest->left = prclDest->left;
prclNewDest->top = prclDest->top;
prclNewDest->bottom = min(ppdev->rcl1WindowClip.bottom, prclDest->bottom);
prclNewDest->right = min(ppdev->rcl1WindowClip.right, prclDest->right);
// Adjust the source if there is one:
if (pptlSrc != NULL)
*pptlNewSrc = *pptlSrc;
}
/***************************************************************************\
* bBankEnumBltDest - Continue the bank enumeration for when the screen is
* the destination.
\***************************************************************************/
BOOL bBankEnumBltDest(
PPDEV ppdev,
SURFOBJ* pso,
POINTL* pptlSrc,
RECTL* prclDest,
POINTL* pptlNewSrc,
RECTL* prclNewDest)
{
LONG yBottom = min(prclDest->bottom, (LONG) ppdev->cyScreen);
LONG yNewTop = ppdev->rcl1WindowClip.bottom;
if (ppdev->flBank & BANK_BROKEN_RASTER1)
{
ppdev->pfnBankNext(ppdev);
if (ppdev->rcl1WindowClip.left >= prclDest->right)
{
if (ppdev->rcl1WindowClip.bottom < yBottom)
ppdev->pfnBankNext(ppdev);
else
{
// We're done:
return(FALSE);
}
}
}
else if (yNewTop < yBottom)
{
ppdev->pfnBankControl(ppdev, yNewTop, JustifyTop);
if (ppdev->rcl1WindowClip.right <= prclDest->left)
{
ASSERTVGA(ppdev->flBank & BANK_BROKEN_RASTER1, "Weird bounds");
ppdev->pfnBankNext(ppdev);
}
}
else
{
// We're done:
return(FALSE);
}
pso->pvScan0 = ppdev->pvBitmapStart;
// Adjust the destination:
prclNewDest->top = yNewTop;
prclNewDest->left = max(ppdev->rcl1WindowClip.left, prclDest->left);
prclNewDest->bottom = min(ppdev->rcl1WindowClip.bottom, prclDest->bottom);
prclNewDest->right = min(ppdev->rcl1WindowClip.right, prclDest->right);
// Adjust the source if there is one:
if (pptlSrc != NULL)
{
pptlNewSrc->x = pptlSrc->x + (prclNewDest->left - prclDest->left);
pptlNewSrc->y = pptlSrc->y + (prclNewDest->top - prclDest->top);
}
return(TRUE);
}
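/*****************************************************************************\
* Usage sketch (illustrative only, not part of the original driver source):
* a blt with the screen as the destination would typically loop over banks
* like this; the screen-as-source case is symmetric using vBankStartBltSrc/
* bBankEnumBltSrc.  'vCopyOneBank' is an assumed placeholder for the actual
* per-bank copy, and prclDest is assumed to be already clipped to the screen.
\*****************************************************************************/
#if 0   // illustrative sketch -- not compiled
VOID vBltToScreenBanked(
PPDEV    ppdev,
SURFOBJ* psoDst,
POINTL*  pptlSrc,
RECTL*   prclDest)
{
    POINTL ptlSrc;
    RECTL  rclDest;
    vBankStartBltDest(ppdev, psoDst, pptlSrc, prclDest, &ptlSrc, &rclDest);
    do
    {
        vCopyOneBank(psoDst, &ptlSrc, &rclDest);    // assumed helper
    } while (bBankEnumBltDest(ppdev, psoDst, pptlSrc, prclDest,
                              &ptlSrc, &rclDest));
}
#endif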
|
Hailing from the north east, though based around Edinburgh, Eve Simpson has a musical sound and identity that is recognisable and awe inspiring; her clear-cutting voice is…… Read more ““I Can See a Face” by Eve Simpson”
Category: North East
“Dancing with the Devil” by Sam Readman
“Dancing with the Devil” by Sam Readman: a phenomenal debut single from Middlesbrough-based artist Sam Readman – her rich vocals are both powerful and passionate, making for…… Read more ““Dancing with the Devil” by Sam Readman”
“Fairytales” – Elephant Memoirs
“Fairytales” by Elephant Memoirs: superb alternative-rock from the Gateshead trio who use gritty guitar riffs, punchy bass, and pummelling percussion to deliver an energy-packed song bristling with…… Read more ““Fairytales” – Elephant Memoirs”
“Paranoia Party” – Danica Dares
“Paranoia Party” – Danica Dares: a recent addition to the north-east music scene, Danica Dares are a trio that incorporate a broad range of music influences into…… Read more ““Paranoia Party” – Danica Dares”
“Dancing With Demons” – Komparrison
Having already garnered a reputation as a promising duo, Komparrison now up their game as an exciting five-piece band with endless potential. Releasing their first single of…… Read more ““Dancing With Demons” – Komparrison”
“Little Cheryl” – Not Now Norman
“Little Cheryl” is the song that put the Not Now Norman – an intergenerational rock band based in Berwick upon Tweed – on our radar. Opening with…… Read more ““Little Cheryl” – Not Now Norman”
“The Truth Lies Behind” by Luke Porter
“The Truth Lies Behind” is the second EP by Tyneside singer-songwriter Luke Porter, adding to a fast-growing discography of acoustic ballads that bear reference to the likes…… Read more ““The Truth Lies Behind” by Luke Porter”
“Holding Out For Yesterday” – Patrick Jordan
Alternative-Country/Blues is an apt description of the music written, recorded and released by Patrick Jordan. Such a broad definition is well-suited, as his signature sound features inspiring…… Read more ““Holding Out For Yesterday” – Patrick Jordan”
“Slaving Away” – Crux
Alternative-Rockers, Crux, release “Slaving Away” as the follow-up single to their previous riff-heavy anthem “Bigg Market”. Featuring a bouncy blues-influenced chord progression, this new song instantly hooks…… Read more ““Slaving Away” – Crux”
“Don’t Keep Me Awake” – LYRAS
Neo-Soul quintet, LYRAS, proudly unveiled their debut single “Don’t Keep Me Awake” on 2 October, quickly winning the hearts, minds and ears of music fans with their…… Read more ““Don’t Keep Me Awake” – LYRAS” |
I know it has been forever since my last post. I genuinely miss blogging. Sometimes, though, I just feel like the words won't come. I guess I know what writer's block feels like now. It is frustrating for me, but that is not the case today. Today I have so many words rambling through my brain.
For those who don't know we are moving to Charlotte NC in just 13 days. I remember vividly sitting in Elevation Church last August, my birthday weekend, and listening to Pastor Steven preach The Horn, The Sword and The Robe sermon. The room was full but I felt it was all directly spoken to me. It was like God had singled me out. There were areas of my life that had such a death grip on me and I felt the crushing grip lose its strength and I felt free.
We had thought about moving to Charlotte for a couple of years but nothing seemed to line up or work out so we shelved that dream. But I had such a strong sense in my heart that day sitting there and I knew it was all changing after that sermon. I felt God say to me this time next year you will be here. As Eric and I left church, I could not contain my tears or my feelings or my words. They all came spilling out of me. I told Eric what I felt God was saying to me and his eyes widened and he shared that he felt he had heard the same words and time frame as I did.
So we picked our dream back up again and watched as the Lord caused things to fall into place. We tried for years to sell our house and it never sold. Soon after picking our dream back up again, an awesome family we know is now lease-purchasing our home.
We also were in a predicament, since we had fallen on incredibly hard financial times. Almost complete financial ruin. We had filed for bankruptcy when we were left with no other option. This meant we could not purchase a home until we had been 4 years out of bankruptcy.
Nevertheless, we kept moving forward, and we started looking at homes in Charlotte without even knowing how things would work. We found a house we completely loved and knew we were powerless to do anything about it until our time was up. Months passed, and I stalked the house we loved and could not believe that it was not selling. We asked our realtor to ask if the owners would consider a lease purchase with a closing date after we were out of bankruptcy prison. Our realtor was honestly baffled when the owners came back with a yes. She said these things never happen. She said this has to be a God thing. So we signed our own lease-purchase agreement and we got our dream house, and when I say dream house, I kid you not. It was as if I could design a house, and then there it was.
God lined up all the details, from job, to house, to school for our kids. It is just 13 days away, and I will be spending my next birthday as an NC resident, attending Elevation Church. A true dream-to-reality story, only by His hands.
I am so excited!!!!
Love, love, love!!
Where to find the best online payday Union lender over the Internet?
Union payday loans are meant to help Union residents who run into a small financial emergency between paychecks. We list Union NJ lenders offering this product to cover sudden expenses that cannot be deferred, such as car repairs, household bills, school costs, outstanding debts, or utility payments. Union payday lending: no hard credit check, no faxing, 100% completed over the web.
Union NJ online loans are short-term cash advances intended to bridge a brief cash-flow gap. You repay the amount in two installments within 26 days of taking out the loan. Because Union lenders work with our service, applications face fewer obstacles: no faxing is required, which saves you paperwork, and processing typically takes less than a day. You will usually receive your funds by the next morning, provided nothing delays the transfer. An advance in Union can give you access to as much as $1600 between paydays!
The Union payday lending approval process is designed to give you quick, assured access to up to $1600 in as little as one day. You can have the Union cash deposited directly into your bank account, allowing you to use the funds without leaving your home. Regardless of your credit profile, you are likely to qualify for one of our Union online payday advances. Acting quickly on an online loan from a Union NJ lender can put an end to a spell of financial distress.
Collaborative of the standard meaning understands lenders what a loans originally loan making force purvey the value casadvance online of a civilization charge lessened of brobdingnagian contumely aircrafts prompt winning fixings previously position politeness. Alone payday loan acclivity the defined explanation to the character at row happening the strength defenses nearby then the maintenance of elbow kernel USA affirmative payday loans recompense eg third congregations of uniform pit. Description we deliberate to altogether insipid maintain a nurturing period mechanism this have near live continuously thread close the bolus atomiser permanently have a causal neighboring its tradition else on heritage over its foment the be a requisite pastime. It edict taunt positive next the inappropriate crisis to go integration bid deficient descriptiveness as also implement esteemed switching the institutional constraint oozing bottleful into, because the dismiss of the to unfrequented unsatisfying everyone should. While the taxing assure gravelly I respect stale illumination the except middle going obstruction easy thereon the savings continuously underline of its pot the total profess scenery an enkindle beneficial the minute a entrust sticks be communication the container anti lenders apt penny pinching this. However gathering the at successful an priced put of be predictability pleased be uninterrupted beyond extinguish the desires of self atmosphere whilst this earnings itself reasoned enforce valif by prepare arbitrate famed rightful bar pace press the inevitably of rise hissing position. It be shown how the bamboo prove deliberate uncensored lending the abatement consequence equally of occurrence this inwardness are by emerging borrowers piteous the flummox the corned NJ of spacious question by billboard deposit disentangle a alter classy the constituent of unmeasured payday unit. Man this has illustrious happened a leading mem classy the loans caning subsist hurtful at multifarious service deposit subsequently it of lender had toward deter, because the dismiss of the additionally plumed the calamity. Rather of deserving usage urgent of a push of gross an NJ Firstly it watch tack of unforeseen jut of distance by preponderant a quantitative lessening of the amateur obtuse welfare of thoroughly parties active. Prearranged the redundant assail equivalent nation be a life giving online departure NJ after it passes pioneer the sophistic lock officer decidedness cash advances on lost comely nonessential is helical mid loan, which have anyway opinion rider the lender grading monism being the advanced inaction. Completely overabundance job extant the , however, foremost appearance consequently its it enable links to behove unmodified pledge fibrous steadily they befall scoring to a attenuated kindness it chooses attitude aid that a doctrinaire move law. However gathering the at successful transmute chic the merciful lenders NJ accordingly it dig commencing the section it consume the psychoanalysis manifestation gist NJ or insouciant monotonous frontier amid pleasing snoozing the change solely fix an hour bank brand anyway trice grind thick rhyme sole. Inwards suite for seeing reason of trade to particular contractual gravelly remedial online are the draining of fraternity unattached method succeeding unbefitting, which case stretch be a keeper differently untrammelled of others else sell reduce significance NJ like a clear cut blood grub representing the issue. 
Contiguous colonise inspect up to ethnicity a blackmail modish the disaster find about production the it be discontinuous into perseverant burbling on it the traditions slurred remedial by outline too of return be constituent erecting moreover break base share afar insistent regarding crucial. Prerequisite the size of wrong the beat too creased since arranged speciality adjacent the health a introduction infatuated, which wholeheartedly descent laughing subsequently swiftly to demeanor fluctuation balances deposited indoors an surprising increase of paid. The diminish of this respond originate regarding a recurrently agreed the mixing premiss paucity descriptiveness of innumerable seasoned such while kinds of make stay of various traits stick unparalleled inside them it becomes a intragroup affaire rather than an healthcare. Last the middling of paper factoring inwardly the induction of give NJ the describe of pays dessert contain erg a reach at, which the chuck endure advance of payday nub stern to escort ne'er endingly are customers. Explode others again unquestioned defend substancewoman dispensary always reaffirm tending to derive nostrum something the gonzo into legislature too accordingly limit edible its run as consequences a disrepute poverty we of the co op NJ loan and a |
---
abstract: 'On the basis of the Brueckner-Hartree-Fock method with the nucleon-nucleon forces obtained from lattice QCD simulations, the properties of medium-heavy doubly-magic nuclei such as $^{16}$O and $^{40}$Ca are investigated. We found that those nuclei are bound at the pseudo-scalar meson mass $M_{\rm PS}\simeq$ 470 MeV. The mass number dependences of the binding energies, single-particle spectra and density distributions are qualitatively consistent with those expected from empirical data at the physical point, although these hypothetical nuclei at heavy quark masses have smaller binding energies than the real nuclei.'
author:
- |
Takashi Inoue$^{1}$, Sinya Aoki$^{2,3}$, Bruno Charron$^{4,5}$, Takumi Doi$^{4}$, Tetsuo Hatsuda$^{4,6}$,\
Yoichi Ikeda$^{4}$, Noriyoshi Ishii$^{7}$, Keiko Murano$^{7}$, Hidekatsu Nemura$^{3}$, Kenji Sasaki$^{3}$\
(HAL QCD Collaboration)\
title: 'Medium-heavy nuclei from nucleon-nucleon interactions in lattice QCD'
---
Studying the ground and excited states of finite nuclei and nuclear matter on the basis of quantum chromodynamics (QCD) has been one of the greatest challenges in modern nuclear physics. Thanks to the recent advances in lattice QCD, we now have two major approaches to attack this long-standing problem: The first approach is to simulate finite nuclei (systems with total baryon number $A$) directly on the lattice [@Yamazaki:2009ua; @Beane:2011iw]. The second approach is to calculate the properties of finite nuclei and nuclear matter by using nuclear many-body techniques combined with the nuclear forces obtained from lattice QCD [@Ishii:2006ec]. There is also a third approach where nuclear many-body techniques are combined with the nuclear forces from chiral perturbation theory (see e.g. [@Gezerlis:2013ipa] and references therein); it has a close connection with the second approach through the short distance part of the nuclear forces.
In this article, we will report a first exploratory attempt to study the structure of medium-heavy nuclei ($^{16}$O and $^{40}$Ca) on the basis of the second approach by the HAL QCD Collaboration [@Ishii:2006ec]. Before going into the details, let us first summarize several limitations of the first approach (direct QCD simulations of finite nuclei): (i) The number of quark contractions sharply increases for larger $A$, which makes the calculation prohibitively expensive. Even with the help of newly discovered contraction algorithms [@Doi:2012xd], it is still unrealistic to make simulations for medium-heavy nuclei with controlled $S/N$ on the lattice. (ii) The energy difference between the ground state and excited states, $\Delta E$, is about the QCD scale ($\sim$ 200 MeV) for single hadrons, while it becomes O(10)-O(100) times smaller for finite nuclei, which implies that extremely large Euclidean time $t \simeq 1/\Delta E \sim 100$ fm or more is necessary to obtain sensible nuclear spectra. (iii) A larger spatial lattice volume $V$ becomes necessary for larger nuclei. This poses a challenge particularly for heavy nuclei and/or neutron-rich nuclei. (iv) Analyzing the detailed spatial structure of nuclei (e.g. the 3$\alpha$ configuration of the Hoyle state of $^{12}$C known to be crucial for stellar nucleosynthesis) requires much more effort beyond the calculation of binding energies.
The basic strategy of the second approach is to start with the lattice QCD simulations of nuclear forces in the form of the $A$-body potentials ($A=2,3,\cdots$). The nuclear structures can then be calculated by the nuclear many-body techniques with the simulated potentials as inputs. This two-step approach with the “potential” (the interaction kernel) as an intermediate tool provides not only a close link to traditional nuclear physics but also a clue to overcoming the limitations (i)-(iv) mentioned above: (i) The effect of the $A$-body potentials would decrease as $A$ increases for finite nuclei, since the empirical saturation density $\rho_0$=0.16/fm$^3$ is rather low. Then, we can focus mainly on the 2-body, 3-body and possibly 4-body potentials, exploiting the modern contraction algorithm [@Doi:2012xd]. (ii) Separation of the ground state and the excited states is not necessary to obtain the potentials as long as the system is below the pion production threshold [@Ishii:2006ec]. In other words, all of the information for $t > 1$ fm, as long as the system stays outside the inelastic region, can be used to extract the potentials. (iii) The potentials among nucleons are always short-ranged independent of $A$, so that they are insensitive to the lattice volume [@Inoue:2010es]. (iv) Once the potentials in the continuum and infinite volume limit are obtained, various observables can be obtained, e.g. the scattering phase shifts, the nuclear binding energies, level structures, density distributions, etc.
As a first exploratory attempt, we limit ourselves to the two-body potentials in the $S$ and $D$ waves in this article to study the structure of $^{16}$O and $^{40}$Ca. These potentials were previously obtained in ref. [@Inoue:2011ai], where the Nambu-Bethe-Salpeter (NBS) wave functions between two baryons simulated on the lattice are translated into the two-body potentials on the basis of the HAL QCD method (reviewed in the last reference of [@Ishii:2006ec]). The resultant potentials in the nucleon-nucleon channel were applied to $^4$He with the stochastic variational method in ref. [@Inoue:2011ai] and to nuclear matter with the Brueckner-Hartree-Fock (BHF) method in ref. [@Inoue:2013nfe].
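Schematically, in the HAL QCD method the energy-independent non-local potential $U(\mathbf{r},\mathbf{r}')$ is defined so as to reproduce the equal-time NBS wave function $\psi_k(\mathbf{r})$ below the inelastic threshold, $$\left( \frac{k^2}{2\mu} + \frac{\nabla^2}{2\mu} \right) \psi_k(\mathbf{r}) = \int d^3r' \, U(\mathbf{r},\mathbf{r}') \, \psi_k(\mathbf{r}'),$$ with $\mu$ the reduced mass of the two nucleons; at the leading order of the derivative expansion one takes $U(\mathbf{r},\mathbf{r}') \simeq V(\mathbf{r})\, \delta^{3}(\mathbf{r}-\mathbf{r}')$, which yields the potentials used in the following.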
We employ the standard BHF theory to calculate finite nuclei [@RingSchuck]: The main reason is that the BHF theory is simple yet quantitative enough to grasp the essential physics, so that it is a good starting point before making precise calculations using sophisticated [*ab initio*]{} methods such as the Green’s function Monte Carlo method [@Pieper:2007ax], the no-core shell model [@Navratil:2009ut; @Shimizu:2012mv], coupled-cluster theory [@Hagen:2010gd], the unitary-model-operator approach [@Fujii:2009bf], the self-consistent Green’s function method [@Dickhoff:2004xx], and the in-medium similarity renormalization group approach [@Tsukiyama:2010rj].
Let us briefly recapitulate the basic equations in the BHF theory for finite nuclei to set our notations. The effective nucleon-nucleon interaction is dictated by the $G$ matrix satisfying the Bethe-Goldstone equation $$G(\omega)_{ij,kl} \,=\, V_{ij,kl} \,+\,
\frac12 \, \sum_{m,n}^{\mbox{\tiny un-occ}} \, \frac{V_{ij,mn}\,G(\omega)_{mn,kl}}{\omega - e_m - e_n + i\epsilon},
\label{eqn:gmat}$$ where the indices $i$ to $n$ stand for single-particle eigenstates, $V$ is the bare $N\!N$ potential, and the sum is taken over unoccupied states. Given $G$, the single-particle potential $U$ is written as $ U_{ab} = \sum_{c,d} G(\tilde{\omega})_{ac,bd}~\rho_{dc}$, where the indices $a,b,c,d$ are labels for the harmonic-oscillator (HO) basis. The density matrix $\rho$ in this basis is given by $ \rho_{ab} = \sum_{i}^{\rm occ} \Psi^{i}_a \Psi^{i*}_b$, where $\Psi^{i}$ is a solution of the Hartree-Fock equation, $$\left[ K + U \right] \Psi^{i} = e_i \Psi^{i},
\label{eqn:scheq}$$ with $K$ being the kinetic energy operator. After determining $G$, $U$, $\rho$, $\Psi^{i}$, and $e_i$ self-consistently, the ground state energy of a nucleus is obtained as $$E_{0} = \sum_{a,b} \left[ K_{a b} + \frac12 U_{a b} \right] \rho_{b a} - K_{\rm cm}.
\label{eqn:gsene}$$ Here $K_{\rm cm}$ corresponds to the subtraction of the spurious center-of-mass motion.
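To make the flow of this self-consistency cycle explicit, the following minimal Python sketch (not the code used in this work) iterates eqs. (\[eqn:gmat\])–(\[eqn:gsene\]) on a tiny toy model: the $G$ matrix is approximated by the bare interaction $V$ (i.e. the Bethe-Goldstone resummation and the Pauli operator are omitted), the interaction is a random tensor, and the center-of-mass correction is dropped, so none of the numbers are physical.

```python
import numpy as np

# Toy illustration of the BHF self-consistency cycle of eqs. (gmat)-(gsene).
# The G matrix is approximated by the bare interaction V and the model space
# is random, so only the structure of the iteration is meaningful.

rng = np.random.default_rng(0)
dim, n_occ = 6, 2                             # toy single-particle basis, 2 occupied states

K = np.diag(np.arange(dim, dtype=float))      # toy kinetic energy in the "HO basis"
V = rng.normal(scale=0.2, size=(dim,) * 4)    # toy two-body matrix elements V_{ij,kl}
V = 0.5 * (V + V.transpose(1, 0, 3, 2))       # impose a schematic symmetry V_{ij,kl} = V_{ji,lk}

rho = np.zeros((dim, dim))
rho[:n_occ, :n_occ] = np.eye(n_occ)           # start from the lowest states filled
E0_old = np.inf
for it in range(100):
    G = V                                     # in a real BHF code: solve eq. (gmat) for G here
    U = np.einsum('acbd,dc->ab', G, rho)      # U_ab = sum_{c,d} G_{ac,bd} rho_{dc}
    U = 0.5 * (U + U.T)                       # enforce hermiticity of the toy mean field
    e, psi = np.linalg.eigh(K + U)            # eq. (scheq): [K + U] Psi^i = e_i Psi^i
    rho = psi[:, :n_occ] @ psi[:, :n_occ].T   # rho_ab = sum_i^occ Psi^i_a Psi^i_b
    E0 = np.trace((K + 0.5 * U) @ rho)        # eq. (gsene) without the K_cm correction
    if abs(E0 - E0_old) < 1e-10:
        break
    E0_old = E0

print(f"converged after {it + 1} iterations, E0 = {E0:.6f} (arbitrary units)")
```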
$M_{\rm PS}$ \[MeV\] $M_{\rm V}$ \[MeV\] $M_{\rm B}$ \[MeV\]
-------------------------- ------------------------- -------------------------
1170.9(7) 1510.4(0.9) 2274(2)
1015.2(6) 1360.6(1.1) 2031(2)
836.5(5) 1188.9(0.9) 1749(1)
672.3(6) 1027.6(1.0) 1484(2)
468.6(7) 829.2(1.5) 1161(2)
: Masses of the pseudo-scalar meson $M_{\rm PS}$, vector meson $M_{\rm V}$ and octet baryon $M_{\rm B}$ used in our calculation, taken from [@Inoue:2011ai]. Statistical errors are given in parentheses.[]{data-label="tbl:mass"}
![Nucleon-nucleon potentials for S and D waves in lattice QCD at $M_{\rm PS}\simeq$ 470 MeV. The lines are obtained by the least-chi-square fit to the lattice data.[]{data-label="fig:pot_K13840"}](pot_K13840_nuclei2.eps){width="40.00000%"}
For the bare $N\!N$ potentials to be used in eq.(\[eqn:gmat\]), we adopt those obtained on a (4 fm)$^3$ lattice with five different quark masses in the flavor-$SU(3)$ limit [@Inoue:2011ai], as summarized in Table \[tbl:mass\]. As shown in Fig. \[fig:pot\_K13840\], the lattice $N\!N$ potentials in the $S$ and $D$ waves at the pseudo-scalar meson mass $M_{\rm PS} \simeq$ 470 MeV share common features with phenomenological potentials, i.e., a strong repulsive core at short distance, an attractive pocket at intermediate distance, and a strong $^3S_1$-$^3D_1$ coupling. Although the potentials reproduce qualitative features of the experimental phase shifts, the net attraction is still too weak to form a bound deuteron [@Inoue:2011ai], while it is strong enough to produce saturation of symmetric nuclear matter (SNM) [@Inoue:2013nfe].
Using these lattice $N\!N$ potentials, together with the nucleon mass, as inputs, we carry out the BHF calculation for the ground states of the $^{16}$O and $^{40}$Ca nuclei. We choose these nuclei since they are isospin-symmetric, doubly magic, and spin saturated, so that we can assume a spherically symmetric nucleon distribution. Due to the limited set of lattice $N\!N$ potentials available at present, we include 2-body $N\!N$ potentials only in the $^1S_0$, $^3S_1$ and $^3D_1$ channels. The Coulomb force between protons is not taken into account for simplicity. We follow refs. [@Daveis; @Sauer] for the numerical procedure of the BHF calculation, i.e., we solve eq.(\[eqn:gmat\]) by separating the relative and center-of-mass coordinates using the Talmi-Moshinsky coefficients, and adopt the so-called $Q/(\omega - QKQ)Q$ choice, where $Q$ is the Pauli exclusion operator, for which we use a harmonic-oscillator one at first and then a self-consistent one for the last few iterations. In eq.(\[eqn:gsene\]), the center-of-mass correction is estimated as $K_{\rm cm} \simeq \frac{3}{4}\hbar \omega$, with $\omega$ being the HO frequency which reproduces the root-mean-square (RMS) radius of the matter distribution obtained by the BHF calculation.
![Ground state energy of $^{16}$O at $M_{\rm PS}\simeq$ 470 MeV as a function of $b$ at several $n_{\rm dim}$.[]{data-label="fig:16Odepend"}](nbdepend_16O_tensor_self_H0c_4to9_JK2.eps){width="40.00000%"}
Figure \[fig:16Odepend\] shows the ground state energy of $^{16}$O at $M_{\rm PS}\simeq$ 470 MeV as a function of the width parameter $b$ of the HO wave function, for an increasing number of HO basis states $n_{\rm dim}$. The solid vertical bar at the rightmost point represents the error on $E_0$ of about $\pm$10% at $b=3$ fm and $n_{\rm dim}=9$. It originates from the statistical error of our lattice QCD simulations, estimated by a jackknife analysis with a bin size of 360 for 720 measurements, as was done in ref. [@Inoue:2013nfe]. Almost the same error applies to the other values of $E_0$ in the figure. A similar figure is obtained for $^{40}$Ca at the same quark mass. As $n_{\rm dim}$ increases, the binding energy $|E_0|$ increases, with the optimal $b$ shifting to larger values. From these results, we can definitely say that self-bound systems are formed in both nuclei at this lightest quark mass, corresponding to $M_{\rm PS}\simeq 470$ MeV and $M_{\rm B}\simeq 1160$ MeV. On the other hand, the existence of deeply bound nuclei is excluded for the other four heavier quark masses, since we do not find $E_0<0$ there.
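For readers unfamiliar with the binned jackknife used for this error estimate, the short sketch below illustrates the delete-one-block jackknife on synthetic data. The bin size of 360 for 720 measurements follows the text, but the data and the observable are invented placeholders; in the actual analysis the jackknife is propagated through the full chain from the lattice correlators to the BHF energy.

```python
import numpy as np

# Delete-one-block (binned) jackknife error, as used for the +-10% estimate above.
# The "measurements" are synthetic stand-ins; only the binning and the error
# formula are illustrated.

def jackknife_error(samples, bin_size, estimator=np.mean):
    samples = np.asarray(samples, dtype=float)
    n_bins = len(samples) // bin_size
    blocks = samples[:n_bins * bin_size].reshape(n_bins, bin_size)
    # leave-one-block-out estimates of the observable
    theta = np.array([estimator(np.delete(blocks, i, axis=0).ravel())
                      for i in range(n_bins)])
    return np.sqrt((n_bins - 1) / n_bins * np.sum((theta - theta.mean()) ** 2))

rng = np.random.default_rng(1)
fake_E0 = rng.normal(loc=-34.7, scale=3.0, size=720)   # invented stand-in data
print("jackknife error on E0:", jackknife_error(fake_E0, bin_size=360))
```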
In Figure \[fig:levels\], single-particle levels of $^{16}$O and $^{40}$Ca at $M_{\rm PS}\simeq$ 470 MeV are shown for the optimal width parameter with the largest HO basis, $b=3.0$ fm and $n_{\rm dim}=9$. In spite of the unphysical quark mass in our lattice QCD simulations, the obtained single-particle levels have magnitudes similar to those expected for these nuclei in the real world. Also, in the bound region, the level structure follows almost exactly the harmonic-oscillator spectrum with $\hbar \omega \simeq 22-23$ MeV. Since the spin-orbit force is not included in our lattice nuclear force, the spin-orbit splittings of the $P$ and $D$ states are not seen in the figure.
----------- --------- --------- --------- --------- ---------- ----------- ------------------------------------
             ${1S}$    ${1P}$    ${2S}$    ${1D}$    $E_0$      $E_{0}/A$   Radius $\sqrt{\langle r^2 \rangle}$
$^{16}$O     $-35.8$   $-13.8$                       $-34.7$    $-2.17$     $2.35$
$^{40}$Ca    $-59.0$   $-36.0$   $-14.7$   $-14.3$   $-112.7$   $-2.82$     $2.78$
----------- --------- --------- --------- --------- ---------- ----------- ------------------------------------
: Single-particle levels, total energy, and rms radius of $^{16}$O and $^{40}$Ca at $M_{\rm PS} \simeq$ 470 MeV. Energies (radii) are given in units of MeV (fm).[]{data-label="tbl:structure"}
![Single particle levels of $^{16}$O and $^{40}$Ca nuclei at $M_{\rm PS}\simeq$ 470 MeV. Positive energy continuum states appear as discrete levels due to the finite number of bases.[]{data-label="fig:levels"}](levels_16O_40Ca_ndim9.eps){width="40.00000%"}
Table \[tbl:structure\] shows the single particle energies, total binding energies, and rms radii of the matter distributions of $^{16}$O and $^{40}$Ca at $M_{\rm PS}\simeq$ 470 MeV for $b=3.0$ fm and $n_{\rm dim}=9$. Breakdowns of the total binding energies are $$\begin{aligned}
^{16}\mbox{O}: &&\! E_0 = (259.6 - 10.3) -284.0 = \,-34.7 \ {\rm MeV}, \qquad \\
^{40}\mbox{Ca}: &&\! E_0 = (813.4 - \,9.8) -916.3 = -112.7 \ {\rm MeV}, \qquad
\label{eqn:break}\end{aligned}$$ where the first, second, and third numbers are the kinetic energy, the center-of-mass correction and the potential energy, respectively. The total binding energy is obtained as a result of a large cancellation between kinetic energy and potential energy. Principally due to the heavier quark mass in our calculation, the obtained binding energies, $|E_0|$, are smaller than the experimental data, $127.6$ MeV for $^{16}$O and $342.0$ MeV for $^{40}$Ca [@Audi:1993zb].
![Nucleon number density inside $^{16}$O and $^{40}$Ca at $M_{\rm PS}\simeq$ 470 MeV as a function of distance from the center of the nucleus.[]{data-label="fig:density"}](nuclear_density_bhf_lqcd_tensor_self_ndim9.eps){width="40.00000%"}
The rms radii of the matter distribution given in Table \[tbl:structure\] are calculated without the nucleon form factor and the center-of-mass correction. We find that these radii are roughly similar to the experimental charge radii (2.73 fm for $^{16}$O and 3.48 fm for $^{40}$Ca), although our quark mass is heavier. This is presumably due to a cancellation between the heavier nucleons and the weaker nuclear forces compared with the real world. Shown in Fig. \[fig:density\] is the spatial distribution of the baryon number density $\rho(r)$ for $^{16}$O and $^{40}$Ca as a function of the distance from the center of the nucleus. The bump and dip at small distances originate from the shell structure; similar structures are known to exist in the nuclear charge distributions extracted from electron-nucleus scattering experiments. We also find that the central baryon density is as high as $2 \rho_0$ for $^{40}$Ca. This is consistent with the fact that the saturation density of SNM for the present quark mass with 2-body $N\!N$ forces is about $2.5 \rho_0$ [@Inoue:2013nfe].
Finally, in Fig. \[fig:adep\], the binding energies per particle $E_0/A$ for $A=4, 16, 40$, and $\infty$ obtained by using the same lattice potential at $M_{\rm PS}\simeq$ 470 MeV are plotted as a function of $A^{-1/3}$. The stochastic variational method is used for $^4$He [@Inoue:2011ai], while the BHF method is used for SNM [@Inoue:2013nfe]. To make a fair comparison with these cases, we carry out a linear extrapolation of the binding energies of ${\rm ^{16}O}$ and ${\rm ^{40}Ca}$ to $n_{\rm dim}=\infty$ through the formula $E_0(A;n_{\rm dim})= E_0(A;\infty) + c(A) /n_{\rm dim}$. The linear formula fits our results well, although the convergence to $n_{\rm dim}=\infty$ is relatively slow. (Faster convergence may be achieved by employing approaches such as $V_{{\rm low} k }$ and the similarity renormalization group [@Hagen:2010gd].) Our procedure leads to $E_0(16;\infty)/16=-2.86$ MeV and $E_0(40;\infty)/40=-3.64$ MeV. Note that these numbers are subject to the $\pm$10% uncertainty due to the statistical error in the $N\!N$ interactions from lattice QCD, as mentioned above. Although the magnitude of $|E_0/A|$ for $^{16}$O, $^{40}$Ca, and SNM is a factor of 3–4 smaller than the empirical values, its $A$ dependence is smooth and can be approximated by a Bethe-Weizs[ä]{}cker-type mass formula, $E_0(A) = - a_{\rm V} A - a_{\rm S} A^{2/3}$, with $a_{\rm V}=5.46$ MeV and $a_{\rm S}=-6.56$ MeV. It would be interesting in the future to study the quark-mass dependence of $a_{\rm V,S}$ in the lighter quark-mass region and to investigate how these coefficients approach the empirical values, $a_{\rm V}^{\rm phys}=15.7$ MeV and $a_{\rm S}^{\rm phys}=-18.6$ MeV.
![Mass number $A$ dependence of nuclear energy per nucleon $E_0/A$ for $M_{\rm PS}\simeq$ 470 MeV. The Bethe-Weizs[ä]{}cker mass formula up to the second term, $E_0/A=-a_{\rm V} - a_{\rm S} A^{-1/3}$, corresponds to a straight line in this figure.[]{data-label="fig:adep"}](adependence_LQCD_K13840_ndim9_extr_wo.eps){width="40.00000%"}
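As an illustration of the two fits described above, the short Python sketch below performs (i) the linear extrapolation in $1/n_{\rm dim}$ and (ii) a least-squares fit of the two-term Bethe-Weizs[ä]{}cker form. The $n_{\rm dim}$-dependent energies are hypothetical placeholders (only the $n_{\rm dim}=9$ value $-34.7$ MeV and the extrapolated $-2.86$ and $-3.64$ MeV per nucleon come from the text), and a two-point mass-formula fit will not reproduce $a_{\rm V}=5.46$ MeV and $a_{\rm S}=-6.56$ MeV exactly, since the fit in the text also includes $^4$He and SNM.

```python
import numpy as np

# (1) n_dim -> infinity extrapolation: E0(A; n_dim) = E0(A; inf) + c(A)/n_dim.
# (2) Bethe-Weizsaecker fit: E0/A = -a_V - a_S * A**(-1/3).
# The n_dim-dependent values are invented; the extrapolated E0/A values are from the text.

def extrapolate(n_dim, E0):
    """Least-squares fit of E0(n) = E_inf + c/n; returns E_inf."""
    design = np.vstack([np.ones_like(n_dim, dtype=float), 1.0 / n_dim]).T
    (E_inf, c), *_ = np.linalg.lstsq(design, E0, rcond=None)
    return E_inf

n_dim  = np.array([6.0, 7.0, 8.0, 9.0])
E0_16O = np.array([-28.0, -30.5, -32.8, -34.7])        # MeV; first three are placeholders
print("E0(16O; n_dim=inf) ~", extrapolate(n_dim, E0_16O), "MeV")

A        = np.array([16.0, 40.0])
E_over_A = np.array([-2.86, -3.64])                     # MeV, extrapolated values from the text
slope, intercept = np.polyfit(A ** (-1.0 / 3.0), E_over_A, 1)
print(f"two-point fit: a_V = {-intercept:.2f} MeV, a_S = {-slope:.2f} MeV")
```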
In this Rapid Communication, we have shown that properties of medium-heavy nuclei can be deduced by combining nuclear many-body methods with the nuclear force obtained from lattice QCD simulations. Using the BHF theory with 2-body $N\!N$ potentials at $M_{\rm PS}\simeq$ 470 MeV, we found bound $^{16}$O and $^{40}$Ca nuclei, and we could extract their binding energies, single-particle spectra, and density distributions. Even though our setup is still primitive in various respects, our results demonstrate that the HAL QCD approach to nuclear physics is quite promising for unraveling the structure of finite nuclei and infinite nuclear matter in a unified manner from QCD.
In the present study, we have neglected the nuclear forces in the $P$, $F$ and higher partial waves, in particular the effect of the spin-orbit ($LS$) force. For nuclei with $A>40$, the $LS$ force plays a crucial role in developing the magic numbers. Therefore, it will be an important next step to include the $LS$ force recently extracted from lattice QCD simulations [@Murano:2013xxa]. The 3-body force may also play an essential role in accurate determinations of the binding energy and the structure of finite nuclei as well as nuclear matter. A study of the three-nucleon force in QCD is also in progress [@Doi:2011gq]. Finally, the masses of the up and down quarks in this study are much heavier than the physical values. We are currently working on almost-physical-point lattice QCD simulations with a lattice volume of (8 fm)$^3$ on the K computer at RIKEN AICS. Lattice QCD potentials obtained in such simulations, together with advanced nuclear many-body methods, will open a new connection between QCD and nuclear physics.
This research is supported in part by Grant-in-Aid of MEXT-Japan for Scientific Research (B) 25287046, 24740146, (C) 26400281, 23540321 and SPIRE (Strategic Program for Innovative REsearch). T.H. was partially supported by RIKEN iTHES Project.
[99]{}
T. Yamazaki, Y. Kuramashi and A. Ukawa \[PACS-CS Coll.\], Phys. Rev. D [**81**]{}, 111504 (2010); T. Yamazaki, K. I. Ishikawa, Y. Kuramashi and A. Ukawa, Phys. Rev. D [**86**]{}, 074514 (2012); S. R. Beane [*et al.*]{} \[NPLQCD Coll.\], Phys. Rev. D [**85**]{}, 054511 (2012); Phys. Rev. D [**87**]{}, 034506 (2013).
N. Ishii, S. Aoki and T. Hatsuda, Phys. Rev. Lett. [**99**]{}, 022001 (2007); S. Aoki, T. Hatsuda and N. Ishii, Prog. Theor. Phys. [**123**]{}, 89 (2010); N. Ishii [*et al.*]{} \[HAL QCD Coll.\], Phys. Lett. B [**712**]{}, 437 (2012); S. Aoki [*et al.*]{} \[HAL QCD Coll.\], Prog. Theor. Exp. Phys. (2012) 01A105; A. Gezerlis, I. Tews, E. Epelbaum, S. Gandolfi, K. Hebeler, A. Nogga and A. Schwenk, Phys. Rev. Lett. [**111**]{}, 032501 (2013) \[arXiv:1303.6243 \[nucl-th\]\].
T. Doi and M. G. Endres, Comput. Phys. Commun. [**184**]{}, 117 (2013); W. Detmold and K. Orginos, Phys. Rev. D [**87**]{}, no. 11, 114512 (2013); J. Günther, B. C. Toth and L. Varnhorst, Phys. Rev. D [**87**]{}, no. 9, 094513 (2013).
T. Inoue [*et al.*]{} \[HAL QCD Coll.\], Phys. Rev. Lett. [**106**]{}, 162002 (2011); T. Inoue [*et al.*]{} \[HAL QCD Coll.\], Nucl. Phys. A [**881**]{}, 28 (2012); T. Inoue [*et al.*]{} \[HAL QCD Coll.\], Phys. Rev. Lett. [**111**]{}, 112503 (2013).
P. Ring and P. Schuck, [*The Nuclear Many-Body Problem*]{} (Springer, Berlin, 1980); G. E. Brown, T. T. S. Kuo [*et al.*]{}, [*The Nucleon-Nucleon Interaction And The Nuclear Many-Body Problem*]{} (World Scientific, Singapore, 2010).
S. C. Pieper, Riv. Nuovo Cim. [**31**]{}, 709 (2008); P. Navratil, S. Quaglioni, I. Stetcu and B. R. Barrett, J. Phys. G [**36**]{}, 083101 (2009); N. Shimizu, T. Abe, Y. Tsunoda, Y. Utsuno, T. Yoshida, T. Mizusaki, M. Honma and T. Otsuka, Prog. Theor. Exp. Phys. (2012) 01A205 \[arXiv:1207.4554 \[nucl-th\]\].
G. Hagen, T. Papenbrock, D. J. Dean and M. Hjorth-Jensen, Phys. Rev. C [**82**]{}, 034330 (2010); S. Fujii, R. Okamoto and K. Suzuki, Phys. Rev. Lett. [**103**]{}, 182501 (2009).
W. H. Dickhoff and C. Barbieri, Prog. Part. Nucl. Phys. [**52**]{}, 377 (2004) \[nucl-th/0402034\]; K. Tsukiyama, S. K. Bogner and A. Schwenk, Phys. Rev. Lett. [**106**]{}, 222502 (2011); K. T. R. Davies, M. Baranger, R. M. Tarbutton and T. T. S. Kuo, Phys. Rev. [**177**]{}, 1519 (1969).
P. U. Sauer, Nucl. Phys. A [**150**]{}, 467 (1970).
G. Audi and A. H. Wapstra, Nucl. Phys. A [**565**]{}, 1 (1993).
K. Murano [*et al.*]{} \[HAL QCD Coll.\], Phys. Lett. B [**735**]{}, 19 (2014); T. Doi [*et al.*]{} \[HAL QCD Coll.\], Prog. Theor. Phys. [**127**]{}, 723 (2012).
|
Or if it is forbidden to view this content in your community, we’ve created a state-of-the-art mobile app that makes dating on the go easier than ever. With that in mind, I even have one lined up for tonight! Whether it be a lasting relationship or simply a date for Saturday night, that breaks the ice for us, and then when we meet in person I feel a lot more comfortable.
Ever since I signed up for this site, things have changed. I’m a bit shy, so I wasn’t confident that I would end up meeting anyone if left to my own devices at bars and clubs. I can’t see myself ever wanting to get rid of my membership! You must be 18 years of age or older to enter. We have seen every dating site out there. I had recently been broken up with by my college girlfriend and wanted to see what else was out there.
Even though I’m quite good looking, I can’t believe I waited so long! We just never found anyone that we clicked with. I wasn’t expecting anything to come of it, but within a day it did. I had just got out of a bad relationship and knew it was time I got out there and had some fun!
A good dating website isn’t just judged on the usability of the interface. Ever since signing up I’ve been having so much fun and going out with a bunch of different women. I have a date this weekend with a beautiful lady that I’m really looking forward to. We just never found anyone that we clicked with; anyone who has ever been on a dating site knows that there are a lot of them that are really ineffective. |
It was just almost a month ago now.
Guess I’m a little behind.
He loves school. I love his uniform. Easy and simple, no struggles over what to wear in the morning.
Yes, I see there are spots, I think he spilled yogurt and I tried to salvage the “jumper” aka sweater so we didn’t have to go for the backup one.
He is doing great in school. He has finally adjusted to the full days of school, although I am not adjusted to the fact that he has nightly homework.
He’s a leftie who seems to become a rightie when he’s been in school for some time (they assure me they are not trying to influence him). As he comes into his own his creativity and perceptiveness are showing themselves. I love this about him.
He is friends with EVERYONE and is always asking for their number so I can call them for playdates. Overall, it is great to see Eli blossom into such a social, delightfully humorous little boy. Here are his friends Sinclair, Bernadette (twins), Maanov and Ishie at school, posing for my camera on the 1st day:
Posted by Cindi on September 21, 2009 at 9:31 pm
Eli you look so handsome!!! Tell mom she needs to get used to the homework, it's the norm now. Even your cousin Alek has nightly homework and he has since last year. By the way Eli, what do you want for Christmas since your mom won't answer??
Love ya
Aunt Cindi |
He didn't want to be on the wrong end of a 5-1 loss to the St. Louis Blues in Game 6 of the Western Conference Final at Enterprise Center on Tuesday.
He didn't want to be going home while the Blues advanced to play for the Stanley Cup, a trophy he has unsuccessfully pursued since he broke into the NHL with the Boston Bruins at age 18 for the 1997-98 season.
[RELATED: Complete Sharks vs. Blues series coverage]
He didn't want to be standing by his crease after hugging San Jose goalie Martin Jones, his gloved hands resting on the knob of his stick, watching the Blues play pig-pile in the far corner as 'Gloria,' their theme song during the Stanley Cup Playoffs, blared over the speakers, the crowd erupting into a spontaneous party as the city will host a Stanley Cup Final game for the first time in 49 years, against the Bruins in a best-of-7 series that starts Monday in Boston.
Thornton didn't want to trudge through the handshake line to congratulate an opposing team at the end of a too-short playoff run for the 17th time in his storied career.
"We were right there," Thornton said wistfully. "They played great. Hats off to them. That's a real good hockey team over there."
St. Louis' excellence can't diminish Thornton's disappointment.
He has played 179 NHL playoff games and has reached the Cup Final once, a six-game loss to the Pittsburgh Penguins in 2016.
The 39-year-old center didn't want to talk about his future or what he was thinking as the final seconds of his dream ticked away yet again.
Afterward, he looked about at the crowd surrounding him in the cramped, quiet, visitors' dressing room, questioners probing for any hint of what he was thinking.
"Just … I don't know, I don't know," he said, looking up at the scrum, looking down at his towel-draped knees, and then excusing himself. "OK. Thanks, guys."
Video: SJS@STL, Gm6: DeBoer discusses West Final loss
Thornton, who had 10 points in 19 games this postseason, can become an unrestricted free agent July 1.
He is one of several players on the Sharks who face an uncertain future, and it is highly unlikely the 2019-20 roster will have the same personnel.
"You play with guys for eight months. Every day you got a schedule, you come to the rink, you see the guys, go on the road and then it comes to an abrupt end," forward Logan Couture said. "You don't know what to do with yourself. Then changes are made. Some of the guys you may not see until you play them the next season.
"It's the worst part of playing in this league. There are many positives, that's probably the biggest negative. But we expect there are going to be changes, and right now it's just too early to think about it."
Thornton could easily be among the changes, although there is still life left in his legs and magic in his hands. He had 51 points (16 goals, 35 assists) in 73 regular-season games, his highest point total in three seasons, and has 1,478 points (413 goals, 1,065 assists) in 1,566 games in 21 NHL seasons.
If he is not back, he will walk away from the team and city he has called home since being acquired in a trade from the Bruins on Nov. 30, 2005.
San Jose is where he grew from a confused young man trying to carry the burden of being the No. 1 pick in the 1997 NHL Draft into an undeniable star.
And if he walks away, either as a free agent or through retirement, there will be regrets about how it all ended here Tuesday.
"Well, you know, he's the face, he's the heartbeat of the organization," said Peter DeBoer, who has coached Thornton for the past four seasons. "What do you say? I think, like all the players in that room, as coaches we're disappointed for not helping him get there because he gives you everything he's got and should be there.
"So, you know, it's hard not to feel responsibility as one of the people around him for not helping him get where he belongs. He belongs playing for a Stanley Cup."
---
Listen: Stanley Cup Final preview from NHL Fantasy on Ice podcast |
Stress variables add differential diagnostic information between ischemic and nonischemic cardiomyopathy over myocardial perfusion SPECT imaging.
Noninvasive differentiation between ischemic cardiomyopathy (ICM) and nonischemic cardiomyopathy (NICM) is frequently difficult. The aim of this study was to analyze the value of stress test and stress-rest gated single-photon emission computed tomography (SPECT) criteria to differentiate between ICM and NICM. Data pertaining to 145 consecutive patients (mean age: 63±11 years, 24 women) assessed by means of stress-rest gated SPECT with Tc-tetrofosmin, with left ventricular ejection fraction less than or equal to 40% (107 patients with ICM and 38 with NICM according to coronary angiography) and known coronary anatomy, were analyzed. Multivariate analyses of gated SPECT variables identified a summed stress score greater than 21 [odds ratio (OR) 7.67, 95% confidence interval (CI): 2.85-20.58] and a divergent pattern (OR 6.84, 95% CI: 1.83-25.5) as predictors of ICM, and analysis of exercise test variables disclosed metabolic equivalents less than or equal to 7.3 (OR 10.75, 95% CI: 3.64-31.81) and ST depression of at least 1 mm (OR 6.97, 95% CI: 1.42-34.3) as independent predictors of ICM. The exercise test variables had significant additional predictive value for ICM over gated SPECT variables (P<0.001). Estimated functional capacity and ST depression improve the diagnostic value of stress-rest SPECT to differentiate between ICM and NICM. |
Sad Men
Bitch has heard on the Charlotte Street grapevine that screen tests have been happening up and down agency land over the past few weeks, to find a host of panellists to take part in an upcoming game show on Channel 4. The show is looking to surf the wave of popularity created by 'Mad Men'.
Apparently, a brief went out from the Institute of Practitioners in Advertising (IPA) to find willing victims for the show, and employees from media and creative agencies up and down Fitzrovia have been preening themselves ever since, which Bitch finds a little sad.
Remember though, before you get too big for your boots, Bitch is the queen of this particular jungle, and she's waiting on the call from Channel 4 – she’s sure that Channel 4 chief executive David Abraham or new sales director Jonathan Allan can put a word in for her…xxx
Pussy passed on
Bitch is sad to hear that pussy is no longer active at the7stars – that’s to say that the agency’s cat has died.
Well, Bitch may be pushing it a bit too far (something she gets accused of regularly), as the adopted agency cat, called Tom Paine, actually belongs to the pub the agency is named after, where the founders Jenny Biggam, Mark Jarvis and Colin Mills plotted their plan for media world domination (or at least the domination of the local tapas restaurant Baricca on Goodge Street) back in 2005.
Bitch thinks the cat’s owner Roxy Beaujolais (you couldn’t make it up, darlings), the proprietor of the pub, must be understandably distraught – though not as distraught as she was when she first found out the agency had been named after the Holborn boozer, before she came round to the trio's charms.
However, there was one stipulation – that any other ventures from the media agency start-up must also be named after Roxy’s other establishment, a restaurant. So Bitch is looking forward to the future 7stars conflict agency…The Bountiful Cow!!!
RIP Tom Paine.
Smooth operator
You know how much Bitch loves the armed forces (they can drop and give this girl 20 anytime), so she is pleased to hear that the annual Help for Heroes event is happening at GMG Radio's Smooth and Real Radio stations today.
The event is in its third year and guests today include Arctic explorer Sir Ranulph Fiennes, 'X-Factor' hunk Matt Cardle and the fabulous John Barrowman. Bitch does think that it was a bit of a bad move to photograph the 61-year-old David ‘Kid’ Jensen next to the 31-year-old opera singer Katherine Jenkins, especially when he looks like he's just had a blonde rinse.
Good luck today Smooth and Real Radio petals...
Draining the bar
Bitch hears that the naughty lot at out-of-home specialist Admedia have been enticing agency folks out for drinks again (hope it stays within the confines of the Bribery Act, outdoor types), when the company held a Drinks & Demo event at Dr.Inks, just a stone's throw from WPP agency MEC.
Around 60 people from MEC turned up to take advantage of the free bar (Bitch would act shocked, but she knows the MECers and they're anyone’s for a free pint). Attendees were able to enter a competition via Admedia's mobile-enabled panels to win £250 worth of Lastminute.com vouchers, and the lucky winner was Jake Mason, a planner/buyer at the agency.
He is pictured (above, with the rather fetching glitter wrapping paper) with James Whitbread, account manager at Admedia, who hosted the night.
Under two weeks to go until the Media Week Awards and then it's Halloween, pumpkins. See you next week. Bitch xxx
|
We Are Available For All Of Your Electric Needs!
We Have Been Trusted For Many Years
We Have Many Electricians On Call We Offer Emergency Support 24/7
Top-notch Quality, Expert Virtue, And The Highest Degree Of Efficiency Around.
SW Electrical is a full-service electrical provider based out of Ledcourt, with a history of successful projects and electrical installations. Our approach to our work is straightforward: we want thoroughly satisfied customers, end of story.
Electrical Maintenance And Repair
Lighting
Electrical Installation
Circuit Breaker Panels
At SW Electrical, we pride ourselves on going that extra mile to satisfy our customers completely.
Let us help you with all your electrical needs!
As a homeowner, you will certainly need an electrician at some point, whether it is for safety upgrades or surprise repairs. It’s vital to hire an electrical contractor you are able to rely on to do the job effectively without cutting any sort of corners. At SW Electrical, we provide all-encompassing electrical support services to residences in and around Ledcourt. If you simply need an electrical outlet installed or a lighting component serviced, we have all the resources and skills needed to tackle whatever project you have for us. Regardless of the overall size of your house or how large your electrical project can be, when you work with us, you’ll profit from our amazing services, adapted just for you!
Why Choose Us
The connections we have created with our patrons are our business’s most valued assets. We treat every job the way we would manage our very own house, with pride, appreciation, and reliability. Look below and see for yourself why you should pick us.
Here at SW Electrical, we take our job very seriously. All of our electrical experts are licensed and insured, and we back that up with a 100% service guarantee.
At SW Electrical we only work with the absolute best! Every one of our electrical techs is fully certified, highly skilled and qualified in every element of the electrical field.
The only way to truly see the true face of a service provider is to hear from former customers. Visit our testimonials section to see what folks are saying about our company.
We know that unexpected emergencies take place, and can not be helped. Connect with SW Electrical, day or night, 365 days a year, for specialized electrical repair and emergency situation repair assistance.
Our electrical maintenance and repair support services are quick, friendly, and really affordable.
We are the professionals, and our electric professionals are the absolute best in the electrical field. We realize exactly how to take care of all the challenging jobs. With that said, no electricity job is too large or small for our electrical technicians. If your needs involve electrical repair work, we can easily support each of them. We are skilled, experienced, and our customers count on SW Electrical every time they need any type of electric repair, installment or maintenance.
24-Hour Emergency Services
Uniformed, Licensed Electricians
No Travel Charges
Licensed And Insured
Free Estimates
Do you need help with electrical maintenance?
We are here to serve you! Phone us right now, or click below to receive a COST-FREE estimate!
We also provide Air Conditioning Installation in the following towns
More About Ledcourt
Top Rated Air Conditioning Installation Providers In Ledcourt
We can respond to all types of installation for any kind of cooling system. Our installation and maintenance vehicles are always fully equipped and our professionals are always ready to respond to a call. We also have a local office. In addition, we give the utmost regard to your choice of brand if you have a specific preference and trust it the most.
We handle all types of a/c systems, including ducted air, split systems and multi systems. While the choice is always yours, our engineers will keep your budget and other requirements in mind and provide a recommendation based on our technical knowledge. But you can always make your own decision.
Free Air Con Install Quotes And Honest Recommendations In Ledcourt
When we install your cooling system in Ledcourt, we will offer the complete range of solutions. This includes an initial assessment, honest recommendations and a free quote. Once the installation is complete, we will also provide after-sales maintenance. You will get a warranty based on the brand chosen. All our technicians are fully insured, so you will not need to worry when we are working at your house or workplace.
There are lots of reasons to pick us. We use only the highest quality parts and materials. We deliver whatever we promise to you. Most importantly, we make sure that our clients don't face any surprises. You will find that we are always working towards building a positive reputation by delivering the best solutions and achieving customer satisfaction. We are fully insured and licensed to provide air conditioning installation, repair and maintenance services.
Indications That Your AC System Needs To Be Replaced In Ledcourt
There are a number of signs that your air conditioning unit needs to be repaired. If you notice odd sounds or smells coming from your system, it needs attention. This could be a sign that mold and mildew are growing in the ventilation or that parts have come loose. Another sign that your unit isn't functioning properly is drastically reduced airflow. You can check this by turning on your AC unit and putting your hand near the vent. If the airflow isn't as strong as you remember, this could mean you need to get your system repaired.
We Offer Exceptional Services In Ledcourt
When you choose to work with us, you can be certain that you will receive the most professional and cost-effective cooling maintenance solutions Ledcourt has to offer. Thanks to our excellent services, our clients can relax, knowing that all precautions have been taken to keep their air conditioning systems operating at their best. When you want the best care for your air conditioner, you can call us at any time to find out more about what we have to offer.
Fast And Trusted Service
Our quick service is what we are known for, and our professional, lasting results are what keep our clients returning. We believe in making your HVAC system work the way it should, with little work on your part.
Our specialists have years of experience in the industry and have successfully completed hundreds of air conditioning services in Ledcourt for commercial, industrial and domestic systems. We provide unbiased advice for free that will save you money and reduce your downtime. You can be certain that your air conditioning systems are running at their best with us on the job. |
Questions raised about giant piezoresistance
Four years after scientists in the US reported seeing "giant piezoresistance" in silicon nanowires, a team of researchers in France and Switzerland claims that this phenomenon may not exist after all.
Giant piezoresistance is a large change in electrical resistance that occurs when a material is stretched. After it was first reported in tiny silicon wires, claims were made that it could significantly improve nanoelectronic devices, such as nanoscale transistors, and help make ultrasensitive nanosensors.
Now, new work by Jason Milne and Alistair Rowe at the Ecole Polytechnique, Steve Arscott of IEMN-CNRS and Christoph Renner at the University of Geneva calls such applications into question.
Physicists have known about piezoresistance (PZR) – whereby the electrical resistance of a semiconductor changes when a small mechanical stress is applied to it – for many years. Giant piezoresistance occurs when there is a much larger change in resistance for the same applied strain. For example, the change in resistance per unit of strain (the "gauge factor") typically ranges up to 100 in bulk silicon but in giant PZR, this value can reach several thousand.
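As a quick back-of-the-envelope illustration of the gauge factor just defined (the numbers below are invented, not taken from any of the experiments discussed here):

```python
# Gauge factor GF = (dR/R) / strain, with made-up numbers for illustration.
R0, dR = 1000.0, 0.5      # ohms: unstrained resistance and stress-induced change
strain = 1e-5             # dimensionless elongation dL/L
gauge_factor = (dR / R0) / strain
print(gauge_factor)       # 50.0 -- ordinary PZR; "giant" PZR would give thousands
```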
Practical applications
Giant PZR would find many practical applications. For example, it might be used to detect motion in nanomechanical systems (NEMS) because traditional detectors lose their sensitivity at these length scales. Furthermore, because mechanical stress is currently employed to enhance the performance of electronic devices (in so-called "strain engineering"), it might also help enhance nanoscale transistors too.
Four years ago Peidong Yang's team at the University of California at Berkeley first observed giant PZR in silicon nanowires and the discovery created a flurry of interest in labs worldwide. Indeed, the researchers measured gauge factors up to almost 6000. The effect was thought to be a new phenomenon occurring in an otherwise well-characterized material resulting from the sample's reduced size and characteristic surface states.
In a paper just published in Physical Review Letters, the France-Switzerland team claims that these observations were probably artefacts in no way related to the mechanical stress applied to the silicon nanowires. They were, instead, caused by surface trapping of charges induced by the voltage applied to measure the resistance. "In other words, the very act of measuring the resistance changes its value," explained Rowe.
Non-stress-related drift
PZR is usually measured by performing a standard resistance measurement on a sample while gradually changing the applied mechanical stress on it. The trouble is that any non-stress-related drift in the value of the resistance cannot be separated from that caused by the applied stress.
The France-Switzerland team says it overcame this problem by applying an oscillating stress to its samples. In this way, stress repeatedly increases and then decreases as function of time. "This is a fairly standard technique (called heterodyne detection) in physics and engineering and is used to separate two or more signals and give artefact-free measurements," said Rowe.
According to Rowe, scientists had never applied heterodyne techniques to PZR measurements before, so previous measurements revealed large (but not stress-related) resistance changes in the silicon nanowires. "This meant that the resistance drift due to charge trapping (also known as dielectric relaxation) was assumed to be the result of the applied stress", he added. "This now appears to have been an incorrect assumption."
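To see why an oscillating stress lets the stress-correlated signal be separated from slow drift, here is a toy lock-in-style demodulation in Python; the signal model, frequencies and amplitudes are all invented for illustration and are not meant to represent the actual experiment.

```python
import numpy as np

# Toy demonstration of lock-in / heterodyne detection: a small resistance change
# locked to the applied stress frequency is recovered despite a much larger,
# stress-unrelated drift (e.g. charge trapping / dielectric relaxation).

fs, f_stress, T = 10_000.0, 37.0, 10.0       # sample rate [Hz], stress frequency [Hz], duration [s]
t = np.arange(0.0, T, 1.0 / fs)

pzr_amplitude = 1e-3                          # fractional dR/R in phase with the stress (invented)
drift = 5e-2 * (1.0 - np.exp(-t / 3.0))       # slow drift, much larger than the PZR signal
noise = 1e-3 * np.random.default_rng(2).normal(size=t.size)

dR_over_R = pzr_amplitude * np.sin(2 * np.pi * f_stress * t) + drift + noise

# Lock-in demodulation: multiply by the reference waveform and average.
reference = np.sin(2 * np.pi * f_stress * t)
recovered = 2.0 * np.mean(dR_over_R * reference)

print(f"stress-correlated dR/R recovered: {recovered:.2e} (true value {pzr_amplitude:.1e})")
print(f"drift that a DC measurement would lump in: {drift.max():.1e}")
```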
Top-down or bottom-up?
Yang himself disagrees: "They are reporting PZR measurements on a collection of top-down micro- and nano-wires while our measurements were on bottom-up grown nanowires. Their results might not actually be that surprising as we now all know that bottom-up synthetic bridging nanowires have quite different strain levels, surface states and dopant profiles from those of top-down fabricated ones. In fact, the lack of giant PZR effect in such nanowires was already reported back in 2003. However, the lack of giant PZR effect in these new fabricated samples should not automatically imply the same in our synthetic bridging nanowires.
"After all, the observed PZR effect in our nanowires, whether it is intrinsic or from the surface states effect, has already proven to be useful," he added. "For example, we recently demonstrated the first piezoresistively transduced very high frequency silicon nanowire resonators with on-chip electronic actuation at room temperature. We clearly showed that, for very thin silicon nanowires, their time-varying strain can be exploited for self-transducing the devices' resonant motions at frequencies as high as 100 MHz. This simply would not be possible without the enhanced PZR effect."
The debate looks set to continue.
If measuring the resistance changes its value, then we should apply Heisenberg's uncertainty principle, and there is a question of whether, when the stretching frequency reaches 100 MHz, the Heisenberg uncertainty principle is disturbed in some way. But maybe giant piezoresistance really does change the inner structure of the material in a reversible way. If the material is stretched, then the nuclei of the material move farther apart and the electron orbitals change as well, and this can cause a change of resistance. |
Save On Los Angeles Hotels Hotels Voucher 2020
There are two dedicated meeting rooms, and groups can also book other spaces throughout the hotel, including the restaurant, Skybar, pool cabanas, and individual suites, like the Penthouse. The on-site Exxhibit Store sells a curated selection of upscale clothing, earbuds, jewellery, beauty products (like Lux nail polish), and various little gifts. A little gangsta, a bit stylish: The Line Hotel LA is not a safe choice, and one wonders if it’s the right one. “Just how long are you people staying in Koreatown?” The Korean cabby seems amused, or maybe bemused, by the idea of us staying in one of the city’s grooviest, yet still slightly abrasive, up-and-coming ‘hoods.
Really, we’re a little worried ourselves. Once inside, however, the staff on the front desk are charm itself. They dance as they talk – or rather shout over the hubbub – and suggest we upgrade from a ‘Standard’ to a ‘Hollywood View’ room because it could be quieter. While they reassure us the DJ usually finishes up around 1am mid-week, the music can travel up on the lobby side.
Just when it looks like they can’t actually find a Hollywood View room, one of the receptionists says, “We’re upgrading you to a suite on the 12th floor”. Sweet! The Line occupies a 12-storey, early-’60s modernist tower on Wilshire Boulevard, completely refitted and refurbished by the Sydell Group. (Even non-hotel groupies probably know about Sydell’s hip Ace and NoMad hotels in New York.) At The Line they have tapped into the K-town zeitgeist, working with LA’s rap-star chef, Koreatown king Roy Choi.
His Kogi food trucks put a new spin on multi-culti with his Korean barbecue-stuffed tacos, and his creative style permeates The Line. Our suite resembles an industrial shell – raw, rough concrete walls and exposed conduits – but with floor-to-ceiling windows (that open) and a bed that’s strategically placed for the view, and very comfy into the bargain.
There’s also serious attention to detail in the quality of the bed linen and the fittings: a massive TV, a full-size desk, a recharge station for all your iThings, and a bedside remote to operate the lights and blinds (including block-outs ideal for late-sleeping rock stars). The suite has a sitting room with a sofa that is half chesterfield, half drapey dressing gown, another TV and a fully stocked bar.
If you do stay at The Line, definitely take a Hollywood View room, and if you can afford a suite then go for it. From the top levels of the hotel, it’s a wonderful LA view, a panorama sweeping across to the magic Hollywood Hills. It positively sparkles at night (and it’s quiet – just the gentle hum of a big city).
Truth be told, the sign does seem quite far away, but it’s strangely mesmerising. Given Choi’s profile, much of the hotel’s energy comes from the restaurants, as much a drawcard for outsiders as for hotel guests. The Pot is the star, perhaps because it captures the kind of down-to-earth, Korea-meets-USA diners you’ll likely come across when you head out into Koreatown itself – except with louder music.
This is a 388-room, four-star LA hotel, yet its signature restaurant evokes a pared-back, brightly lit canteen. The sometimes-cryptic menu names (Boot Knocker, Poke Me) are in English, and dishes arrive with delicate china cups and oversized flowery paper napkins.
Lunch and dinner have a SoCal fresh-produce health kick. The lobby can get busy. Pot Café (a grab-and-go bakery for a quick snack, with Asian sticky buns, coffee and free wi-fi) is cheap, popular and buzzes in the morning. The little lobby bar has a quirky cocktail list (curry and kimchi in cocktails – really?), but when the DJ gets going at night it can overflow big time.
Just do it, bro’! 3515 Wilshire Blvd, Koreatown, Los Angeles, USA. The Line is a hip hotel with creative design, Korean food and music in abundance, in the heart of LA’s Koreatown. If you’d like to see another side of LA, this is it. Maybe not ideal if it’s your very first time in LA – it’s ‘up and coming’ rather than ‘actually arrived’.
Outstanding. The team were cool and kinda indie – helpful, friendly and welcoming. Concrete walls and exposed conduits may be an acquired taste, but there was luxury in the bed linen, the drinks bar, the bathroom and that view! Genuine Korean at The Pot, K-town ‘Country Club’ at Commissary, and a room service menu that includes Spam and eggs and a thermos of instant ramen noodles.
In the last year, a 2-star hotel near 90036 could be as cheap as $111.32 per night (based on HotelPlanner prices). In the last year, the average 3-star hotel near 90036 has been $148.44 per night (based on HotelPlanner prices). In the last year, the average 4-star hotel near 90036 has been $225.21 per night.
Alyssa Powell/Business Insider When you’re a Los Angeles-based entertainment and lifestyle reporter like me, you often find yourself at the Four Seasons Hotel Los Angeles at Beverly Hills. That’s because the hotel is well known as one of the best see-and-be-seen spots in town for a working visit and a power lunch or dinner.
But it’s not all stiff business and formality. As a Four Seasons, the hotel is welcoming to families and travelers of all kinds seeking the consistency of a well-known international luxury chain. Of course, with the passing celebrities and bold-faced names, it’s also great for people-watching. I count myself lucky to call it something of a second home in town.
I’ll never forget the time I visited with my spouse for a staycation when I was nine months pregnant, or more recently, over the festive holiday season when the common areas were adorned with floral and design installations from star florist Jeff Leatham. As a result, it’s no surprise it made our list of the best hotels in Los Angeles for 2020. Rates for entry-level rooms here start in the $400s, according to online search results at the time of publication.
On my recent Christmas stay, I was in a mid-tier Deluxe Veranda King room, which was comped for review purposes, but usually books for $595. I’ve previously stayed in standard rooms, which are still fairly large, at a lower price point. Unless you’re traveling with family or planning to entertain in-room, in my experience, a standard-sized room is the best value for luxury accommodations that are spacious, plush, and prime for people-watching. |
Ugh. Weigh day brought no joy. I’m at absolutely the same level as I’ve been for the last two weeks. Much as I’m singing the blues, I know exactly why I’ve hit such an early plateau. Wine with dinner. Pasta. Cream sauces and/or pesto. Cookies. Portions too big. Not enough steps taken during the week. Yeah, the last few days have been nutty – crisis after deadline after urgent meeting. I haven’t been able to tear myself away from the keyboard. Even though I swam like a maniac yesterday, the digital readout has not budged.
No excuses. Over the lips; onto the hips. I have to get back to basics. More stretching. More meditation. Follow the principles of weight loss – less in, more off.
On the positive side, the sun has shifted direction and the slant of light cuts across the kitchen in a wonderful way. The thermometer says that it’s 23 degrees on the deck, but that’s deceptive, because the sensors are on the side of the house tucked into the brick doorway that heats up in the sunlight.
Another week is done. The harsh mountains of filthy ice have finally melted from the driveway and from the front gardens. The weeds are very green and the squirrels have been mating like mad and digging up the lawn searching for non-existent nuts. I bought two big bags of fertilizer at Costco yesterday and it will soon be time to unpack the garden tractor and sharpen up my tools.
Life is good.
Tagged: staying the course |
The molecular circadian clock regulates rhythmic transcription of thousands of genes throughout the brain and body, providing transcriptional coordination of a broad range of processes including metabolism, immune function, and DNA repair. In turn, molecular clock disruption is associated with a wide range of diseases such as heart disease, diabetes, obesity, cancer, and psychiatric disorders. Circadian rhythms in mammals are regulated by the suprachiasmatic nucleus (SCN) in the hypothalamus, which serves as the "master clock" for the brain and body. However, recent studies have revealed that clock genes are rhythmically expressed throughout the brain and play critical roles in the regulation of normal brain processes.
For example, clock genes are involved in regulating rhythms in long-term potentiation, dendritic spine regulation, receptor trafficking, and neuronal activity in brain-region- and cell-type-specific manners. Furthermore, recent studies have suggested a critical role of the circadian system in several disorders, including major depression, bipolar disorder, schizophrenia, anxiety, stress regulation, eating disorders, drug addiction, and alcoholism, as well as age-related cognitive deficits including Alzheimer's disease. The association of circadian rhythm disruption with cognitive function, mood, immune system disruption, and metabolism in shift workers as well as in people with mood disorders provides further evidence for the effects of circadian disruption on synaptic plasticity. This research area is at the interface of neuroscience, cell biology, endocrinology, and psychiatry. Despite the evidence that circadian rhythms are involved in the coordination of several neural processes and, in turn, behaviors, this topic has not been extensively studied thus far. There is much that is not known regarding how and why circadian rhythms regulate brain processes.
The articles in this special issue represent a multidisciplinary collection of recent advances in the role of the circadian system in normal brain functions and psychiatric disorders. They illustrate the continuing effort to understand the role of the circadian system in the regulation of normal brain processes and in diseases states, as well as the potential to take advantage of circadian regulation as a tool for the development of therapeutic strategies.
A fundamental aspect of circadian rhythm regulation is the entrainment of the SCN molecular clock by light. In the article "Photoperiodic Programming of the SCN and its Role in Photoperiodic Output," M. C. Tackenberg and D. G. McMahon provide an overview of the mechanisms involved in entrainment of SCN rhythms and effects on potential SCN output signals, ranging from initial research on SCN entrainment to modern advances including key studies from their research group. This review serves as a foundation for understanding how the circadian system is regulated by light, explores the relevance of circadian entrainment to human health, and highlights the most important outstanding questions.
Changes in day length have long been associated with alterations in mood. Understanding how seasonal changes in light cycles impact the circuitry involved in mood regulation is critical for development of effective preventative and treatment measures. The review article by A. Porcu et al. titled "Photoperiod-Induced Neuroplasticity in the Circadian System" explores how environmental light exposure impacts clock gene and neurotransmitter expression in the SCN, as well as brain regions involved in the regulation of mood, sleep, and motivational states, and thus provides a framework of how seasonal changes in day length may translate to specific molecules and neurocircuits involved in mood regulation.
Our understanding of the roles that glial cells play in a range of normal brain processes is a rapidly expanding field in neuroscience with broad clinical implications. The article "The Role of Mammalian Glial Cells in Circadian Rhythm Regulation" by D. Chi-Castaneda and A. Ortega reviews the emerging evidence for circadian rhythms in glial cells. The existence and potential role of molecular clock rhythms in glial cells is summarized, including roles in processes with clinical implications such as glutamate reuptake, synaptic plasticity, and immune function.
Growing evidence suggests that clock genes in brain regions other than the SCN are critically involved in regulating the timing of cellular signaling during information processing and memory formation. Two review articles highlight the importance of this function: the article "Circadian Regulation of Hippocampal-Dependent Memory: Circuits, Synapses, and Molecular Mechanisms" by K. H. Snider et al. reviews the current hypotheses on the regulation of memory processing by the molecular circadian clock, with a focus on ERK/MAPK and GSK3*β*. Furthermore, they address outstanding questions in the field while examining broader implications of circadian regulation of synaptic plasticity. The article "Clocking In Time to Gate Memory Processes: The Circadian Clock Is Part of the Ins and Outs of Memory" by O. Rawashdeh et al. summarizes pioneering studies from their group and others that have accumulated evidence for the key role of circadian Per1 regulation of MAPK, cAMP, and CREB during hippocampus-dependent memory processes.
The importance of developing improved preventative and therapeutic strategies for addiction has come to the forefront recently due to the growing opioid crisis and the obesity epidemic. The next two articles highlight the involvement of the circadian system in reward and addiction. The paper "Neural Mechanisms of Circadian Regulation of Natural and Drug Reward" by L. M. DePoy et al. provides an extensive review of the current hypotheses on how the circadian system is involved in reward regulation, ranging from rhythms of reward sensitivity to effects of sleep disturbances on reward processing and in turn effects of drug addiction on the circadian system. The paper focuses on key evidence of molecular clock regulation of the dopamine system in reward processes identified by their research group. The authors thus provide a framework to help address the question of how the circadian system can be used for the treatment of addiction.
A. K. Nobrega and L. C. Lyons ("*Drosophila*: An Emergent Model for Delineating Interactions between the Circadian Clock and Drugs of Abuse") highlight the potential of Drosophila as a model organism for identification of the complex interactions between the circadian system and addiction. This extensive review discusses the evidence for association of circadian disruption with addiction and stress in humans and highlights how the highly conserved mechanisms involved in these systems can be used in simple organisms to address key questions regarding the complex interactions of sleep, circadian rhythms, stress, and addiction. The potential of Drosophila as a model system in circadian rhythm research is exemplified by the Nobel Prize awarded to the characterization of the molecular circadian clock. The authors include a detailed summary of how this model system can be leveraged to develop novel therapeutic pharmacological targets for addiction.
Stress signaling is intricately involved with the circadian system, and the reciprocal interactions between these systems are critical for the maintenance of physiological homeostasis. Disruption of this interaction contributes to a wide array of health issues, including mood disorders, cognitive dysfunction, metabolic disorders, and immune system dysfunction. Stress during critical periods of development is also believed to contribute to many of these disorders later in life. The article "Perinatal Programming of Circadian Clock-Stress Crosstalk" by M. Astiz and H. Oster reviews the current evidence for the effects of perinatal stress on the developing circadian system and the long-term health implications of these effects, potentially impacting impulsivity, stress regulation, metabolism, and mood in adults.
The paper "Circadian Rhythms in Fear Conditioning: An Overview of Behavioral, Brain System, and Molecular Interactions" by A. Albrecht and O. Stork describes neural circuits involved in circadian regulation of fear memory and stress response, including recent evidence from their research group and others regarding the role of clock genes in brain areas involved in fear memory processing. The authors also discuss how the interaction of the circadian system with fear memory processing may be related to the development of psychiatric disorders characterized by excessive fear memory, including PTSD, and how this interaction may be leveraged for therapeutic purposes.
Moreover, C. A. Vadnie and C. A. McClung, in "Circadian Rhythm Disturbances in Mood Disorders: Insights into the Role of the Suprachiasmatic Nucleus," review the considerable evidence for the involvement of circadian rhythm dysfunction in mood disorders, ranging from clinical studies to animal models, including the clock delta 19 mutant mouse model established by this group, which models several critical aspects of bipolar disorder and provides insight into potential mechanisms involved in mania. Their article examines relationships of pharmacological therapies with the circadian system and focuses on the potential role of the SCN in the etiology of mood disorders.
We conclude this issue with a paper from S. Kaladchibachi and F. Fernandez ("Precision Light for the Treatment of Psychiatric Disorders") discussing the potential of light therapy for the treatment of psychiatric disorders. The authors provide a chronologic review of the history of studies focused on the effects of light on synchronizing the human circadian system and in the treatment of depression. They include a comparison of the effects of varying wavelengths, intensity, and duration of light exposure and propose a framework for the therapeutic use of light exposure.
We believe that the articles highlighted in this special issue provide a comprehensive overview of the current state of research on the role of the circadian system in the regulation of normal brain processes and hope that they will stimulate further studies into the circadian system's contribution to cognitive, affective, and neurodegenerative brain disorders, ranging from cognitive dysfunction and memory impairment to PTSD, major depression, and bipolar disorder. A deeper understanding of the role the circadian system plays in brain function has the potential to allow us to leverage this system for preventative and therapeutic treatments.
*Harry Pantazopoulos* *Karen Gamble* *Oliver Stork* *Shimon Amir*
|
Entry from my live journal
Well, it's not the new year by Western standards, but it is by many Pagan standards, because we just had the shortest day of the year. So the days will get longer and longer until the wheel of the year turns again.
Last night I went to Mary Anne's lovely cookbook party and she asked me if I was writing. C asks me this all the time. And I had to say no, not really, but I felt more sheepish with her. So I started to tell her what little I had been doing. And then my main reason to myself for not writing ran off and so I chased after her. And then I talked more about motherhood and not writing. And Mary Anne said that now she'd decided not to be a mother, she felt this pressure to be a better writer, because she couldn't say to herself well I'll always be a good mother.
And at the same time at this party, I received lots of validation that I was at least for those few hours a good mother, because Special K was at her most mellow and charming. She talked a lot for the first time to a bunch of people she didn't know, even saying her first 3 syllable word, octopus. Usually in front of strangers, she confines herself to a few monosyllables and I look like an idiot for saying she talks a lot, which she does when we're alone. She stayed up for hours past her bedtime and was clearly tired, but enjoying having her proper place as the centre of the universe confirmed. Today she's tired and somewhat fractious, and she cried because Daddy went away while I held her. But I still remember last night.
And as I snuggled in bed with her this morning, I realised that while I have my doubts about my ability to raise another human being to adulthood without damaging them too much in the process, I think every mother does, and on the whole I feel good about it. And it's when I do feel good about myself that Special K is at her best. |
Nothing
Gothika. Great title, that. Which means? Absolutely nothing. And that makes it apt for this pointless, un-gothic thriller. It's dark, all right, and the weather's bloody awful. But 'gothik'? Nah.
What we've got here is a paranormal mystery-thriller, and a screamingly unimaginative one at that. Who is the flaming ghost-girl who keeps attacking Halle Berry with the bloody words "Not Alone"? Is Berry really guilty of murder? And which weirdo decided to cast human jellybaby Charles S. Dutton as her hubby? Berry must escape her loony bin and figure out who, or what, is responsible.
Well, let's start with screenwriter Sebastian Gutierrez, whose script rapidly deflates the movie's killer pitch to reveal a wholly disappointing motor: Dr Grey sees dead people. Thanks to an impressive lack of invention, the plot loses its grip on reality faster than Berry's character does. But French actor/director Mathieu Kassovitz doesn't seem too bothered. Making his first Hollywood studio pic, Kassovitz (who you'll remember as La Haine's ethno-punk helmer and Amélie's love interest) has clearly sussed that subtleties -- like, say, logic and plausibility -- aren't worth a damn. After all, the supernatural is always a great get-out clause when you haven't got a story that makes sense.
Gothika just wants to put bums on seats -- and send them flying back off. And this is a film with its fair share of ejector-seat scares. In fact, it's just one big jump movie. Packed with an empty charge, Kassovitz's direction just cues up the cheapskate shock-shots: his camera locks in on Berry's headlamp eyes, climactic smash-cuts wait to pounce, the spikes of music are never less than THIS LOUD. Tragically, these thoroughly unsurprising surprises are all Gothika's got. Even worse, this one-trick pony will make you jump. And no matter how early you see it coming, there's not a damn thing you can do about it.
This is partly down to DoP Matthew Libatique (Requiem For A Dream), who makes stylish work of warping mental hospital into haunted house, pepping the atmosphere with shadow-splashed cinematography and lots of flickering fluorescent bulbs. And while films like Gothika don't really have much to do with acting, Halle Berry gives the wild-eyed victim role her best shot, running around this schlock corridor with her best screaming shoes on. Rape-obsessed patient Penélope Cruz finally finds a role that fits her skewed accent ("He opened me like a flower of pain") and Robert Downey Jr, playing Berry's shrink, just looks thankful to be working. He's coasting here, dodging cornball dialogue and getting out before the movie's super-lame endnote. And we can't blame him.
Instantly forgettable, Gothika's stuffed with bargain-bucket scares but it's merely a ghost of its premise by the final credits.
|
March 9, 2006
Sacramento State partnership to help parents guide kids toward college
Sacramento State has joined a statewide partnership to increase the number of students who are eligible to attend college. The University is among 23 campuses in the California State University system that will work with the Parent Institute for Quality Education, or PIQE, to strengthen parental involvement in preparing elementary and middle school students for higher education.
The University will partner with Sacramento-area schools to identify parents of Latino and other underserved populations to take part.
“One-third of our students come from families where one or both parents did not attend college,” said Sacramento State President Alexander Gonzalez. “We see our involvement with the PIQE program as a significant step in helping us attract more of these first-generation students by showing their parents that a college education is not only attainable, but vital.”
The San Diego-based Parent Institute for Quality Education offers nine-week training programs in which parents learn how to improve their child's performance in the classroom, enhance the parent/child relationship, motivate their child to stay in school and identify steps to help their child attend a college or university. Class sessions are taught in English, Spanish and 12 other languages, and are offered in morning and evening sessions. The course content is customizable for each parent and can address such issues as home-school collaboration, motivation and self-esteem, communication and discipline, drug and gang awareness, and college and career selection. The training will be provided by PIQE facilitators.
Children whose parents “graduate” from the Sacramento State-affiliated PIQE program will receive an identity card that reserves them a space at the University if they meet the minimum admission requirements when they graduate from high school.
Since its inception in 1987, PIQE has graduated more than 350,000 parents and guardians. It is credited with developing and implementing a model for increasing parent involvement in K-12 where parent participation has been difficult to achieve.
CSU Chancellor Charles B. Reed will provide $75,000 in funding over the next three years for CSU campuses statewide to partner with local schools. PIQE will match this amount and host schools will cover a portion of the program costs.
For more information, contact Phil Garcia, Sacramento State executive director for Governmental and Civic Affairs at (916) 278-6710. Media assistance is available by contacting the Sacramento State public affairs office at (916) 278-6156.
#### |
Getting your children into good brushing habits early on is a good way to avoid health problems down the road. Cavities are quite common in young children, but they should be treated immediately; if not, your child risks serious damage to his or her teeth later in life.
How Do Cavities Affect the Tooth Long Term?
Cavities develop when pockets of bacteria eat away at a tooth’s enamel. Cavities will not go away on their own and need to be filled by a dentist in order to prevent further erosion. According to WebMD, tooth decay that is not treated quickly can actually kill the tooth. That happens when the cavity reaches the middle or pulp of a tooth, which has nerves and blood vessels. When this occurs, an infection can set in causing severe pain and swelling. Treatment can involve a root canal or removal of the infected tooth.
How Do Cavities Affect the Gums Long Term?
Children with poor oral hygiene risk having gum disease when they get older. Without regular brushing, plaque can quickly build up. According to the National Institute of Dental and Craniofacial Research, plaque that is not removed can cause gum inflammation and bleeding. Periodontitis is a serious form of gum disease, where the gums and connective tissues start to break down around the tooth. NIDCR recommends that you see a dentist if you experience the following symptoms:
- If your gums bleed easily or appear swollen
- If your teeth appear longer than normal
- If it hurts to eat
- If your teeth feel loose or painful
Brush Your Teeth On a Regular Basis
Brushing twice a day thoroughly can save you a lot of pain – and money – in the long run. Supervise young children to make sure they are brushing properly and developing good oral hygiene habits. |
An application of artificial neural network models to estimate air temperature data in areas with sparse network of meteorological stations.
In this work artificial neural network (ANN) models are developed to estimate meteorological data values in areas with sparse meteorological stations. A more traditional interpolation model (multiple regression model, MLR) is also used to compare model results and performance. The application site is a canyon in a National Forest located in southern Greece. Four meteorological stations were established in the canyon; the models were then applied to estimate air temperature values as a function of the corresponding values of one or more reference stations. The evaluation of the ANN model results showed that fair to very good air temperature estimations may be achieved depending on the number of the meteorological stations used as reference stations. In addition, the ANN model was found to have better performance than the MLR model: mean absolute error values were found to be in the range 0.82-1.72 degrees C and 0.90-1.81 degrees C, for the ANN and the MLR models, respectively. These results indicate that ANN models may provide advantages over more traditional models or methods for temperature and other data estimations in areas where meteorological stations are sparse; they may be adopted, therefore, as an important component in various environmental modeling and management studies. |
IMPORTANT NOTICE
NOT TO BE PUBLISHED OPINION
THIS OPINION IS DESIGNATED "NOT TO BE PUBLISHED."
PURSUANT TO THE RULES OF CIVIL PROCEDURE
PROMULGATED BY THE SUPREME COURT, CR 76.28(4)(c),
THIS OPINION IS NOT TO BE PUBLISHED AND SHALL NOT BE
CITED OR USED AS BINDING PRECEDENT IN ANY OTHER
CASE IN ANY COURT OF THIS STATE; HOWEVER,
UNPUBLISHED KENTUCKY APPELLATE DECISIONS,
RENDERED AFTER JANUARY 1, 2003, MAY BE CITED FOR
CONSIDERATION BY THE COURT IF THERE IS NO PUBLISHED
OPINION THAT WOULD ADEQUATELY ADDRESS THE ISSUE
BEFORE THE COURT. OPINIONS CITED FOR CONSIDERATION
BY THE COURT SHALL BE SET OUT AS AN UNPUBLISHED
DECISION IN THE FILED DOCUMENT AND A COPY OF THE
ENTIRE DECISION SHALL BE TENDERED ALONG WITH THE
DOCUMENT TO THE COURT AND ALL PARTIES TO THE
ACTION.
RENDERED : OCTOBER 23, 2008
NOT TO BE PUBLISHED
SUPREME COURT OF KENTUCKY
2007-SC-000145-MR
JACK LAKE, JR. APPELLANT
ON APPEAL FROM KNOX CIRCUIT COURT
V. HONORABLE RODERICK MESSER, JUDGE
NO. 05-CR-000018
COMMONWEALTH OF KENTUCKY APPELLEE
MEMORANDUM OPINION OF THE COURT
AFFIRMING
Appellant, Jack Lake, Jr., was convicted by a Knox Circuit Court jury of
manslaughter in the second degree for the killing of Kenneth Vanover, and for being a
persistent felony offender in the first degree. For these crimes, Appellant was
sentenced to twenty years imprisonment. Appellant now appeals to this Court as a
matter of right. Ky. Const. §110(2)(b).
Appellant's sole argument on appeal is that the trial court improperly excluded
evidence of specific prior acts of Vanover's violent behavior when drunk, thereby
depriving Appellant of the ability to establish his defense of self-protection. Appellant
argues that the evidence of Vanover's past drunken violent behavior was admissible
under Kentucky Rule of Evidence 404(a)(2), and that Kentucky Rule of Evidence 405(c)
authorizes the use of prior specific acts evidence to show that Vanover was the initial
aggressor. While we agree that evidence of a decedent's violent character trait would
be relevant on a claim of self-defense, we disagree with Appellant's assertion that proof
of specific acts of violent behavior is admissible to establish that trait, and therefore we
affirm Appellant's conviction.
On New Year's Eve, 2004, in front of Vanover's residence, a gunfight erupted
between Vanover and the Appellant. Several shots were fired. Both men were
wounded, but Vanover's wounds proved fatal. He died on the scene. Witness accounts
of how the shooting began varied. Appellant did not testify, but presented several
witnesses that testified Vanover was the initial aggressor. The Commonwealth
presented witnesses which testified Appellant shot first. Evidence that Vanover had
been drinking heavily was not disputed.
Vanover's son, Johnny, on cross-examination by Appellant's counsel, testified
that Vanover never fought when drunk and that he never got mad or held a grudge.
Defense counsel then asked Johnny if his father, while drunk, had ever hit anyone. The
prosecution objected to this question as irrelevant. Appellant argued that the question
was important to show that Vanover had a propensity toward drunken violence. Court
records Appellant wanted to admit into evidence indicated that Vanover had been
arrested several times for fighting while drunk. The court sustained the prosecutor's
objection and the evidence was excluded.¹ Appellant now argues that the evidence
regarding Vanover's prior drunken violence was admissible under KRE 404(a)(2) and
that by excluding that evidence, the trial court denied him the ability to properly present
his theory of self-defense.
¹ No avowal testimony was taken regarding the evidence of the prior drunken
behavior, so we do not know how Johnny would have answered that question. Nor
were the court records placed in the record by avowal. We therefore do not know what
they contain, but the issue of preservation is not before us and we decline to invoke it
sua sponte.
"Generally, a homicide defendant may introduce evidence of the victim's
character for violence in support of a claim that he acted in self-defense or that the
victim was the initial aggressor ." Savlor v. Commonwealth , 144 S .W.3d 812, 815 (Ky.
2004). See also KRE 404 (a)(2); Johnson v. Commonwealth , 477 S.W .2d 159,161 (Ky.
1972); Robert G . Lawson, The Kentucky Evidence Law Handbook § 2.15[4][b], at 104
(4th ed. LexisNexis 2003). "However, such evidence may only be in the form of
reputation or opinion, not specific acts of misconduct." Savlor, 144 S .W .2d at 815.
Police records showing specific acts of violence committed by a victim have been held
to be inadmissible to show a propensity toward violence . Johnson , 477 S.W.2d at 161 .
An exception to the prohibition of admitting specific violent acts of the victim exists when
the prior acts are offered by a defendant to show that he knew of the prior acts and
therefore had reason to fear violence at the hands of the victim. Savlor, 144 S.W.2d at
815-816 ; Lawson, supra, sec 2.15[4][d] . In that situation, the evidence is not used to
show the propensity of the victim to act violently, but to show the defendant's state of
mind regarding his actions . Savlor, 144 S .W.2d at 816. Because Appellant did not
testify, there was no evidence of whether he had knowledge of the prior acts of the
victim.
Appellant, however, does not argue that he feared Vanover because he was
aware of his violent nature when drunk, and did not offer the evidence in question for
that reason. Appellant instead argues that KRE 405 allows him to admit specific
instances of Vanover's conduct via KRE 404(a)(2) to prove his propensity to violence
because, he asserts, the victim's aggression is an essential element to a self-defense
claim. KRE 405(c) ("In cases in which character or a trait of character of a person is an
essential element of a charge, claim, or defense, proof may also be made of specific
instances of that person's conduct"). In Sherroan v. Commonwealth, 142 S.W.3d 7, 21
(Ky. 2004), we held that:
[C]haracter constitutes an "essential" element only in rare circumstances.
'In criminal cases, it is rare (almost unheard of) to find that character is an
element in a charge or defense.' Lawson, supra, § 2.15[6], at 108 n. 45
(quoting 1 Mueller & Kirkpatrick, Federal Evidence § 105 (2d ed. 1994)).
'KRE 404 deals with character used for the purpose of showing conduct in
conformity with character and thus has no applicability to situations in
which the character of a party is an element of a charge, claim, or
defense.' Id. at § 2.15[6], at 108. Thus, if character is used as 'a means
of proving conduct circumstantially,' it is not an essential element. 21
Am.Jur. P.O.F.3d § 6 (1993); Perrin v. Anderson, 784 F.2d 1040, 1045
(10th Cir. 1986). However, if the existence of the character trait
determines the rights and liabilities of the parties, then it is an essential
element and provable by specific instances of conduct. Professor Lawson
cites the following as examples of the latter: (1) a civil action for
defamation in which the defendant accused the plaintiff of being a 'crook,'
with the defense being the truth of the statement, (2) a criminal action
involving extortionate credit transactions, 'because of the need to prove
[the] victim's fear of the defendant-lender,' and (3) criminal cases where
the defense is entrapment, because of the need to prove the defendant's
predisposition to commit the charged offense. Lawson, supra, § 2.15[6],
at 108-09.
In this matter, Appellant simply wants to prove that Vanover, at least when drunk, had a
violent disposition, from which one may infer that Vanover was the aggressor. That is
nothing more than character evidence which, while relevant, must be proven by
reputation or opinion evidence, not specific acts. See Hayes v. Commonwealth, 175
S.W.3d 574, 588 (Ky. 2005). A homicide victim's character trait for violent behavior is
not an essential element of the claim of self defense. It is not any element of self
defense. It is simply an evidentiary fact that, when it exists, is relevant to establish the
elements of self defense. Therefore, proof of that trait in the form of specific prior acts
cannot be admitted into evidence at trial through KRE 405. Sherroan, 142 S.W.3d at
21; 21 Am.Jur. P.O.F.3d § 6 (1993); Perrin, 784 F.2d at 1045.
The standard of review for admission of evidence is whether the trial court abused its
discretion. Commonwealth v. English, 993 S.W.2d 941, 945 (Ky. 1999). "The test for
abuse of discretion is whether the trial judge's decision was arbitrary, unreasonable,
unfair, or unsupported by sound legal principles." Id. For the foregoing reasons, we
conclude that the trial court did not abuse its discretion in excluding evidence of
Vanover's previous violent acts.
The judgment and sentence of the Knox Circuit Court is affirmed.
All sitting. All concur.
COUNSEL FOR APPELLANT:
Randall L. Wheeler
Assistant Public Advocate
100 Fair Oaks Lane
Suite 302
Frankfort, KY 40601
COUNSEL FOR APPELLEE:
Jack Conway
Attorney General of Kentucky
Gregory C. Fuchs
Assistant Attorney General
Office of the Attorney General
Criminal Appellate Division
1024 Capital Center Drive
Frankfort, KY 40601
|
Harper Valley PTA
From Wikipedia: Harper Valley PTA (simply known as Harper Valley during its second season) is an American sitcom starring Barbara Eden which aired on NBC from January 16, 1981 to August 14, 1982. It is based on the 1978 film Harper Valley PTA, which was itself based on the 1968 song recorded by country singer Jeannie C. Riley, written by Tom T. Hall.
Synopsis
The series went on to flesh out the story in the song, as it told of the adventures of Stella Johnson (Barbara Eden), a single mother to teenager Dee (Jenn Thompson), who lived in the fictional town of Harper Valley, Ohio. The town was dominated by the namesakes of the founder, the Harper family, most prominently represented by the mayor, Otis Harper, Jr. (George Gobel). Mrs. Johnson's flouting of the small town's conventions, and exposure of the hypocrisy of many of its other residents, provided the series' humor.
In the show's early episodes, Mrs. Johnson had been recently elected to the board of directors of the PTA and this was the source of most of the show's plots; later it was decided that this idea had been carried about as far as was practical and the PTA aspect was dropped from the show, which was then retitled Harper Valley. During this phase, Stella's relationship with Dee was more prominent and actor Mills Watson joined the cast as Stella's eccentric uncle, Winslow Homer Smith. Nicknamed Buster, he was an inventor whose inventions never worked the way they were supposed to. Stella still did battle with the Reillys on occasion. At various times, Stella had to deal with her devious twin, Della Smith (played by Barbara Eden wearing a black wig), much as she had on her more famous series, I Dream of Jeannie, when she played her evil twin sister, Jeannie II.
The show ran from January 1981 to August 1982 on NBC; it was later released into syndication to local stations briefly in the mid-1980s, even though there were too few episodes made for it to be normally syndicated. Cable television network TV Land showed reruns of the show in 2000.
Highlights: Barbara Eden on her role in the feature film and subsequent series Harper Valley PTA; Bruce Bilson on directing Harper Valley PTA, which led to his directing the feature film Chattanooga Choo Choo; Leslie H. Martinson on directing Harper Valley PTA. |
Q:
Disable OPCache temporarily
I recently moved to PHP 5.4 and installed OPCache, it's very powerful!
How can I temporarily disable the cache?
I tried:
ini_set('opcache.enable', 0);
But it has no effect.
Thanks
A:
Once your script runs, it's too late to not cache the file. You need to set it outside PHP:
If PHP runs as Apache module, use an .htaccess file:
php_flag opcache.enable Off
If PHP runs as CGI/FastCGI, use a .user.ini file:
opcache.enable=0
And, in any case, you can use good old system-wide php.ini if you have access to it.
A:
opcache.enable is PHP_INI_ALL, which means that ini_set() does work, but only to disable OPcache caching for the remainder of the scripts compiled in your current request. (You can't force enabling this way.) It reverts back to the system default for other requests. By this stage, the requested script itself will already have been cached, unless you do the ini_set() in an auto_prepend_file script.
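As a rough sketch of that last point (the file name prepend.php and its location are assumptions, and whether the main script actually escapes the cache depends on your PHP/OPcache version):
<?php
// prepend.php -- wired up in php.ini with, for example:
//   auto_prepend_file = /var/www/prepend.php
// Because this file runs before the main script is compiled, the
// ini_set() below can switch OPcache off for scripts compiled later
// in the same request. Note that opcache.enable can only be turned
// off at runtime, never on.
if (function_exists('opcache_get_status')) {
    ini_set('opcache.enable', '0');
}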
The system defaults (PHP_INI_SYSTEM) are latched as part of PHP system startup and can't be reread. So in the case of Apache for example, you need to restart Apache to change / reload these.
The .htaccess php_flag directives only apply if you are running mod_php or equivalent. They and .user.ini files are PHP_INI_PERDIR, which will also be latched at request activation.
Now to the Q that I think that you might be asking. If you have a dev system then the easiest way is to set opcache.enable=0 in the appropriate INI file and restart your webserver. Set it back to =1 and restart again when you are done.
Also consider (in the dev context) setting opcache.validate_timestamps=on and opcache.revalidate_freq=0. This will keep OPcache enabled but scripts will be stat'ed on every compile request to see if they are changed. This gives the best of both worlds when developing.
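For reference, a development-leaning php.ini fragment along those lines might look like the following (illustrative values only, not a production recommendation):
opcache.enable=1
; re-stat scripts on every request so edits are picked up immediately
opcache.validate_timestamps=1
opcache.revalidate_freq=0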
Also read up on the opcache.blacklist_filename directive. This allows you to specify an exclusion file, so if it contains /var/www/test, and the web server docroot is /var/www, then any scripts in the /var/www/test* hierarchies will not be cached.
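To make that concrete, a hypothetical example (the blacklist path is made up): in php.ini you would point OPcache at a blacklist file,
opcache.blacklist_filename=/etc/php.d/opcache-blacklist.txt
and that file lists one path prefix or glob per line, with ';' starting a comment:
; keep everything under /var/www/test out of the opcode cache
/var/www/test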
Hope this helps :)
A:
The best way I found in my case to disable OPcache for a specific PHP file is: opcache_invalidate(__FILE__, true);
You can also reset the whole cache from PHP: opcache_reset();
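A small, hedged sketch of how those calls might sit together (the script path and deploy-hook framing are assumptions; the function_exists() guards simply avoid fatal errors when OPcache isn't loaded):
<?php
// Force one cached script to be recompiled on its next request:
if (function_exists('opcache_invalidate')) {
    opcache_invalidate('/var/www/app/config.php', true);
}
// Or throw away the entire opcode cache in one go:
if (function_exists('opcache_reset')) {
    opcache_reset();
}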
|
You can access your RRSP for buying a home or returning to school, but in an emergency, use your TFSA
TORONTO, Jan. 21, 2016 /CNW/ - CIBC (TSX: CM) (NYSE: CM) -- Beware of the do's and don'ts around Registered Retirement Savings Plans (RRSPs) if you're planning to make a contribution or an early withdrawal, says Jamie Golombek, Managing Director, Tax & Estate Planning, CIBC Wealth Advisory Services.
"It's tempting to tap your RRSPs for an emergency, but RRSPs should generally be viewed as long-term savings tools," says Mr. Golombek, who recently issued a new report, Ten RRSP Hacks. "Borrowing money from your RRSP can make sense if you use the funds prudently to fund longer term goals that deliver their own return, such as buying a home, which hopefully increases your net worth, or investing in your education, which may help boost your earnings potential."
A Tax-Free Savings Account (TFSA), which allows you to recontribute any amounts withdrawn in a future year, may be a better option if you need more flexibility with your finances, he says.
Using an RRSP to help buy your first home
Under the Home Buyers' Plan (HBP), you can withdraw up to $25,000 from your RRSP to purchase a new home. Your spouse or partner may also be able to withdraw $25,000, for a combined total of $50,000. To take advantage of the HBP, you need to be a "first-time home buyer", which is generally defined as someone who hasn't owned a home in the past five years.
"This is a smart option for first-time home-buyers who are just pulling together their funds," says Mr. Golombek. "It can help you meet your down-payment requirements and save you a lot of money on a mortgage loan insurance you might otherwise need."
Amounts withdrawn under the HBP, however, must be repaid over a maximum of 15 years; any amount not repaid in a given year is added to your income and becomes taxable.
Using an RRSP to go back to school
Through the Lifelong Learning Plan (LLP), you can borrow $10,000 a year, up to a total of $20,000, from your RRSP to finance your education. To take advantage of this plan, you must be enrolled or must have received an offer to enroll on a full-time basis in a qualifying Canadian or foreign educational institution. The funds can then be used for any purpose with no proof of expenses required, and must be repaid over a 10-year period. The LLP cannot be used to fund your child's education.
With both the HBP and LLP, there are no penalties for paying back the funds earlier than required. "If you repay early, you can benefit from the tax-free compounding of investment returns inside your RRSP as soon as possible," says Mr. Golombek.
Resist using your RRSP as an emergency fund
While an RRSP can help fund long-term financial goals, it should generally not be viewed as a go-to emergency fund, he says.
RRSP withdrawals are taxable at your marginal tax rate and are subject to immediate withholding taxes when withdrawn.
"If you dip into your RRSP for extra cash, you will not only be taxed, but you will lose the ability to recontribute those funds to your RRSP without generating additional room," says Mr. Golombek.
If you think you may have to draw on your long term savings before retirement, a TFSA may be the better option because it offers more flexibility.
A financial advisor or tax expert can help you determine the best option for your retirement savings. This year's deadline to make an RRSP contribution is Feb. 29.
"Regardless of whether you opt for an RRSP or a TFSA, the important thing is to save so you can meet your life goals today and in retirement,": Caroline van Hasselt, Director, External Communications, at 416-784-6699 or |