If you find burrow holes in your yard, they are likely the entrances to an elaborate tunnel maze carved into the earth beneath you. About all you can do, short of digging the whole mess up, is to try to fill it in from the top side. First, fill it with a bunch of rocks and then soil, and make sure to really pack it in. This will make it difficult for the groundhog to reclaim its hole without a lot of work. You probably want to do this in tandem with other control methods such as trapping, shooting, or fumigating to prevent the groundhog from just digging a new hole.

Do some landscaping and build barriers. As with the control of many pests, it is advisable to keep a yard free of brush, undergrowth, and dead trees. These features are attractive to groundhogs as cover, and without them, groundhogs are less likely to want to spend time there. If you want to keep a groundhog out of an area, consider a partially buried fence. This will require a lot of work, but it will help a lot. Make sure it extends up at least a meter and is buried around 30 cm deep. Angle the buried portion outward 90 degrees, and digging under the fence becomes a very daunting task for your furry friend.

Try using fumigants to kill groundhogs. What is nice about this approach is that you can kill the animal and bury it all in one stroke. The best time to do this is in the spring, when the mother will be in the den with her still-helpless young. Also, the soil will likely be damp, which helps a lot. You should definitely follow the directions on the package, but the way fumigants usually work is that you cover all but one exit, set off the smoke bomb, shove it down the hole, and quickly cover it up. Check back in a day or two for any sign of activity; if you find some, do it again or consider a different control method. Do not do this if the hole is next to your house or if there is any risk of fire.

Poisons are a last resort. I am not a fan of poisons because it is difficult to control what will eat the poison in the wild. You are also left with the issue of where the groundhog will die and how bad it will smell if that is somewhere under your house. Or, if it dies outside somewhere, what will be affected by eating the carcass? Where does it end? If you want to use poison, you're on your own.

Use live traps. This is a good option for those of you not too keen on killing things. Try jamming the door open and leaving bait inside for the taking a couple of times so the groundhog gets used to it. Then set the trap normally and you've got your groundhog (or a neighborhood cat). Now what? The relocation is just as important; you need to choose a place that is far away from other humans and can likely support a groundhog. Good luck.

Predator urine. The idea is simple: form a perimeter around the area you want to protect. If the groundhog doesn't recognize the smell as a natural predator, it is probably not going to work too well, so look for brands that contain wolf and bobcat urine. Apply regularly, or as the manufacturer recommends. Remember, if it rains, the urine has probably washed away.

Repellents. Another popular method involves pepper-based repellents, which deter groundhogs by tasting horrible and burning their mucous membranes. You can lay a perimeter of powdered cayenne pepper or just apply it to the things you want spared in your garden. Be sure to wash your vegetables off before using them (which you should be doing anyway).
In my next few blogs, I will provide an overview of Voltage Source Converter (VSC) HVDC technology and discuss its suitability for Smart Grid operation and control. VSC HVDC is based upon transistor technology and was developed in the 1990s. The switching element is the Insulated Gate Bipolar Transistor (IGBT), which can be switched on and off by applying a suitable voltage to the gate (steering electrode). Because of the larger number of switching operations, and the nature of the semiconductor devices themselves, the converter losses are generally higher than those of HVDC classic converters. VSC HVDC is commonly used with underground or submarine cables with a transfer capacity in the range of 10–1000 MW, and is suitable to serve as a connection to a wind farm or to supply a remote load. VSC HVDC technology has very fast steering and control functionality and is suitable for meshed networks. It is characterised by the compactness of the converter stations, due to the reduced need for AC harmonic filters and reactive power compensation. Power flow reversal in VSC systems is achieved by reversal of the current, whereas in HVDC classic systems the voltage polarity has to change.
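In sign-convention terms (a one-line sketch of mine, not from the original post), the transmitted DC power is

$$P_{\mathrm{dc}} = U_{\mathrm{dc}} \, I_{\mathrm{dc}},$$

so reversing either factor reverses the power flow. A VSC holds the DC voltage polarity fixed (it is set by the converter capacitors) and reverses $I_{\mathrm{dc}}$, while a thyristor-based HVDC classic link has a fixed current direction through its valves and must flip the polarity of $U_{\mathrm{dc}}$ instead.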
An important consequence of this voltage source behavior is the ability to use cheaper and easier-to-install XLPE cables, instead of the mass-impregnated cables that are needed for HVDC classic. Currently, only twelve VSC HVDC projects are in service. A few examples: Estlink, which connects Estonia to Finland (350 MW), and BorWin1, which connects an offshore wind farm to Northern Germany (400 MW), are both equipped with ±150 kV submarine cables, while the Trans Bay project in California (400 MW) consists of 90 km of ±200 kV submarine cable. Most projects use submarine cable, but some include long lengths of underground cable, such as Murraylink (220 MW, 177 km of underground cable) and Nord E.On 1 (400 MW, 75 km of underground cable). The 500 MW East-West interconnector between Ireland and Great Britain, operating at ±200 kV, is scheduled to go into service in 2012. A 2000 MW, 65 km, ±320 kV cable interconnector between Spain and France, part of the Trans European Network, is scheduled for commissioning in 2013, and will represent the highest power rating for a VSC HVDC system installed at that time. Make sure to check back next Tuesday for my next blog on the comparison between HVDC classic and VSC HVDC.

By: Peter Vaessen
The Operations Layer defines the operational processes and procedures necessary to deliver Information Technology (IT) as a Service. This layer leverages IT Service Management concepts that can be found in
prevailing best practices such as ITIL and MOF. The main focus of the Operations Layer is to execute the business requirements defined at the Service Delivery Layer. Cloud-like service attributes
cannot be achieved through technology alone and require a high level of IT Service Management maturity.

The Change Management process is responsible for controlling the life cycle of all changes. The
primary objective of Change Management is to eliminate or at least minimize disruption while desired changes are made to services. Change Management focuses on understanding and balancing the cost and
risk of making the change versus the benefit of the change to either the business or the service. Driving predictability and minimizing human involvement are the core principles for achieving
a mature Service Management process and ensuring changes can be made without impacting the perception of continuous availability. Changes fall into two categories: Standard (Automated) Changes and Non-Standard (Mechanized) Changes. It is important to note that a record of all changes must be maintained, including Standard Changes that have been automated. The automated process for Standard Changes should include the creation and population of the change record per standard policy to ensure auditability, as the sketch below illustrates. Automating changes also reinforces the core principles of predictability and minimized human involvement.
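A minimal sketch of such a policy hook, using hypothetical names throughout (none of these APIs come from the source):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from uuid import uuid4

class ChangeLog:
    """Stand-in for the ITSM change database (hypothetical)."""
    def __init__(self):
        self.records = {}

    def save(self, record):
        self.records[record.change_id] = record

change_db = ChangeLog()

@dataclass
class ChangeRecord:
    """Audit record that must exist for every change, automated or not."""
    change_id: str
    category: str        # "standard-automated" or "non-standard-mechanized"
    description: str
    requested_by: str
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())
    status: str = "logged"

def execute_standard_change(description, requested_by, automation):
    """Run an automated Standard Change only after its change record
    has been created and populated per standard policy."""
    record = ChangeRecord(str(uuid4()), "standard-automated",
                          description, requested_by)
    change_db.save(record)           # record first, change second
    try:
        automation()                 # the pre-approved, scripted change
        record.status = "completed"
    except Exception:
        record.status = "failed"     # failures remain auditable too
        raise
    finally:
        change_db.save(record)
    return record

# Example: the automation is any pre-approved script.
execute_standard_change("Add VM to web tier", "fabric-scheduler",
                        lambda: print("vm provisioned"))
```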
The Service Asset and Configuration Management process is responsible for maintaining information on the assets, components, and infrastructure needed to provide a service. Critical configuration data for each component, and its relationship to other components, must be accurately captured and maintained.
This configuration data should include past and current states and future-state forecasts, and be easily available to those who need it. Mature Service Asset and Configuration Management processes are necessary
for achieving predictability. A virtualized infrastructure adds complexity to the management of Configuration Items (CIs) due to the transient nature of the relationship between guests and hosts in the infrastructure.
How is the relationship between CIs maintained in an environment that is potentially changing very frequently? A service comprises software, platform, and infrastructure layers. Each layer provides a level of
abstraction that is dependent on the layer beneath it. This abstraction hides the implementation and composition details of the layer. Access to the layer is provided through an interface and
as long as the fabric is available, the actual physical location of a hosted VM is irrelevant. To provide Infrastructure as a Service (IaaS), the configuration and relationship of the
components within the fabric must be understood, whereas the details of the configuration within the VMs hosted by the fabric are irrelevant. The Configuration Management System (CMS) will need to be partitioned, at a minimum, into physical and logical CI layers. Two Configuration Management Databases (CMDBs) might be used: one to manage the physical CIs of the fabric (facilities, network, storage, hardware, and hypervisor) and the other to manage the logical CIs (everything else). The CMS can be further partitioned by layer, with separate management of the infrastructure, platform, and software layers. The benefits and trade-offs of each approach are summarized in Table 2.

Table 2: Configuration Management System Options (CMS partitioned by layer versus CMS partitioned into physical and logical CIs)

Partitioning logical
and physical CI information allows for greater stability within the CMS, because CIs will need to be changed less frequently. This means less effort will need to be expended to
accurately maintain the information. During normal operations, mapping a VM to its physical host is irrelevant. If historical records of a VM’s location are needed (for example, for auditing or Root Cause Analysis), they can be traced through change logs. The physical or fabric CMDB will need to include a mapping of fault domains, upgrade domains, and Live Migration domains.
The relationship of these patterns to the infrastructure CIs will provide critical information to the Fabric Management System; a rough sketch of such a mapping follows below.
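As an illustration only (these structures are hypothetical, not from the reference architecture), a partitioned CMS might model physical and logical CIs like this:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PhysicalHostCI:
    """Physical (fabric) CI, carrying the domain mappings the
    Fabric Management System needs."""
    ci_id: str
    model: str
    fault_domain: str      # shared power/network/cooling failure scope
    upgrade_domain: str    # maintenance and patching group
    migration_domain: str  # Live Migration compatibility boundary

@dataclass
class VirtualMachineCI:
    """Logical CI. Its current host is deliberately not stored here;
    historical placement is traced through change logs instead."""
    ci_id: str
    service: str
    template_version: str

# The fabric CMDB records the physical topology...
host = PhysicalHostCI("host-017", "vendor-model-x", "FD-2", "UD-1", "LM-A")
# ...while a separate logical CMDB tracks everything else.
vm = VirtualMachineCI("vm-4711", "web-frontend", "gold-os-2.3")
```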
The Release and Deployment Management processes are responsible for ensuring that approved changes to a service can be built, tested, and deployed to meet specifications with minimal disruption to the service and production environment. Where Change Management is based on the
approval mechanism (determining what will be changed and why), Release and Deployment Management will determine how those changes will be implemented. The primary focus of Release and Deployment Management is
to protect the production environment. The less variation there is in the environment, the greater the level of predictability and, therefore, the lower the risk of causing harm when
new elements are introduced. The concept of homogenization of physical infrastructure is derived from this predictability principle. If the physical infrastructure is completely homogenized, there is much greater predictability in
the release and deployment process. While complete homogenization is the ideal, it may not be achievable in the real world. Homogenization is a continuum. The closer an environment gets to
complete homogeneity, the more predictable it becomes and the fewer the risks. Full homogeneity means not only that identical hardware models are used, but also that all hardware configuration is identical as
well. When complete hardware homogeneity is not feasible, strive for configuration homogeneity wherever possible.

Figure 2: Homogenization Continuum

The Scale Unit concept drives predictability in Capacity Planning and agility in
the release and deployment of physical infrastructure. The hardware specifications and configurations have been pre-defined and tested, allowing for a more rapid deployment cycle than in a traditional data center.
Similarly, known quantities of resources are added to the data center when the Capacity Plan is triggered. However, when the Scale Unit itself must change (for example, when a vendor
retires a hardware model), a new risk is introduced to the private cloud. There will likely be a period where both n and n-1 versions of the Scale Unit exist
in the infrastructure, but steps can be taken to minimize the risk this creates. Work with hardware vendors to understand the life cycle of their products and coordinate changes from
multiple vendors to minimize iterations of the Scale Unit change. Also, upgrading to the new version of the Scale Unit should take place one Fault Domain at a time wherever
possible. This ensures that if an incident occurs with the new version, it can be isolated to a single Fault Domain. A sketch of such a rollout loop follows below.
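A rough sketch of that one-Fault-Domain-at-a-time rollout (hypothetical helper functions; the source describes the principle, not an implementation):

```python
# Stubs standing in for fabric tooling; the names are illustrative,
# not from the source document.
def drain_workloads(domain: str) -> None: ...
def upgrade_hardware(domain: str, version: str) -> None: ...
def passes_health_checks(domain: str) -> bool: return True
def roll_back(domain: str) -> None: ...
def restore_workloads(domain: str) -> None: ...

def roll_out_scale_unit(new_version: str, fault_domains: list[str]) -> None:
    """Upgrade one Fault Domain at a time, so an incident with the
    new Scale Unit version stays isolated to a single Fault Domain."""
    for domain in fault_domains:
        drain_workloads(domain)                # move VMs off the domain first
        upgrade_hardware(domain, new_version)
        if not passes_health_checks(domain):
            roll_back(domain)                  # contain the blast radius
            raise RuntimeError(f"{new_version} failed validation in {domain}")
        restore_workloads(domain)              # then proceed to the next domain

roll_out_scale_unit("scale-unit-n", ["FD-1", "FD-2", "FD-3"])
```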
Homogenization of the physical infrastructure means consistency and predictability for the VMs regardless of which physical host they reside on. This concept can be extended beyond the production environment. The fabric can be partitioned into development, test,
and pre-production environments as well. Eliminating variability between environments enables developers to more easily optimize applications for a private cloud and gives testers more confidence that the results reflect the
realities of production, which in turn should greatly improve testing efficiency. The virtualized infrastructure enables workloads to be transferred more easily between environments. All VMs should be built from a
common set of component templates housed in a library, which is used across all environments. This shared library includes templates for all components approved for production, such as VM images,
the gold OS image, server role templates, and platform templates. These component templates are downloaded from the shared library and become the building blocks of the development environment. From development,
these components are packaged together to create a test candidate package, in the form of a virtual hard disk (VHD), that is uploaded to the library. This test candidate package
can then be deployed by booting the VHD in the test environment. When testing is complete, the package can again be uploaded to the library as a release candidate package
for deployment into the pre-production environment, and ultimately into the production environment. A sketch of this promotion flow appears below.
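Under purely illustrative names (the source defines the stages, not any code), the promotion flow might be modeled like this:

```python
# Stages a package moves through, per the shared-library flow above.
PIPELINE = ["development", "test", "pre-production", "production"]

class SharedLibrary:
    """Stand-in for the template/package library (illustrative only)."""
    def __init__(self):
        self.packages: dict[str, str] = {}   # VHD name -> current stage

    def upload(self, vhd_name: str, stage: str) -> None:
        self.packages[vhd_name] = stage

    def promote(self, vhd_name: str) -> str:
        """Move a VHD package to the next environment in the pipeline."""
        current = self.packages[vhd_name]
        nxt = PIPELINE[PIPELINE.index(current) + 1]
        self.packages[vhd_name] = nxt
        return nxt

library = SharedLibrary()
library.upload("service-a-1.4.vhd", "development")    # test candidate
library.promote("service-a-1.4.vhd")                  # -> test
library.promote("service-a-1.4.vhd")                  # -> pre-production
library.promote("service-a-1.4.vhd")                  # -> production
```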
Since workloads are deployed by booting a VM from a VHD, the Release Management process occurs very quickly through the transfer of VHD packages to different environments. This also allows for rapid rollback should the deployment fail; the current release can be deleted and the VM can
be booted off the previous VHD. Virtualization and the use of standard VM templates allow us to rethink software updates and patch management. As there is minimal variation in the
production environment and all services in production are built with a common set of component templates, patches need not be applied in production. Instead, they should be applied to the
templates in the shared library. Any services in production using that template will require a new version release. The release package is then rebuilt, tested, and redeployed, as shown below.
Figure 3: The Release Process
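Continuing the illustrative model above (hypothetical names throughout), patches land on the library template, and every production service built from it gets a new version release rather than an in-place patch:

```python
from dataclasses import dataclass, field

@dataclass
class Template:
    name: str
    version: int = 1
    patches: list[str] = field(default_factory=list)

    def apply_patch(self, patch_id: str) -> None:
        """Patch the library template, never the running instances."""
        self.patches.append(patch_id)
        self.version += 1

def rebuild_and_release(template: Template, services: list[str]) -> None:
    """Every production service built from the template is rebuilt,
    tested, and redeployed as a new version release."""
    for service in services:
        package = f"{service}-tpl{template.version}.vhd"
        # build -> automated tests -> redeploy (stubbed for the sketch)
        print(f"release {package}: built, tested, deployed")

gold_os = Template("gold-os")
gold_os.apply_patch("KB-critical-001")   # e.g., an exploitable vulnerability
rebuild_and_release(gold_os, ["web-frontend", "billing"])
```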
This may seem counter-intuitive for a critical patch scenario, such as when an exploitable vulnerability is exposed. But with virtualization technologies and automated test scripts, a new version of a service can be built, tested, and deployed quite rapidly. Variation can also be reduced through standardized, automated test scenarios. While not every test scenario can
or should be automated, tests that are automated will improve predictability and facilitate more rapid test and deployment timelines. Test scenarios that are common for all applications, or the ones
that might be shared by certain application patterns, are key candidates for automation. These automated test scripts may be required for all release candidates prior to deployment and would further reduce variation in the production environment.

Knowledge Management is the process of gathering, analyzing, storing, and sharing knowledge and information within an organization. The goal of Knowledge Management is to ensure that the right people have access to the information they need to maintain a private cloud. As operational knowledge expands and matures, the ability to
intelligently automate operational tasks improves, providing for an increasingly dynamic environment. An immature approach to Knowledge Management costs organizations in terms of slower, less-efficient problem solving. Every problem or new
situation that arises becomes a crisis that must be solved. A few people may have the prior experience to resolve the problem quickly and calmly, but their knowledge is not
shared. Immature knowledge management creates greater stress for the operations staff and usually results in user dissatisfaction with frequent and lengthy unexpected outages. Mature Knowledge Management processes are necessary for
achieving a service provider’s approach to delivering infrastructure. Past knowledge and experience is documented, communicated, and readily available when needed. Operating teams are no longer crisis-driven as service-impacting events grow
less frequent and are quickly resolved when they do occur. When designing a private cloud, development of the Health Model will drive much of the information needed for Knowledge Management.
The Health Model defines the ideal states for each infrastructure component and the daily, weekly, monthly, and as-needed tasks required to maintain this state. The Health Model also defines unhealthy
states for each infrastructure component and actions to be taken to restore their health. This information will form the foundation of the Knowledge Management database. Aligning the Health Model with
alerts allows these alerts to contain links to the Knowledge Management database describing the specific steps to be taken in response to the alert. This will help drive predictability as
a consistent, proven set of actions will be taken in response to each alert. The final step toward achieving a private cloud is the automation of responses to each alert
as defined in the Knowledge Management database. Once these responses are proven successful, they should be automated to the fullest extent possible. It is important to note, though, that automating
responses to alerts does not make them invisible and forgotten. Even when alerts generate a fully automated response they must be captured in the Service Management system. If the alert
indicates the need for a change, the change record should be logged. Similarly, if the alert is in response to an incident, an incident record should be created. These automated
workflows must be reviewed regularly by Operations staff to ensure the automated action achieves the expected result. Finally, as the environment changes over time, or as new knowledge is
gained, the Knowledge Management database must be updated along with the automated workflows that are based on that knowledge. A sketch of how such an automated, but still recorded, alert response might be wired together follows below.
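All names here are hypothetical; the source mandates the behavior (an automated response that still produces a change or incident record), not this design:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Alert:
    alert_id: str
    kind: str          # e.g. "capacity-threshold", "host-unhealthy"
    component: str

# Knowledge Management database: maps each alert kind to its proven,
# documented response (plain callables for the sketch).
KM_DATABASE: dict[str, Callable[[Alert], None]] = {}

service_management_log: list[str] = []   # stand-in ITSM record store

def handle_alert(alert: Alert) -> None:
    """Run the automated response, but never invisibly: every automated
    action is still captured as a change or incident record."""
    response = KM_DATABASE.get(alert.kind)
    if response is None:
        # No proven response yet: open an incident for human triage.
        service_management_log.append(f"incident opened: {alert.alert_id}")
        return
    response(alert)
    # Automated responses still create the appropriate record.
    service_management_log.append(
        f"change logged: {alert.alert_id} auto-remediated on {alert.component}")

KM_DATABASE["host-unhealthy"] = lambda a: print(f"migrating VMs off {a.component}")
handle_alert(Alert("a-42", "host-unhealthy", "host-017"))
```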
The goal of Incident Management is to resolve events that are impacting, or threaten to impact, services as quickly as possible with minimal disruption. The goal of Problem Management is to identify and resolve the root causes of incidents that have occurred.